Dramatic changes in the world resulting from the end of the Cold War and the dissolution of the Soviet Union have fundamentally altered the United States’ security needs. In March 1993, DOD initiated a comprehensive review to define and redesign the nation’s defense strategy, force structure, modernization, infrastructure, and budgets “from the bottom up.” The report of the Bottom-Up Review, issued in October 1993, concluded that DOD could reduce its forces and infrastructure from a posture designed to meet a global Soviet threat to one that focuses on potential regional conflicts. In our review of the 1995 FYDP, the first FYDP to reflect the implementation of the Bottom-Up Review strategy, we concluded that DOD’s major planning assumptions relied too heavily on optimistic cost estimates and potential savings. As a result, DOD had not gone far enough to meet economic realities, leaving its new plan with more programs than the proposed budgets would support. This included approximately $20 billion in overprogramming, which DOD identified in the 1995 FYDP as undistributed future adjustments. The 1995 FYDP, which totaled $1,240 billion, represented DOD’s 5-year program plan through fiscal year 1999. The 1996 FYDP, which totals $1,544 billion, covers the 6-year period from fiscal year 1996 through fiscal year 2001. The 1996 plan overlaps the 1995 plan for the years 1996-99. Table 1 compares the two plans by primary appropriation account. The shaded area represents the years common to both plans. Our analysis of the two FYDPs shows that during the 4 common years, the budget increases by about $3 billion annually. In addition, DOD reduced the 1996 FYDP to account for the $20 billion in undistributed future adjustments included in the 1995 FYDP. These reductions were made primarily in the procurement account. The largest changes from one year to the next in the 1996 FYDP occur during the last 2 years of the plan, when the budget is projected to increase by about $10 billion from 1999 to 2000 and by another $10.4 billion from 2000 to 2001. This represents about a 1-percent real increase after inflation for those years. According to the Secretary of Defense, the 1996 FYDP emphasizes readiness and quality-of-life programs. As such, the Secretary increased the budgeted amounts for the operation and maintenance, military personnel, and family housing accounts from the 1995 FYDP to the 1996 FYDP, as shown in table 1. The following sections discuss some of the more significant changes in each of the primary appropriation accounts. The 1995 FYDP proposed holding military pay raises below the amount included in current law, about 1.6 percent versus 2.6 percent. According to DOD, the 1996 FYDP funds the full military pay raises provided for under law through 1999. About $7.3 billion of the $8.7 billion of additional funds proposed for the military personnel account is to cover the planned pay raises. Table 2 shows a comparison of the military personnel account in the 1995 and 1996 FYDPs. As table 3 shows, the operation and maintenance account is projected to increase by a total of about $10 billion for the common years of the 1995 and 1996 FYDPs. The budgeted amounts for many operation and maintenance programs changed from the 1995 to the 1996 FYDP, resulting in a net increase of about $10 billion during the 4 common years, 1996-99. Our review shows that the largest increases were in the base operations and management headquarters functions. 
These functions include child care and development, family centers, base communications, real property services, environmental programs, and other infrastructure-related activities. For example, funding for Army base operations and management headquarters, including maintenance and repair activities, shows a net increase of about $3 billion. Similarly, the Navy’s base operations, operations support, and management headquarters activities show a net increase of over $2 billion. Similar Air Force accounts show a net increase in these functions of about $2 billion. The 1995 FYDP contained undistributed future adjustments of about $20 billion. Because of the magnitude of the decrease in the procurement account from the 1995 to the 1996 FYDP, it is evident that most of these adjustments were taken from the procurement account. As table 4 shows, the procurement account decreased by almost $27 billion for the common years in the 1995 and 1996 FYDPs. DOD decreased the procurement account in the 4 common years by stretching out the planned buys for some systems to the year 2000 and beyond and by reducing the total acquisition quantities for others. For example, according to the 1995 FYDP, Defense planned to procure one LPD-17 amphibious ship in 1996, two in 1998, and two in 1999. This procurement schedule slipped in the 1996 FYDP to one ship in 1998, two in 2000, and two in 2001. Also, the F-22 procurement program was slipped 1 year, so that the 12 aircraft that were to be procured in 1999, according to the 1995 FYDP, are now programmed to be procured in 2000. The total planned procurement quantities were reduced for other programs, including the F/A-18C/D fighter aircraft and the Navy’s Tomahawk missile. Appendix I shows 14 of the more significant procurement program deferrals or reductions relative to last year’s FYDP. The 14 programs account for about $14.7 billion, or 54 percent, of the approximately $27 billion in reductions to the procurement account. The decrease in procurement dollars during the 4 common years of the 1995 and 1996 FYDPs comes on top of an already steep decline in procurement that began in the mid-1980s. The 1996 procurement budget request is $39.4 billion, which, when adjusted for inflation, represents a decline of 71 percent from fiscal year 1985. The implication of this trend is that future years’ budgets will eventually have to accommodate a recapitalization of equipment and weapon systems. DOD plans to reverse this trend and increase its procurement budgets starting in fiscal year 1997. Figure 1 shows the sharp decline in the procurement account from fiscal years 1985 to 1996 and, as indicated by the dotted line, DOD’s proposed increase from fiscal years 1997 through 2001. (Budget authority is the authority to incur legally binding obligations of the government that will result in immediate or future outlays. Most Defense budget authority is provided by Congress in the form of enacted appropriations.) According to the Secretary of Defense, future modernization funds will come from savings achieved through infrastructure reductions and acquisition reforms and from larger future Defense budgets. Significant spending increases are planned in the last 2 years of the 1996 FYDP. Specifically, procurement funding estimates for 2000 and 2001 are 15 and 24 percent greater, respectively, than the 1999 estimate. Congressional action may increase near-term funding for defense, which could mitigate the need for DOD to increase out-year budgets. 
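The 71-percent figure is a constant-dollar comparison: the fiscal year 1985 procurement level is restated in fiscal year 1996 dollars before being compared with the $39.4 billion request. The short sketch below illustrates the arithmetic; the fiscal year 1985 amount and the cumulative inflation factor are illustrative assumptions, not figures drawn from the FYDP.

```python
# Illustrative sketch of the constant-dollar comparison behind the reported 71-percent
# real decline in procurement. The FY1985 amount and the inflation factor are assumed
# values chosen for illustration; only the FY1996 request ($39.4 billion) is from the report.
fy1996_request = 39.4            # FY1996 procurement budget request, $ billions (from the report)
fy1985_nominal = 96.8            # assumed FY1985 procurement budget authority, $ billions
inflation_1985_to_1996 = 1.40    # assumed cumulative price growth, FY1985 -> FY1996

fy1985_in_1996_dollars = fy1985_nominal * inflation_1985_to_1996
real_decline = 1 - fy1996_request / fy1985_in_1996_dollars
print(f"Real decline since FY1985: {real_decline:.0%}")  # ~71% with these assumed inputs
```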
The June 1995 Concurrent Resolution on the Budget for Fiscal Year 1996 includes over $24 billion more for defense than the President’s budget for fiscal years 1996-2001. The additional funds are expected, in part, to lessen the need for DOD to reduce or defer weapon modernization programs to meet other near-term readiness requirements. Assuming the funds are appropriated, Congress will specify how defense is to spend some of the added funds, but DOD may have an opportunity to restore some programs that were reduced or deferred in the 1996 FYDP. The Concurrent Resolution on the Budget is discussed further in a later section. As table 5 shows, the research, development, test, and evaluation account increased by $1.6 billion during the common years of the 1995 and 1996 FYDPs. The budgeted amounts for many research and development programs changed from the 1995 FYDP to the 1996 FYDP. Two programs that are projected to receive some of the largest funding increases over the 1995-99 period are special classified programs, which increased by about $1.8 billion, and the F-22 advanced fighter aircraft engineering and manufacturing development, which increased by about $700 million. Two programs that are budgeted at substantially lower levels are the defense reinvestment program, which decreased by about $1 billion, and the Comanche helicopter development program, which decreased by about $700 million. Overall, the comparison of the 1995 and 1996 FYDPs for the common years 1996-99 shows that programs in the latter stages of development are receiving increased funding, while those in the earlier stages of development are receiving less. For example, programs in demonstration and validation, engineering and manufacturing development, and operational systems development increased by about $7.1 billion, while programs in basic research, exploratory development, and advanced development decreased by about $5.4 billion. A large part of the shift reflected the movement of ballistic missile defense programs from earlier to later stages of development. As table 6 shows, the 1996 FYDP budgets less for military construction than was planned in the 1995 FYDP. The table shows that although the biggest reduction is projected to occur in fiscal year 1996, the funds continue to decrease through 1998, increase slightly in 1999, and drop below $4 billion in 2000 and 2001. Table 7 shows that, over the common years of the 1995 and 1996 FYDPs, the family housing account increases by about $2.5 billion. According to DOD, worldwide military housing is inadequate and needs to be improved. Most of the funding increases in the 1996 FYDP are for operation and maintenance, new construction, and improvements to DOD’s family housing. DOD anticipates that force structure reductions and the realignment and closure of unneeded military bases and facilities resulting from the four rounds of closures since 1988 will produce substantial savings. Our analysis of the 1996 FYDP shows that savings that have accrued or are expected to accrue from the base closings and force reductions appear to have been offset by increased infrastructure funding requirements, primarily for base operations and management headquarters functions and quality-of-life programs. Thus, the proportion of infrastructure funding in the total defense budget in 2001 is expected to be about the same as it was reported for fiscal year 1994 in DOD’s Bottom-Up Review report. 
DOD stated in its Bottom-Up Review report that $160 billion, or approximately 59 percent, of its total obligational authority for fiscal year 1994 was required to fund infrastructure activities. These activities include logistics support; medical treatment and facilities; personnel costs, including a wide range of dependent support programs; formal training; and installation support such as base operations, acquisition management, and force management. The Bottom-Up Review report noted that a key defense objective was to reduce this infrastructure without harming readiness. Figure 2 shows a breakdown of these infrastructure categories for fiscal year 1994 as displayed in DOD’s report. Using the infrastructure categories identified by DOD, we calculated the amount of infrastructure funding for fiscal years 1995 through 2001. Table 8 shows that, on the basis of current program plans, infrastructure funding (as a percentage of DOD’s total budget) stays relatively stable through 2001 and shows no improvement over the 59-percent infrastructure level DOD reported for fiscal year 1994. According to DOD’s Bottom-Up Review report, approximately 40 percent of infrastructure funding, such as that for training, supply, and transportation, is tied directly to force structure and would be expected to decline with force structure reductions. Historically, savings resulting from force structure reductions lag the reductions by a few years. On the basis of this, and because DOD’s planned drawdown of forces is essentially complete in fiscal year 1996, the 1996 FYDP should begin to reflect some significant infrastructure savings. However, FYDP estimates include the costs of new requirements as well as anticipated savings. Our analysis indicates that increases in personnel, operation and maintenance, research and development, and family housing, which include increases in infrastructure costs, appear to offset most planned infrastructure savings through 2001. As a result, the 1996 FYDP does not show the decline in the proportion of infrastructure funding that might be expected. The concurrent budget resolution approved by both the Senate and the House in June 1995 anticipates $35.6 billion more funding for national defense over the 1996-99 period than the President’s budget request. However, as shown in table 9, the resolution would reduce the President’s proposed budgets for 2000-2001 by a total of $11.4 billion. The net effect of these adjustments is a $24.2-billion increase over the period. These estimates include funding for DOD military, atomic energy defense activities, and defense-related activities. According to the conference agreement on the budget resolution, most of the increase for DOD in 1996 is assumed to be used for the procurement of weapons and research and development activities. For the period 1997 through 2001, budget authority increases are assumed to be split equally between procurement and operation and maintenance. In providing additional defense funds, it is the intent of the conferees to lessen the need for decisionmakers to sacrifice future readiness to meet current readiness requirements. When the funds are appropriated, Congress will undoubtedly specify how DOD is to spend some of the added funds. For example, the House bill on the defense authorization act for fiscal year 1996 would add programs such as the B-2 bomber, which DOD did not request and for which DOD would have to find funding in the future. 
Also, the Senate’s 1996 authorization bill would significantly increase DOD’s proposed funding for a missile defense system. In addition, DOD may have an opportunity to restore some of the programs that it reduced or deferred to the year 2000 and beyond. Under section 221 of title 10, United States Code, “[t]he Secretary of Defense shall submit to Congress each year, at or about the time that the President’s budget is submitted . . . a future-years defense program . . . reflecting the estimated expenditures and proposed appropriations included in that budget.” The provision requires that program and budget information submitted to Congress by DOD be consistent with the President’s budget. The President’s fiscal year 1996 budget was submitted to Congress on February 6, 1995. The 1996 FYDP was submitted to Congress on March 29, 1995, and was accompanied by a written certification by the Secretary of Defense that the FYDP and associated annexes satisfied the requirements of section 221 of title 10, United States Code. This certification was made after consultation with the DOD Inspector General. On the basis of our review, we consider the FYDP estimates to be consistent with the President’s budget submission. Therefore, in our opinion, the fiscal year 1996 FYDP was submitted in compliance with all applicable legislative requirements. In commenting on this report, DOD stated that we had fairly and accurately assessed the funding adjustments it made to balance the program plans for fiscal years 1996-2001. DOD also stated that the report correctly identified the fiscal implications of funding priorities and strategies that guided the preparation of the 1996-2001 program. We reported that infrastructure funding, as a proportion of the defense budget, is relatively constant from 1995 through 2001. DOD agreed and said that it would be incorrect to infer from this finding that the Department is failing to achieve savings from a smaller infrastructure and to apply them to higher-priority activities such as readiness, quality of life, and procurement. We agree with DOD in part. Our analysis shows that infrastructure savings that have occurred have been applied toward new infrastructure requirements, but not to weapons procurement or modernization in any appreciable amounts. DOD expressed concern that it may not be able to accelerate the procurement of programs that are already in the defense program if Congress directs that additional funding be used to acquire new programs with large out-year funding requirements. The full text of DOD’s comments is included as appendix II. To evaluate the major planning assumptions underlying DOD’s fiscal year 1996 FYDP, we interviewed officials in the Office of the DOD Comptroller, the Office of Program Analysis and Evaluation, the Office of Environmental Security, the Base Closure and Utilization Office, and the Congressional Budget Office. We examined a variety of DOD planning and budget documents, including the FYDP and associated annexes. We also reviewed the President’s fiscal year 1996 budget submission; the fiscal year 1996 concurrent budget resolution; our prior reports; and pertinent reports by the Congressional Budget Office, the Congressional Research Service, and others. To calculate the amount of infrastructure funding for fiscal years 1995 through 2001, we used the infrastructure definitions and categories provided by DOD. Results of our infrastructure analysis were provided to cognizant DOD officials within the Office of Program Analysis and Evaluation for validation and comment. 
Department officials stated the analysis was correct on the basis of the definitions and categories established in 1994. However, they also stated they were redefining some infrastructure activities and categories in ways that may change the results. DOD would not provide us with the details supporting these new categories during our review, so we were unable to evaluate them. To determine whether the FYDP submission complies with the law, we compared its content with the requirements established in section 221 of title 10 of the United States Code and section 1005 of the Defense Authorization Act for fiscal year 1995. We also reviewed references to the reporting requirement in various legislative reports to clarify congressional intent. Our work was conducted from March to August 1995 in accordance with generally accepted government auditing standards. We are providing copies of this report to appropriate House and Senate Committees; the Secretaries of Defense, the Air Force, the Army, and the Navy; and the Director, Office of Management and Budget. We will also provide copies to other interested parties upon request. If you have any questions concerning this report, please call me at (202) 512-3504. Major contributors to this report are listed in appendix III. Table I.1 shows 14 of the more significant procurement program changes for 1996-99 from the 1995 FYDP to the 1996 FYDP. The 14 programs account for about $14.7 billion, or 54 percent, of the approximately $27 billion in reductions to the procurement account. Table I.1: Selected Procurement Program Deferrals or Reductions for 1996-99 From the 1995 FYDP to the 1996 FYDP. (Notes to table I.1: Although DOD is to decide later this year whether to procure more than 40 C-17s or a commercial alternative, it has reduced the amount of funds available for this procurement by $1.3 billion over the 1996-99 period. Joint Air Force and Navy program. The funds are to be used for the procurement of modification parts.) Related GAO Products: DOD Budget: Selected Categories of Planned Funding for Fiscal Years 1995-99 (GAO/NSIAD-95-92, Feb. 17, 1995). Future Years Defense Program: Optimistic Estimates Lead to Billions in Overprogramming (GAO/NSIAD-94-210, July 29, 1994). DOD Budget: Future Years Defense Program Needs Details Based on Comprehensive Review (GAO/NSIAD-93-250, Aug. 20, 1993). Transition Series: National Security Issues (GAO/OCG-93-9TR, Dec. 1992). High Risk Series: Defense Weapons Systems Acquisition (GAO/HR-93-7, Dec. 1992). Weapons Acquisition: Implementation of the 1991 DOD Full Funding Policy (GAO/NSIAD-92-238, Sept. 24, 1992). Defense Budget and Program Issues Facing the 102nd Congress (GAO/T-NSIAD-91-21, Apr. 25, 1991). DOD Budget: Observations on the Future Years Defense Program (GAO/NSIAD-91-204, Apr. 25, 1991). Department of Defense: Improving Management to Meet the Challenges of the 1990s (GAO/T-NSIAD-90-57, July 25, 1990). DOD Budget: Comparison of Updated Five-Year Plan With President’s Budget (GAO/NSIAD-90-211BR, June 13, 1990). DOD’s Budget Status: Fiscal Years 1990-94 Budget Reduction Decisions Still Pending (GAO/NSIAD-90-125BR, Feb. 22, 1990). Status of Defense Forces and Five Year Defense Planning and Funding Implications (GAO/T-NSIAD-89-29, May 10, 1989). Transition Series: Defense Issues (GAO/OCG-89-9TR, Nov. 1988). Defense Budget and Program Issues: Fiscal Year 1989 Budget (GAO/T-NSIAD-88-18, Mar. 14, 1988). Underestimation of Funding Requirements in Five Year Procurement Plans (GAO/NSIAD-84-88, Mar. 12, 1984). 
Pursuant to a congressional request, GAO compared the Department of Defense's (DOD) fiscal year (FY) 1996 Future Years Defense Program (FYDP) with its FY 1995 FYDP, focusing on: (1) what major program adjustments DOD made from FY 1995 to FY 1996; (2) the implications of these changes for the future; and (3) whether the FY 1996 FYDP complies with statutory requirements. GAO found that: (1) the FY 1996 FYDP, which covers FYs 1996-2001, is considerably different from the 1995 FYDP, which covers FYs 1995-99; (2) the total program increased by about $12.6 billion in the 4 common years of both plans (FYs 1996-99); (3) approximately $27 billion in planned weapon system modernization programs for these 4 years have been eliminated, reduced, or deferred to the year 2000 and beyond; (4) the military personnel, operation and maintenance, and family housing accounts increased by over $21 billion during the common period and continue to increase to 2001 to support Defense's emphasis on readiness and quality-of-life programs; (5) the net effect is a more costly defense program, despite substantial reductions in DOD's weapon modernization programs between 1996 and 1999; (6) Defense plans to compensate for the decline in procurement during the early years of the 1996 FYDP by substantially increasing procurement funding in 2000 and 2001; (7) the Secretary of Defense plans to pay for the increased future modernization with a combination of savings from infrastructure reductions and acquisition reforms and from real budget growth; (8) GAO's analysis shows that the 1996 FYDP does not reflect reduced infrastructure costs, primarily because of funding increases for base operation and management headquarters functions and quality-of-life programs; however, the Concurrent Resolution on the Budget for Fiscal Year 1996 includes over $24 billion more for Defense than requested in the President's budget for FYs 1996-2001; (9) the additional budget amounts are expected, in part, to lessen the need for Defense to reduce or defer weapon modernization programs to meet other near-term readiness requirements; (10) assuming the funds are appropriated, Congress will specify how Defense is to spend some of the added funds; however, DOD may have an opportunity to restore some programs that were reduced or deferred to 2000 and beyond; and (11) the additional near-term funding could mitigate the need for DOD to increase out-year budgets.
FOIA establishes a legal right of access to government information on the basis of the principles of openness and accountability in government. Before FOIA’s enactment in 1966, an individual seeking access to federal records faced the burden of establishing a “need to know” before being granted the right to examine a federal record. FOIA established a “right to know” standard, under which an organization or person could receive access to information held by a federal agency without demonstrating a need or reason. The “right to know” standard shifted the burden of proof from the individual to a government agency and required the agency to provide proper justification when denying a request for access to a record. Any person, defined broadly to include attorneys filing on behalf of an individual, corporations, and organizations, can file a FOIA request. For example, an attorney can request labor-related workers’ compensation files on behalf of his or her client, and a commercial requester, such as a data broker that files a request on behalf of another person, may request a copy of a government contract. In response, an agency is required to provide the relevant record(s) in any readily producible form or format specified by the requester unless the record falls within a permitted exemption. (Appendix II describes the nine specific exemptions that can be applied to withhold, for example, classified, confidential commercial, privileged, privacy, and several types of law enforcement information.) The act also requires agencies to publish a regulation that informs the public about their FOIA process. The regulation is to include, among other things, provisions for expediting requests under certain circumstances. Over the past two decades, various amendments have been enacted and guidance issued to help improve agencies’ processing of FOIA requests. The 1996 e-FOIA amendments, among other things, sought to strengthen the requirement that federal agencies respond to a request in a timely manner and reduce their backlogged requests. In this regard, the amendments made a number of procedural changes, including providing a requester with an opportunity to limit the scope of the request so that it can be processed more quickly and requiring agencies to determine within 20 working days (an increase from the previously established time frame of 10 days) whether a request would be fulfilled. The e-FOIA amendments also authorized agencies to multi-track requests—that is, to process simple and complex requests concurrently on separate tracks to facilitate responding to a relatively simple request more quickly. In addition, the amendments encouraged online, public access to government information by requiring agencies to make specific types of records available in electronic form. In a later effort to reduce agencies’ backlogged FOIA requests, the President issued Executive Order 13392 in December 2005, which set forth a directive for citizen-centered and results-oriented FOIA. In particular, the order directed agencies to provide a requester with courteous and appropriate service and ways to learn about the FOIA process, the status of a request, and the public availability of other agency records. The order also instructed agencies to process requests efficiently, achieve measurable process improvements (including a reduction in the backlog of overdue requests), and reform programs that were not producing the appropriate results. 
Further, the order directed each agency to designate a senior official as the Chief FOIA Officer. This official is responsible for ensuring agency-wide compliance with the act by monitoring implementation throughout the agency; recommending changes in policies, practices, staffing, and funding; and reviewing and reporting on the agency’s performance in implementing FOIA to agency heads and to the Department of Justice. (These are referred to as Chief FOIA Officer reports.) The Department of Justice, which has overall responsibility for overseeing federal FOIA activities, issued guidance in April 2006 to assist federal agencies in implementing the executive order’s requirements for reviews and improvement plans. The guidance suggested several potential areas for agencies to consider when conducting a review, such as automated tracking of requests, automated processing and receipt of requests, electronic responses to requests, forms of communication with requesters, and systems for handling referrals to other agencies. The OPEN Government Act, which was enacted in 2007, also amended FOIA in several ways. For example, it codified in law the 2005 executive order’s requirement for agencies to have Chief FOIA Officers. It also required agencies to include additional statistics on timeliness in their annual reports. In addition, the act called for agencies to establish a system to track the status of a request. Further, in January 2009, the President issued two memoranda, Transparency and Open Government and Freedom of Information Act, both of which focused on increasing the amount of information made public by the government. In particular, the latter memorandum directed agencies to (1) adopt a presumption in favor of disclosure in all FOIA decisions, (2) take affirmative steps to make information public, and (3) use modern technology to do so. This echoed Congress’s finding, in passing the OPEN Government Act, that FOIA established a “strong presumption in favor of disclosure.” In September 2013, the Department of Justice issued guidance to assist federal agencies in implementing the memoranda and the OPEN Government Act; the guidance included procedures for agencies to follow when responding to FOIA requests. Specifically, the guidance discussed how requests are processed—from the point of determining whether an entity in receipt of a request is subject to FOIA, to responding to the review of an agency’s decision regarding a request on an administrative appeal. The guidance also included procedures on the expedited processing of FOIA requests. Agencies, including DOL, are generally required to respond to a FOIA request within 20 working days. A request may be received in writing or by electronic means. Once received, the request is processed through multiple phases, which include assigning a tracking number, searching for responsive records, processing records, and releasing records. Also, as relevant, agencies respond to administrative appeals and lawsuits filed as a result of their actions and decisions in addressing the FOIA requests. Figure 1 provides a simplified overview of the FOIA process, from the receipt of a request through responding to a lawsuit. As indicated above, during the intake phase of a typical agency FOIA process, a request is to be logged into the agency’s FOIA system and a tracking number assigned. The request is then to be reviewed by FOIA staff to determine its scope and level of complexity. 
The agency then typically sends a letter or e-mail to the requester acknowledging receipt of the request, with a unique tracking number that the requester can use to check the status of the request, and notifying the requester of estimated fees, if any. Next, FOIA staff begin the search to retrieve the responsive records by routing the request to the appropriate program office(s). This step may include searching and reviewing paper and electronic records from multiple locations and program offices. Agency staff then process the responsive records, which includes determining whether a portion or all of any record should be withheld based on statutory exemptions. If a portion or all of any record is the responsibility of another agency, FOIA staff may consult with the other agency or may send (“refer”) the document(s) to that other agency for processing. After processing and redaction, a request is reviewed for errors and to ensure quality. The documents are then released to the requester, either electronically or by mail. FOIA also provides requesters with the right to file an administrative appeal if they disagree with the agency’s decision. After an agency renders a decision and the requester files an administrative appeal, the agency has 20 working days to respond to the requester regarding the appeal. Further, FOIA allows requesters to challenge an agency’s final decision in federal court through a lawsuit. Specifically, if an agency fails to comply with the statutory time limits, including responding to requests within 20 working days, a requester has the right to consider the request denied and sue the agency to compel disclosure. A requester should generally exhaust his or her administrative remedies, such as filing an administrative appeal, before filing a lawsuit. In 2007, FOIA was amended to allow both requesters and agencies to contact the Office of Government Information Services, within the National Archives and Records Administration, to help resolve a dispute at any point in the FOIA process, including after filing an administrative appeal. Mediation also can be used as an alternative to litigation. Established in 1913, DOL has primary responsibility for overseeing the nation’s job training programs and for enforcing a variety of federal labor laws. The department’s mission is to foster, promote, and develop the welfare of wage earners, job seekers, and retirees of the United States; improve working conditions; advance opportunities for profitable employment; and assure work-related benefits and rights. The department administers its various mission responsibilities, including the processing of FOIA requests, through its 23 component offices. These components vary in mission and the types of records that they maintain. Table 1 provides details on each component’s mission and types of records maintained. DOL experienced an increase in the number of FOIA requests received every year from fiscal year 2010 through 2013, a decrease in the number of requests received from fiscal year 2013 through 2014, and an increase in fiscal year 2015. Specifically, the department reported receiving 17,398 requests in fiscal year 2010, and it reported receiving 18,755 requests in fiscal year 2013—a 7 percent increase. In fiscal year 2014, the department reported receiving 16,106 FOIA requests—a 14 percent decrease from fiscal year 2013. However, in fiscal year 2015, the reported number of requests received increased again, to 16,792. 
Further, the department processed an increased number of requests from fiscal years 2010 through 2012. It then processed a decreased number of requests during fiscal years 2013 and 2014, and an increased number of requests again in fiscal year 2015. Specifically, the department reported processing 17,625 requests in fiscal year 2010, and it reported processing 19,224 requests in fiscal year 2012—an 8 percent increase. While the department processed slightly fewer requests (19,175) in fiscal year 2013, it reported processing 16,715 FOIA requests in fiscal year 2014—a 13 percent decrease compared with fiscal year 2012. However, in fiscal year 2015, the reported number of requests processed increased to 17,104. The numbers of requests received and processed by the department from fiscal year 2010 through fiscal year 2015 are summarized in table 2. Responsibilities for managing and processing FOIA requests are handled by DOL’s 23 component offices. Within one of these components (the Office of the Solicitor), the Office of Information Services (OIS) serves as the department’s central FOIA office and has agency-wide responsibility for managing the program, including developing and issuing guidance to implement FOIA initiatives, providing training, and preparing required annual reports. However, DOL has not updated its regulation that is intended to inform the public of the department’s FOIA operations. The component offices manage their own processing and tracking of FOIA requests, relying on an automated system and a process for prioritizing their responses to requests, while appeals and lawsuits are centrally handled. However, while the department’s automated FOIA tracking system meets most statutory requirements, key recommended capabilities to enhance processing have not been implemented. In addition, while the components had provided timely responses to many of the FOIA requests, an estimated 24 percent of the requests were not responded to within the statutory time frame, and most components had not documented the rationale for these delays in the automated FOIA tracking system or notified requesters of the delayed responses. Further, most FOIA appeals had not been responded to within the statutory time frame of 20 working days. OIS was established within the Office of the Solicitor in fiscal year 2010, and the Solicitor serves as the Chief FOIA Officer. OIS is responsible for administering the department’s FOIA program, including coordinating and overseeing the components’ operations, providing training, and preparing the required annual reports on the department’s FOIA performance. In addition, this office has responsibility for processing certain types of requests and assisting in the coordination of requests involving multiple components. In carrying out its duties, the office develops and disseminates guidance on processing requests and implementing elements of the act; it also is responsible for developing the department’s FOIA regulation (discussed in more detail later in the report). For example, in October 2010, the office issued guidance to FOIA disclosure officers and coordinators addressing oversight; roles and responsibilities; and applying exemptions, fees, and fee waivers. In addition, in June 2013, it issued a bulletin to FOIA coordinators regarding steps for addressing requesters’ inquiries about requests and discussing the status of requests. 
A month later, in July 2013, the office issued a bulletin discussing time limits associated with processing requests, including guidance for time limit extensions during unusual circumstances. Further, in August 2013, OIS issued best practices guidance that provides direction to the department’s staff in responding to requests. The guidance outlines the nine stages of processing a request, as defined by the department, including processing administrative appeals and judicial reviews of litigation filed by requesters. (Appendix III provides additional details on the stages of the department’s process for handling FOIA requests.) Beyond developing and issuing guidance, OIS performs a number of oversight and coordination functions: Holds regular meetings with components. The office holds quarterly meetings with components to discuss their processing of FOIA requests, including plans to reduce backlogs, upcoming training, and best practices. For example, in a January 2015 quarterly meeting with all components’ FOIA coordinators, OIS discussed its plans to conduct administrative reviews covering various areas, such as timeliness and backlog reduction, rules associated with granting and denying requests, and multi-tracking requests; it also discussed the components’ FOIA staffing needs. Prepares and tracks processing metrics. The office also uses data within the department’s FOIA tracking system—the Secretary’s Information Management System for FOIA (SIMS-FOIA)—to provide a quarterly report to departmental leadership and the Department of Justice on the number of requests received, processed, and backlogged, among other reporting requirements. (SIMS-FOIA is discussed in greater detail later in this report.) Conducts reviews to assess components’ actions. OIS reviews components’ FOIA programs to assess their policies, procedures, and compliance with the act. For example, in fiscal year 2015, the office reviewed 14 of 22 components in areas such as FOIA exemptions, timeliness and backlog reduction, agency website and electronic reading rooms, and staffing resources for processing requests. OIS officials stated that they plan to complete reviews of the remaining 8 components by September 2016. The officials added that they assess the information gathered during these reviews to identify areas where the components could benefit from additional training and guidance. Provides training to employees. The department, through OIS, hosts a yearly training conference for employees with FOIA responsibilities. This conference addresses topics such as processing requests, responsibilities for searching for requests, applying exemptions, and assessing search fees. In addition, in February 2012, February and April 2013, and April 2014, the office held a series of targeted training sessions that addressed topics such as FOIA best practices in customer service and applying specific exemptions. Routes requests to components. OIS routes to the appropriate component(s) those requests sent to DOL via its department-wide e-mail address or that it receives when a requester is unsure which DOL component maintains records that are responsive to a request. The office also addresses those requests sent to the attention of the Solicitor/Chief FOIA Officer. In addition, it coordinates responses to requests that involve multiple components. FOIA requires federal agencies to publish regulations that govern and help inform the public of their FOIA operations. 
These regulations are to provide guidance on the procedures to be followed in making a request and on specific matters such as fees and expedited processing of requests. Toward this end, in May 2006, DOL, through the Office of the Solicitor, issued a regulation describing steps that individuals are required to follow in making requests, such as submitting a written request directly to the component that maintains the record. The regulation also explained the department’s processing of such requests, including charging fees for the requested records. However, since the issuance of this regulation in 2006, amendments to FOIA and related guidance have led to changes in the department’s processes that are not reflected in the regulation. These changes pertain to the OPEN Government Act of 2007 requirements that federal agencies have a FOIA Public Liaison, who is responsible for assisting in resolving disputes between the requester and the agency. Further, this act required federal agencies to establish a system to provide individualized tracking numbers for requests that will take longer than 10 days to process and to establish telephone or Internet service to allow requesters to track the status of their requests. In addition, the President’s FOIA memorandum on transparency and open government and the Attorney General’s FOIA guidelines of 2009 required that agencies take specific actions to ensure that the government is more transparent, participatory, and collaborative. Specifically, agencies are required to rapidly disclose information; increase opportunities for the public to participate in policymaking; and use innovative tools, methods, and systems to cooperate among themselves and across all levels of government. The department has taken actions consistent with these requirements. Specifically, in 2006, DOL implemented its SIMS-FOIA system to track and process requests. Further, the department implemented its FOIA public portal, which links to SIMS-FOIA and allows requesters to track the status of their requests through an Internet service using assigned request tracking numbers. The department also designated a FOIA Public Liaison in December 2007. Additionally, in response to the President’s FOIA memorandum and the Attorney General’s FOIA guidelines of 2009, the department in December 2011 directed all components to ensure transparency when responding to requests by not only disclosing information that the act requires to be disclosed, but also by making discretionary disclosures of information that will not result in foreseeable harm to an interest protected under FOIA. Nevertheless, while it has taken these actions, DOL has not revised its FOIA regulation to inform the public of the role of its public liaison, the department’s FOIA tracking system, and the availability of the FOIA public portal for tracking the status of requests. In discussing this matter, officials in the Office of the Solicitor stated that updating the regulation is on the department’s regulatory agenda and that, as of March 2016, a draft of the regulation was being circulated for internal review. However, these officials said they had not established a time frame for when the regulation would be finalized. Until the department finalizes an updated regulation reflecting changes in how it processes requests, it will lack an important mechanism for facilitating effective interaction with the public on the handling of FOIA requests. 
The processing of requests is decentralized among the department’s 23 components, with each component separately administering its own program. In this regard, each component has its own FOIA coordinator and full- and/or part-time staff assigned to process requests; is responsible for its FOIA library; and directly enters information in the department’s FOIA tracking system regarding the processing of its own requests. Similar to the process used across the federal government, once a request has been received and assigned to the appropriate component, the component carries out the processing, tracking, and reporting on the request. Most components do so using the department’s central FOIA tracking system. However, the components vary in aspects of their operations. For example, a number of the components are further decentralized, with requests assigned to and processed within multiple national, regional, and/or directorate offices (subcomponents) that make up the component. Specifically, once a request is received in a decentralized office, the FOIA processor located in the appropriate subcomponent office is responsible for responding to the request and populating the required information in the department’s central tracking system. Other components are centralized, with the processing of requests occurring within the one office rather than being assigned to a subcomponent office. Components also vary significantly in the number of requests received. For example, in fiscal year 2015, the Occupational Safety and Health Administration received 9,123 requests, while the Office of the Chief Financial Officer received 3 requests. Further, the components rely on varying numbers of full-time employees, as well as part-time and contractor employees, to manage and process the requests. For example, the Employment and Training Administration reported that it had 8 full-time employees and 9 part-time employees in fiscal year 2015. On the other hand, the Employee Benefits Security Administration reported it had no full-time employees and 3 part-time employees in the same year. Collectively, for fiscal year 2015, the 23 components reported having 40 full-time and about 154 full-time or part-time employees assigned to process FOIA requests. Table 3 summarizes the components’ processing structures, requests received in fiscal year 2015, and the number of employees that manage and process the requests. While requests are separately handled by the components, within the Office of the Solicitor, two offices—the Counsel for FOIA Appeals, Federal Records Act, and Paperwork Reduction Act and the Counsel for FOIA and Information Law—individually handle FOIA appeals and FOIA lawsuits. The Counsel for FOIA Appeals, Federal Records Act, and Paperwork Reduction Act is responsible for addressing administrative appeals when requesters disagree with the outcomes of their requests. To assist with tracking the appeals it receives, the office uses an automated system called the Matter Management System. Further, the Counsel for FOIA and Information Law is responsible for providing legal advice on processing requests and defending FOIA litigation. Various FOIA amendments and guidance call for agencies, such as DOL, to use automated systems to improve the processing and management of requests. 
In particular, the OPEN Government Act of 2007 amended FOIA to require that federal agencies establish a system to provide individualized tracking numbers for requests that will take longer than 10 days to process and establish telephone or Internet service to allow requesters to track the status of their requests. Further, the President’s January 2009 Freedom of Information Act memo instructed agencies to use modern technology to inform citizens about what is known and done by their government. In addition, FOIA processing systems, like all automated information technology systems, are to comply with the requirements of Section 508 of the Rehabilitation Act (as amended). This act requires federal agencies to make their electronic information accessible to people with disabilities. In accordance with the OPEN Government Act, DOL has implemented SIMS-FOIA to assist the department in tracking the requests received and processed. The system assigns unique tracking numbers for each request received, and tracks and measures the timeliness of the requests. Further, staff who process requests are able to include in the system the date the request was received by the first component that may be responsible for processing the request and the date the request was routed to and received by the appropriate component responsible for processing the request. Based on this information, the system then calculates the date by which the response is due to the requester, which is 20 working days from the date the request was received by the office responsible for its processing. In responding to our questionnaire, 22 of 23 components reported using SIMS-FOIA and provided documentation to demonstrate their use of the system to track requests. Due to its independent oversight role within the department, DOL’s Office of Inspector General stated that it does not use this system to track its requests. According to its FOIA Officer, the office has instead created a separate system that is similar to SIMS-FOIA—the Office of Inspector General FOIA Tracking System—to track its requests. Further, in accordance with the act, DOL implemented its FOIA public portal that links to SIMS-FOIA and allows requesters to track the status of their requests through an Internet service using assigned request tracking numbers. Specifically, requesters can access the public portal via the DOL website (http://www.dol.gov/foia) to obtain the status of their requests. The information provided by the portal includes dates on which the agency received the requests and estimated dates on which the agency expects to complete action on the requests. Nevertheless, while the department has taken these actions, it has not ensured that SIMS-FOIA and the online portal are compliant with requirements of Section 508 of the Rehabilitation Act. According to DOL officials in OIS, during a test performed by the department’s Office of Public Affairs, the online portal was determined to have accessibility issues. Specifically, the portal could not be easily accessed by those who were blind or had impaired vision. In addition, the 508 compliance tester in the department’s Office of the Chief Information Officer found that SIMS-FOIA was not accessible to vision impaired employees who need to use the system. 
With regard to this finding, the Office of the Chief Information Officer determined that, because the system is only used internally by DOL employees, it would fulfill the requirements of Section 508 by providing reasonable accommodations, such as large screen magnifiers and verbal description tools, to those affected employees needing access to the information contained in SIMS-FOIA. According to the Office of the Chief Information Officer, accommodations would be made on a case-by-case basis to address the employee’s specific needs. Further, OIS officials told us that the department is working to make the online portal and SIMS-FOIA compliant with the requirements of Section 508. However, OIS officials could not say by what date compliance with the requirements is expected to be achieved. Having systems that are compliant with Section 508 of the Rehabilitation Act (as amended) is essential to ensure that the department’s electronic information is accessible to all individuals, including those with disabilities. Beyond the requirements provided in law and guidance to develop automated systems to track FOIA requests, three federal agencies have collectively identified capabilities for systems that they consider to be best practices for FOIA processing. Specifically, in conjunction with the Department of Commerce and the Environmental Protection Agency, the National Archives and Records Administration’s Office of Government Information Services identified the following 12 capabilities of an automated system that it considers recommended best practices for FOIA processing: using a single, component-wide system for tracking requests; accepting the request online, either through e-mail or an online request form; multi-tracking requests electronically; routing requests to the responsible office electronically; storing and routing responsive records to the appropriate office electronically; redacting responsive records with appropriate exemptions applied electronically; calculating and recording processing fees electronically; allowing supervisors to review the case file to approve redactions and fee calculations for processing electronically; generating system correspondence, such as an e-mail or letter, with a requester; tracking appeals electronically; generating periodic reporting statistics, such as annual report and quarterly backlog data, used to develop reports; and storing and routing correspondence, such as letters or e-mails, between agencies and requesters. As of March 2016, DOL had implemented 7 of the 12 recommended best practices for SIMS-FOIA and the FOIA public portal. Specifically, the department had implemented the capabilities of a single tracking system, as well as capabilities for accepting requests through e-mail; multi-tracking requests; routing the request to the office responsible for processing the request; storing and routing correspondence with a requester; and generating periodic report statistics, such as the fiscal year FOIA annual report and quarterly report, that identify requests received, processed, and backlogged. In addition, as mentioned earlier, the department uses a separate FOIA appeals tracking system—the Matter Management System—to track appeals electronically. However, the department had not implemented 4 other recommended capabilities, and had partially implemented 1 capability. 
Specifically, the department’s automated FOIA tracking system lacked capabilities to store and route responsive records electronically, redact responsive records electronically, and review the case file to approve redactions and fee calculations electronically. Further, the department had not implemented the capability to generate correspondence to a requester. In addition, SIMS-FOIA partially included the recommended capability to calculate and record processing fees electronically. Specifically, the system could record fees electronically, but it could not calculate fees electronically. Figure 2 illustrates the extent to which DOL had implemented recommended capabilities to enhance FOIA processing. In discussing this matter, the OIS director and FOIA officials in the component offices stated that, since the current system generally meets statutory requirements, the department has not yet made improvements to the system to reflect the recommended capabilities. The officials said that they are aware of current system limitations and have begun researching various technologies to incorporate the remaining capabilities. Nevertheless, the officials said that, due to competing interests and resource needs, the department has made a decision to continue using SIMS-FOIA without these capabilities in the meantime. The officials did not provide a time frame for when the capabilities would be implemented. By implementing the additional recommended capabilities, the department has the opportunity to enhance its FOIA processing and, thus, improve the efficiency with which it can respond to information requests. The FOIA statute allows agencies to establish multi-track processing of requests for records based on the amount of work or time (or both) involved in processing requests. Toward this end, DOL’s FOIA regulation, supported by the department’s Best Practices Processing Guide, provides for prioritizing FOIA requests into three processing tracks: simple, complex, and expedited. According to the regulation and guide, although requests are generally required to be handled on a first-in, first-out basis, a component may use two or more processing tracks by distinguishing between simple and more complex requests, based on the amount of work and/or time needed to process the requests. This is intended to allow the component to use its discretion to process simple, more manageable requests quickly, while taking more time to process larger, more complex requests that involve a voluminous amount of records and/or multiple components. Also, according to the regulation and the guide, the component is to determine whether a request should be given expedited treatment and placed in an expedited track ahead of others already pending in the processing queue, whenever it determines that one of the following conditions is met: There is an imminent threat to the life or physical safety of an individual. There is an urgent need to inform the public about an actual or alleged government activity and the requester is someone primarily engaged in the dissemination of information. Failure to disclose the requested records expeditiously will result in substantial loss of due process rights. The records sought relate to matters of widespread news interest that involve possible questions about the government’s integrity. Further, the regulation and guide state that the requester can submit a request for expedited processing at the time of the initial request or at any later time during the processing of the request. 
The component is to grant the request for expedited processing when the requester explains in detail the basis for the need to expedite the request and demonstrates a compelling need based on the criteria described above. Upon receiving a request for expedited processing, the component is responsible for deciding whether the request is to be expedited and for notifying the requester of the final decision within 10 calendar days. The guide states that, when a request is submitted, the component’s FOIA staff must identify the processing track to which it will be assigned (i.e., simple, complex, or expedited). According to OIS officials, the component FOIA processors assess whether requests are simple or complex based on their experience in handling requests and familiarity with requests that are submitted routinely. Further, according to the guide, FOIA processors assess whether requests are to be given expedited treatment whenever they determine that requests demonstrate a compelling need, such as when they pertain to an imminent threat to the life or physical safety of an individual. All of the components had taken steps that followed the regulation and best practices guidance to prioritize the selected fiscal year 2014 FOIA requests. That is, as the components’ FOIA processors logged in the requests, they assigned them to one of the three processing tracks. With the exception of the Office of the Inspector General, the components used SIMS-FOIA to designate the processing track for the requests. The OIG used its Office of Inspector General FOIA Tracking System to designate the processing track for requests. In addition, 7 of the 23 components provided documentation describing additional component-specific actions that they had taken to help with prioritizing the requests. For example: The Veterans’ Employment and Training Service developed a standard operating procedure that includes steps for multi-track processing. Specifically, this component places its simple requests in its fastest (non-expedited) track, and places its complex requests in its slowest track. The Wage and Hour Division provided its subcomponents with guidance issued in September 2015 that includes criteria for selecting a processing track in SIMS-FOIA. Specifically, according to this guidance, a request will be designated as complex when it requires redactions, involves two or more offices or programs to provide records, includes the review of 100 or more pages, or requires a search and review of over 10 hours, among other things. A request is designated as simple when it requires no redactions; involves one office or program; requires a review of 99 pages or fewer and fewer than 9.5 hours of searching; and when responsive records can be easily located on DOL’s FOIA website. Further, the guidance states that all expedited requests must be approved by the component. The Mine Safety and Health Administration developed standard operating procedures in March 2012 that include instructions for its subcomponents to follow when tracking requests in SIMS-FOIA and require its field offices to notify its headquarters office prior to denying an expedited FOIA request. The department reported receiving 16,792 requests in fiscal year 2015. Of the requests processed that year, the components prioritized and processed 7,203 as simple (about 42 percent of the total requests processed), 9,785 as complex (about 57 percent), and 108 as expedited (less than 1 percent).
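The Wage and Hour Division criteria described above amount to a simple decision rule. The sketch below shows one way a component might encode that rule; the function and parameter names are illustrative, and the thresholds are taken from the guidance as described (the guidance leaves a small gap between 9.5 and 10 hours, which this sketch treats as simple).

```python
def assign_processing_track(needs_redactions: bool, offices_involved: int,
                            pages_to_review: int, search_and_review_hours: float,
                            expedited_approved: bool = False) -> str:
    """Assign a request to the simple, complex, or expedited track using the
    thresholds described in the Wage and Hour Division guidance above."""
    if expedited_approved:
        # Expedited treatment requires a compelling need and component approval.
        return "expedited"
    if (needs_redactions or offices_involved >= 2
            or pages_to_review >= 100 or search_and_review_hours > 10):
        return "complex"
    # No redactions, one office, 99 pages or fewer, and limited search time.
    return "simple"

print(assign_processing_track(False, 1, 40, 2.0))    # -> simple
print(assign_processing_track(True, 3, 500, 12.0))   # -> complex
```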
As previously discussed, FOIA requires agencies to make a determination on whether to comply with a request generally within 20 working days of receiving the request and to immediately notify the requester of their determination. Toward this end, agencies are required to route misdirected requests to the internal component or office responsible for processing them within 10 working days of receipt. DOL’s Best Practices guide also recommends that components notify the requester when the need to consult with another component will delay a timely response to the initial FOIA request. SIMS-FOIA provides optional fields allowing components to record the fact that they sent the requester an acknowledgment or other interim response, or add other comments or explanations that caused a delay. Of the 14,745 requests processed by the department between October 1, 2013, and September 30, 2014, components successfully routed an estimated 92 percent of requests to the appropriate component offices within the 10-day time frame, as required by FOIA. However, an estimated 8 percent of requests were not routed to the appropriate offices within 10 days. Further, the components processed an estimated 76 percent of the requests within the statutory time frame of 20 working days. Table 4 shows the overall estimate of timeliness in responding to the population of 14,745 FOIA requests for the department in fiscal year 2014, and appendix IV provides further details on the department’s timeliness in processing FOIA requests from our sample of 258 requests. The department did not respond to the requester within the 20-day time frame for the remaining estimated 24 percent of requests processed, as reflected in the following examples: Although the Office of Information Services routed 2 out of 3 selected requests to the appropriate component (the Office of Assistant Secretary for Policy) within 10 days, one request was initially assigned to the wrong component (the Employment and Training Administration). Once the Office of Assistant Secretary for Policy received the request, it took 315 days to process the request and provide the records to the requester. The Office of Congressional and Intergovernmental Affairs responded to FOIA requesters within the required 20 days for 3 of 10 selected requests. For one request, the component took 154 days (about 5 months) to provide the requester with a response. Specifically, the request was received by the original office and was routed within 10 days, as required by law, to the office responsible for processing the request. However, it took the office that was responsible for processing the request 154 days to provide the response to the requester. The Office of the Secretary responded to FOIA requesters within the required 20 days for 3 out of 10 selected requests. In one instance, the component that was responsible for processing the request took 67 days, or over 2 months, to respond to the requester. Specifically, the department received the request in May 2014 and it was forwarded to the Office of the Secretary on the same day. However, because of the complex nature of the request, the Office of the Secretary had to coordinate with 6 other components, resulting in a delayed response to the requester. 
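Because both the 10-day routing deadline and the 20-day response deadline are expressed in working days, checking timeliness means counting weekdays between dates rather than calendar days. The sketch below illustrates that calculation with hypothetical dates; a production version would also exclude federal holidays, which this sketch omits for brevity.

```python
from datetime import date, timedelta

def working_days_between(start: date, end: date) -> int:
    """Count working days (Monday-Friday) after `start` up to and including `end`.
    Federal holidays, which would also be excluded in practice, are ignored here."""
    if end <= start:
        return 0
    days = 0
    current = start
    while current < end:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
    return days

# Hypothetical dates modeled on SIMS-FOIA-style fields.
received = date(2014, 5, 1)    # date of receipt in DOL
assigned = date(2014, 5, 9)    # date assigned to the responsible office
responded = date(2014, 7, 15)  # latest response date to the requester

routed_on_time = working_days_between(received, assigned) <= 10      # True
responded_on_time = working_days_between(assigned, responded) <= 20  # False
print(routed_on_time, responded_on_time)
```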
For the estimated 8 percent of requests that were not sent to the appropriate office within 10 days and the estimated 24 percent for which there were not timely responses to the requesters, the components did not document the rationales for the delays in SIMS-FOIA or notify the requesters of the delays. Agency officials attributed the delays to multiple components and subcomponents processing various parts of a request, as well as the time required to search for, review, and redact the exempted information from large volumes of records. In addition, DOL’s FOIA Public Liaison explained that SIMS-FOIA only identifies the tracking numbers for the 10 oldest requests, and if the case does not fall within the 10 oldest requests, then the FOIA staff may not be able to find and provide a rationale for the delay. In addition, the liaison noted that there is no DOL requirement for staff to document the rationale for the delays in SIMS-FOIA and to notify requesters regarding delayed responses. However, the system allows the user the option to record the rationale. Without documenting the rationale for the delays and notifying FOIA requesters regarding delayed responses, however, the department lacks the means to ensure that requesters are kept abreast of the status of their FOIA requests. According to DOL’s FOIA regulation, after an agency responds to a request, the requester has the right to file an administrative appeal within 90 days if she or he disagrees with the agency’s decision. Agencies are then required to respond to the requester with a decision regarding the administrative appeal within 20 working days. According to DOL’s 2015 FOIA Annual Report, in fiscal year 2015, the department received 404 appeals and processed 297. In addition, the department reported that it had not responded to appeals within the statutory time frame of 20 working days, thus contributing to 405 backlogged appeals at the department. DOL officials in the office of the Counsel for FOIA Appeals, Federal Records Act, and Paperwork Reduction Act—the office responsible for processing administrative appeals—noted that the high number of unprocessed appeals was due to a substantial increase in the number of incoming FOIA appeals received and a decrease in the number of staff available to process the appeals. Specifically, the number of attorneys available to process FOIA appeals decreased from 3 attorneys to 1 from 2012 through June 2015, while the number of backlogged appeals increased from 139 to 405 within the same period of time. In its technical comments on a draft of this report, DOL stated that it had taken various actions to reduce backlogged appeals. Specifically, it stated that DOL staff from other agency components had been detailed to assist with processing appeals. Further, similar appeals were grouped together to provide a response and staff communicated with requesters about the scope of their appeals or their continued interest in records. In addition, FOIA appeals were assigned to new attorneys in a specialized honors program, and the department hired an additional attorney to address the appeals. Continuing to take such steps to reduce the number of backlogged appeals will be important to help ensure that the department is able to meet its statutory obligation to respond to appeals within 20 working days. 
Furthermore, by continuing to address its appeals backlog, the department may reduce the likelihood that lawsuits will be filed due to requesters not receiving responses to their administrative appeals (such as discussed later in this report). From January 2005 through December 2014, 68 FOIA-related lawsuits were filed against DOL, primarily as a result of the department either failing to respond to requests or because it withheld certain requested information based on exemptions. Court decisions on these lawsuits were mixed—with rulings being made in favor of the department, both for the department and the requester, and in favor of the requester. In addition, some lawsuits were settled with agreement to release information and/or to award attorney’s fees and court costs to the requester. Among these settlements, courts dismissed the majority of the lawsuits based on terms agreed to by the department and the requester. While Department of Justice guidance issued in July 2010 encourages agencies to notify requesters of available mediation services as an alternative to pursuing litigation, DOL had not taken steps to inform requesters of such services. Doing so could help prevent requesters and the department from being involved in costly litigation and improve the efficiency of FOIA-processing activities. As previously mentioned, FOIA allows a requester to challenge an agency’s final decision in federal court through a lawsuit or to treat an agency’s failure to respond within the statutory time frames as a denial of the request, in order to file a lawsuit. In addition, the act states that the court may assess against the government reasonable attorney’s fees and other litigation costs incurred in a FOIA lawsuit if the requester has obtained relief through either (1) a judicial order, or an enforceable written agreement or consent decree; or (2) a voluntary or unilateral change in position by the agency, if the complainant’s claim is not insubstantial. The 68 lawsuits were brought against DOL because it either did not provide a timely response to a FOIA request, or because the requester disagreed with DOL’s response, usually as a result of the department having withheld records. Of these lawsuits, the court ruled in favor of the department in 18 cases, and jointly in favor of both the department and the requester in 1 case. In addition, among 47 lawsuits, the requesters received relief either as a result of (1) the courts rendering decisions in favor of the requesters (3 lawsuits) or (2) the department and the requesters establishing settlement agreements that awarded attorney’s fees and other costs to the requesters or resulted in the department potentially releasing additional information (44 lawsuits). Two lawsuits were undecided as of April 2016. Table 5 summarizes the outcomes of the 68 lawsuits, and the discussion that follows presents examples of the lawsuits filed and the decisions rendered. Of the 18 lawsuits decided in favor of DOL, 7 were filed because the department did not respond to the initial FOIA request or administrative appeal. The other 11 were due to requesters disagreeing with DOL’s decision not to release information, or the requesters asking for more information than was originally released. 
In these cases, the department may have applied certain exemptions to withhold documents (see appendix II for the nine specific categories that exempt an agency from disclosing information). The following examples describe these lawsuits: One lawsuit was filed because a requester had asked for records related to being terminated by his employer for refusing to work under alleged unsafe and illegal conditions that violated the Federal Mine Safety and Health Act of 1977. However, the Mine Safety and Health Administration withheld information based on the exemptions related to interagency or intra-agency memorandums that are not available by law to a party other than an agency in litigation with that agency, for law enforcement purposes related to unwarranted invasion of personal privacy, and disclosing the identity of a confidential source. In October 2004, the requester submitted an administrative appeal in response to the decision. Five months after DOL’s acknowledgement of the administrative appeal, the requester had not received a response and thus filed a lawsuit in March 2005. In deciding in favor of DOL, the court upheld the department’s use of the exemptions to withhold information. As the basis for another lawsuit, a request was initially submitted for a Mine Safety and Health Administration investigation report. The department released only a portion of the requested records. In February 2009, the requester filed an administrative appeal. In November 2009, DOL released additional information and withheld information citing the law enforcement privacy and interagency and intra-agency memorandums or letters exemptions. The requester disagreed with the department’s final decision to withhold information and filed a lawsuit in July 2010. The court upheld the department’s decision. One lawsuit involved eight requesters seeking information related to the functionality of an Office of Workers’ Compensation Program computer program that is used to ensure consistent rotation among physicians by specific zip codes. The Office of Workers’ Compensation Program withheld information based on an exemption regarding confidential commercial information and the exemption related to an unwarranted invasion of personal privacy. All eight requesters filed separate administrative appeals from December 2009 through November 2012, to which the department did not respond. As a result, the eight requesters filed a joint lawsuit in July 2013. The court upheld the department’s decision not to disclose the information due to the exemptions cited. In April 2009, records were requested related to the activities and communications between the Secretary and Deputy Secretary of Labor and various labor organizations. In late April 2009, DOL acknowledged receipt of the FOIA request, but decided that it would not provide information. In late September 2009, the requester filed an administrative appeal. Over a month later, in early November, DOL confirmed receipt of the administrative appeal but, again, did not provide information. By late November 2009, the requester had not received any information or requests for a time extension from the department and filed a legal complaint. The court decided in favor of the department, finding that it had properly withheld information on the basis of several exemptions.
In Favor of DOL and the Requester
One lawsuit—filed because the department withheld certain information and did not respond to the administrative appeal—was ruled both in favor of DOL and in favor of the requester. As a result, additional information was released to the requester. Specifically, in November 2007, a requester asked the Occupational Safety and Health Administration to provide accident investigation information related to a fatal accident. The Occupational Safety and Health Administration responded to the initial request by providing over 100 pages of documentation, but it withheld other documentation, applying the personnel and medical information exemption. In February 2008, the requester filed an administrative appeal. By February 2009, over a year later, DOL had not responded to the administrative appeal and the requester filed a lawsuit. In March 2010, the court decided in part for the department and in part for the requester. DOL was required to disclose portions of witness statements that were previously redacted; however, the court determined that the department could apply an exemption to additional records to withhold specific law enforcement information.
In Favor of the Requester
For 3 lawsuits, the requesters received relief as a result of the courts rendering decisions in their favor. In March 2005, records were requested from the Occupational Safety and Health Administration regarding the 2003 “Lost Work Day Illness and Injury Rates” for all worksites that had the Standard Industrial Classification code of 80. In prior years, the Occupational Safety and Health Administration had provided these records to requesters for previous reporting periods. However, the Occupational Safety and Health Administration denied this request, stating that releasing the information would interfere with law enforcement proceedings. In May 2005, the requester submitted an administrative appeal. However, by October 2005, the requester had not received a response to the administrative appeal and filed a lawsuit. The court decided in favor of the requester. However, details on the specific relief that the department provided to the requester in response to this ruling were not contained in available court documentation, and DOL officials could not provide such details. DOL appealed the decision, but the court dismissed its appeal. In June 2005, the Occupational Safety and Health Administration received a request for records related to the possible exposure of inspectors and employees to unhealthy/hazardous levels of beryllium. The department denied the request and in August 2005 the requester submitted an administrative appeal. The requester also submitted a second FOIA request for records related to quantifying the airborne and surface concentrations of chemical substances in workplaces where the Occupational Safety and Health Administration’s inspectors obtain samples. After 3 months (by November 2005), the requester had not received a response to the administrative appeal and filed a lawsuit. The court decided in favor of the requester. DOL appealed the decision but the court dismissed the appeal and, according to a FOIA Counsel official, the department then released the information to the requester. A request was submitted to the Mine Safety and Health Administration in September 2007 for records regarding the August 2007 Crandall Canyon Mine collapse that resulted in the death of nine miners and rescuers.
In October 2007, the requester offered to accept a partial response to the request as a temporary compromise while awaiting additional responsive documentation. After 3 months, the Mine Safety and Health Administration released more documentation, but withheld certain other documents, citing several exemptions. In May 2008, the requester submitted an administrative appeal in response to the Mine Safety and Health Administration’s use of the exemption for interagency or intra-agency memorandums, and an additional exemption for law enforcement proceedings. In June 2008, a month after the administrative appeal was submitted, DOL acknowledged the appeal, but did not respond by making a determination, as required by FOIA. In November 2008, 6 months after submitting the administrative appeal, the requester still had not received a response and filed a lawsuit. The court decided in favor of the requester and ordered DOL to conduct an additional search for all non-exempt information. The department complied by disclosing additional information. The department and the requester later agreed to dismiss the lawsuit and waive any right to fees. Forty-four lawsuits resulted in settlement agreements between DOL and the requesters. These included 5 lawsuits in which the department agreed to pay certain amounts of the requesters’ attorney’s fees and other costs, and 2 lawsuits in which the department agreed to release additional information to the requesters. For 37 other lawsuits, information on the specific nature and outcomes of the settlements was not available from either DOL’s FOIA official or the related court documentation. For 5 lawsuits that involved settlements, the department agreed to pay to the requesters approximately $97,475 in attorney’s fees, expenses, and costs arising from the lawsuits. These lawsuits are summarized below. A lawsuit was filed as a result of the Wage and Hour Division not responding to an initial FOIA request sent to the department in July 2010. The initial request was submitted for records related to complaints made by undocumented workers under the Wage and Hour Division’s “We Can Help” Program. After 2 months (by September 2010), DOL had not responded to the request and the requester filed a lawsuit. DOL and the requester entered into a settlement agreement resulting in the department agreeing to pay $350 for attorney’s fees and costs arising from the lawsuit. A lawsuit was filed after a requester had, in July 2010, sent 2 requests for information related to the Occupational Safety and Health Administration’s review of its whistleblower protection programs. DOL denied one request due to the exemption to withhold interagency or intra-agency memorandums or letters. In addition, the department did not respond to the second request. In August 2010, the requester submitted two administrative appeals to compel a response. After 2 months, the department had not responded to the administrative appeals and the requester filed a lawsuit in October 2010. DOL and the requester entered into a settlement agreement that dismissed the lawsuit and, according to a FOIA Counsel official, resulted in the department releasing additional responsive records (6,000 pages in full or redacted). In addition, the department agreed to pay the requester $8,250 for attorney’s fees, expenses, and costs arising from the lawsuit. 
As the basis for another lawsuit, a FOIA requester had sought information from the department’s Office of Labor-Management Standards in December 2009 that related to a union’s trusteeship. Initially, in March 2010, the Office of Labor-Management Standards informed the requester by e-mail that it had compiled 8,500 pages of responsive documents. However, after corresponding with the office until July 2010, the requester filed an administrative appeal letter due to not receiving any information. Subsequently, 10 months after the initial FOIA request (in October 2010), the requester filed a lawsuit as a result of not having received a response to the administrative appeal, any responsive documentation, or an explanation for the delay in providing the documentation. According to a DOL FOIA Counsel official, the department subsequently processed the request and the requester entered into a settlement agreement with DOL resulting in the lawsuit being dismissed, and with the department paying $7,500 for the requester’s attorney fees, expenses, and costs arising from the lawsuit. In February 2013, a FOIA request was submitted for the Wage and Hour Division’s guidance documents regarding “hot goods objections” investigative files. DOL denied the request, citing the exemption related to information for law enforcement purposes. In April 2013, the requester submitted an administrative appeal regarding the previous decision. A month after submitting the administrative appeal, the requester had not received a response from the department and filed a lawsuit in May 2013. The department and the requester then entered into a settlement agreement, resulting in the lawsuit being dismissed and with the department paying $30,000 for attorney’s fees, expenses, and costs arising from the lawsuit. In a lawsuit filed against the department’s Employment and Training Administration in March 2013, the requester had not received any documents after a year of correspondence with DOL. From March 2013 through August 2013, the department provided 217 responsive records in full to the requester, redacted 121 records, and withheld in full 151 records. The court decided in favor of the requester and, according to a FOIA Counsel official, ordered the department to conduct an additional search for responsive records. Nevertheless, the official stated that the requester and the department subsequently reached a settlement agreement, in which the department provided about 900 responsive records and agreed to pay the requester $51,375 for attorney’s fees, expenses, and costs. In addition to the above, 2 lawsuits were settled without the award of attorney’s fees and other costs, but with the department agreeing to release additional information to the requesters. The following summarizes these lawsuits. A lawsuit was filed as a result of a request that was submitted to the Occupational Safety and Health Administration in August 2011 related to a wrongful death lawsuit involving the requester. The Occupational Safety and Health Administration initially provided some information, but withheld other information based on the law enforcement exemption. Subsequently, in November 2011, the requester appealed the decision. In December 2011, DOL responded to the administrative appeal and upheld the Occupational Safety and Health Administration’s decision to withhold information. The requester disagreed with DOL’s response and filed a lawsuit in September 2012. DOL released additional information after the lawsuit was filed. 
The requester acknowledged receiving the information provided by DOL after the lawsuit was filed and agreed to settle and dismiss the lawsuit. In another case, the requester asked the department’s Office of the Solicitor for records related to a trip taken by the Secretary of Labor, including information on funding, internal memoranda and communications, and travel and security costs. Nine months after the request, the requester had not received any response from DOL and filed a lawsuit in March 2013. DOL provided information to the requester after the lawsuit was filed. In September 2013, after reviewing DOL’s documentation, the requester agreed with the court’s decision to settle and dismiss the case. The court dismissed the case in January 2014. As previously noted, details on the results of the 37 other lawsuits were lacking. In particular, the department’s Counsel for FOIA and Information Law could not provide details on what, if any, information and/or other relief were provided as part of the settlements. Further, the available court documentation did not include information on whether or not information was released to the requester; rather, the available documentation simply noted that the cases were dismissed. As of April 2016, courts had not rendered decisions on 2 of the 68 lawsuits. A lawsuit was filed pertaining to an expedited request that was submitted to the department’s Office of the Assistant Secretary for Administration and Management in July 2013 for records related to the use of alias e-mail addresses for DOL political appointees and a request to search personal e-mails of senior DOL officials for evidence of the use of personal e-mail to conduct official business. After 2 months, the requester had not received any response from the department and filed a lawsuit in September 2013. However, as of April 2016, the case was still pending. Lastly, a lawsuit was filed as a result of a FOIA request that was submitted in December 2013 related to the requester’s Federal Mine Safety and Health Act anti-retaliation complaint investigation file. In January 2014, the department sent an acknowledgement letter stating that, due to “unusual circumstances surrounding the records” being sought, it would take about 90 working days to fulfill the request and, therefore, the statutory time limits for processing the request could not be met. According to officials from the Mine Safety and Health Administration, the requester was provided the opportunity to modify the scope of the request so that it could be processed within the statutory time limits, but did not respond to this offer. By April 2014 (4 months after the request was submitted), the requester had not received any information and filed a legal complaint. In May 2014, DOL provided certain responsive information, but withheld other records due to several exemptions. In December 2014, pursuant to a change in the Mine Safety and Health Administration’s FOIA policy, the department sent additional responsive information after reviewing the earlier response and determining that additional information was releasable. In July 2015, the court asked for the investigation file related to the requester’s Mine Safety and Health Administration complaint, and as of April 2016 the case was still pending. The OPEN Government Act of 2007 established the Office of Government Information Services within the National Archives and Records Administration to oversee and assist agencies in implementing FOIA.
Among its responsibilities, the office offers mediation services to resolve disputes between FOIA requesters and federal agencies as an alternative to litigation. According to Department of Justice guidance issued in July 2010, agencies should include in their final agency responses to requesters a standard paragraph notifying the requesters of the mediation services and providing contact information for the Office of Government Information Services. The guidance states that this notification should be provided at the conclusion of the administrative process within the agency (i.e., as part of the agency’s final response on the administrative appeal). This is intended to allow requesters to first exhaust their administrative remedies within the agency. The guidance also states that agencies should provide requesters with notification of their right to seek judicial review. Since the issuance of the guidance in July 2010, none of the 12 FOIA lawsuits that we reviewed involving administrative appeals had corresponding response letters that included language notifying requesters of the Office of Government Information Services’ mediation services. Moreover, the department had not issued guidance to its components on including such language in the letters. Thus, requesters may have been unaware of the mediation services offered by the office as an alternative to litigating their FOIA case. Officials representing the department’s Counsel for FOIA Appeals acknowledged that steps had not been taken to ensure that the language would be included in response letters. They stated that the Counsel for FOIA Appeals planned to consult with the Department of Justice’s Office of Information Policy on how to incorporate the language and on how the department should develop procedures for working with the Office of Government Information Services to mediate disputes with FOIA requesters. However, they did not identify a time frame for doing so. Until it incorporates notification of the Office of Government Information Services’ mediation services in final response letters, DOL may be missing opportunities to resolve FOIA disputes through mediation and, thereby, reduce the number of lawsuits filed. Accordingly, the department may miss the opportunity to save time and money associated with its FOIA operations. DOL has implemented a process to manage, prioritize, and respond to its FOIA requests. However, opportunities exist to improve its process. Specifically, because it has not updated its FOIA regulation to reflect recent changes to its process, the department may be hindering the public’s use of that process. Also, while the department uses a system to track its FOIA requests and a portal to allow requesters to track the status of requests online, the system and portal lack certain required and recommended capabilities that could enhance the management and processing of requests. Absent capabilities consistent with Section 508 of the Rehabilitation Act (as amended), the department is not implementing the federal requirement to make its electronic information accessible to people with disabilities. In addition, by implementing recommended capabilities, the department could be better positioned to ensure the efficiency of its FOIA processing efforts.
Further, although the department responded to the majority of its fiscal year 2014 FOIA requests within the time frame mandated by law, it has not consistently documented the reasons for delays in its automated FOIA tracking system or notified requesters about them. A majority of lawsuits brought against the department from January 2005 through December 2014 either resulted from the department failing to respond to requests or because it withheld certain information pertaining to requests. By ensuring that requesters are made aware of mediation services offered by the National Archives and Records Administration’s Office of Government Information Services as an alternative to litigation, DOL may be able to avoid future lawsuits, thus saving resources. To improve DOL’s management of FOIA requests, we recommend that the Secretary of Labor direct the Chief FOIA Officer to take the following five actions: Establish a time frame for finalizing and then issue an updated FOIA regulation. Establish a time frame for implementing, and take actions to implement, section 508 requirements in the department’s FOIA system and online portal. Establish a time frame for implementing, and take actions to fully implement, recommended best practice capabilities for enhanced processing of requests in the department’s FOIA system and online portal. Require components to document in SIMS-FOIA the rationales for delays in responding to FOIA requests, and to notify requesters of the delayed responses when processing requests. Establish a time frame for consulting with the Department of Justice’s Office of Information Policy on including language in DOL’s response letters to administrative appeals notifying requesters of the National Archives and Records Administration’s Office of Government Information Services’ mediation services as an alternative to litigation, and then ensure that the department includes the language in the letters. We received written comments on a draft of this report from DOL and the National Archives and Records Administration. In DOL’s comments, signed by the Chief FOIA Officer (and reprinted in appendix V), the department concurred with all five recommendations and agreed that it can improve the management of its FOIA program. The department identified various actions that it had taken or planned to address the recommendations. For example, concerning our recommendation to update its FOIA regulation, the department stated that it has drafted a Notice of Proposed Rulemaking to update the regulation and expects to publish the final regulation by the end of 2016. In addition, relevant to our recommendation, the department stated that it is taking actions to review and modify the text and formatting of its public FOIA portal to comply with the provisions of Section 508 of the Rehabilitation Act, and expects to have changes in place by September 2016. With regard to its automated FOIA tracking system, the department stated that the Office of the Assistant Secretary for Administration and Management determined that it can fulfill the requirements of Section 508 by providing individualized accommodations to any DOL FOIA staff with vision or other accommodation needs who require access to the system. For example, the department noted that large screen magnifiers and verbal description tools can be provided to staff that require such accommodations. 
The department added that it would continue to ensure that Section 508 compliance is included as a necessary element in planning for any future SIMS-FOIA replacement or successor system. Further, with regard to implementing recommended best practices, the department stated that it continues to monitor proposed FOIA legislation and Department of Justice guidance so that it can assess the feasibility and business need for investment in technology changes. The department said that it will ensure that the recommended best practice capabilities for enhanced FOIA processing are considered as a part of its planning for any future SIMS-FOIA replacement or successor system. Regarding our recommendation to document the rationale for delays in responding to FOIA requests, the department stated that its Office of Information Services plans to issue implementing guidance to address this matter. The Office of Information Services also is to provide follow-up training on the guidance after it is finalized and disseminated, a step which it expects to have completed by the end of September 2016. Lastly, in response to our recommendation on notifying requesters of the mediation services offered by the Office of Government Information Services, the department stated that it consulted with the Department of Justice Office of Information Policy on March 7, 2016 and, as of March 31, 2016, had begun including language in its final appeal decisions informing requesters of the mediation services. If the department follows through to ensure effective implementation of its actions on our five recommendations, it should be better positioned to improve and successfully carry out the management of its FOIA program. Beyond DOL, in comments signed by the Archivist of the United States, the National Archives and Records Administration expressed appreciation for our review and for our recognizing the importance of DOL including notification of the Office of Government Information Services’ mediation services in its final appeal letters. The National Archives and Records Administration’s comments are reprinted in appendix VI. In addition to the aforementioned written comments, we received technical comments via e-mail from the Director of the Office of Information Services at DOL, and the Audit Liaisons from the Department of Justice and the National Archives and Records Administration. We have incorporated these comments, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Labor, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. Our objectives were to determine (1) how the Department of Labor (DOL) and its components manage and process Freedom of Information Act (FOIA) requests, including how they prioritize requests, and the extent that responses to requests have been timely; and (2) how many lawsuits DOL has been subject to arising from FOIA requests, and the results of those lawsuits. 
To address the study objectives, we collected and analyzed published statistics from the department’s fiscal year 2014 and fiscal year 2015 FOIA annual reports, and other documentation from the department’s central FOIA office, Office of Information Services, such as the October 2010 Desk Reference Guide and August 2013 Best Practices Guide. To determine the responsibilities of the central FOIA office and the components in managing and processing requests, we reviewed organization charts; the department’s policies and procedures; and information discussed in the FOIA annual reports, the DOL Chief FOIA Officer reports, and other agency documentation. We also conducted interviews with responsible officials in the Office of the Solicitor and the department’s 23 component offices. To facilitate our understanding of how the central FOIA office and the components manage, prioritize, and process FOIA requests, we developed and administered a questionnaire to the 23 components in June 2015. We received responses from all of the components. To ensure that our questions were clear and logical and that respondents could answer the questions without undue burden, we provided the draft questionnaire to the department’s Office of Information Services and obtained and incorporated the office’s comments on the questionnaire in advance of sending it to the 23 components. The practical difficulties of conducting any questionnaire may introduce errors, commonly referred to as non-sampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information available to respondents, or in how the data were entered into a database or were analyzed can introduce unwanted variability into the results. With this questionnaire, we took a number of steps to minimize these errors. For example, our questionnaire was developed in collaboration with a GAO methodologist. The results of our questionnaire were summarized to describe component efforts to manage, prioritize, and process FOIA requests. We also reviewed criteria used by the central office and components to prioritize requests, and assessed current procedures and practices against the criteria. Further, we reviewed available statistics on FOIA processing timeliness. The scope of our work focused on the department’s central tracking system and public portal, but did not include examining other systems, such as those used by the Inspector General and for separate FOIA appeals tracking. To determine to what extent the responses to FOIA requests have been timely, DOL provided a list of 14,745 requests that had been received in the department as of October 1, 2013, and that had been fully processed by September 30, 2014. Of that total, we randomly selected a representative sample of 258 requests. In order to make the random selection, we first sorted the data we obtained based on component, process track, and whether the request was delayed or on time. We grouped the requests by their individual process track. For example, each closed request has a process track of simple, complex, or expedited. To ensure that all components were included in the sample, we divided the 23 components in the sample frame into two strata: components with 10 or more requests (stratum 1) and components with fewer than 10 requests (stratum 2). We selected all cases per component in stratum 2. We chose 96 requests from stratum 1 and allocated those 96 requests according to the number of requests. 
We also wanted to ensure a minimum sample of 10 requests per component in stratum 1. This resulted in a total sample of 258 requests, as reflected in table 6. This methodology was used to yield a pre-determined precision level on population estimates while minimizing the sample size or the cost. In addition, it allowed us to measure whether the 258 requests were processed in a timely manner. Because we followed a random selection of the sample, we are able to make projections to the population. The results of our sample are generalizable to the population of FOIA requests processed by the department as of September 30, 2014. All percentage estimates from our sample have margins of error at the 95 percent confidence level of plus or minus 15 percentage points or less, unless otherwise noted. To estimate the population percentage of requests that were responded to within 20 days, the sample data are weighted to make them representative of the population. The weights are developed at the stratum level. Our random sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval of plus or minus 15 percentage points or less, unless otherwise noted. This is the interval that would contain the actual population value for 95 percent of the samples that we could have drawn. To assess the reliability of the data we received, we supplemented our analysis with interviews of FOIA officials in the department’s Office of the Solicitor and Office of Information Services, as well as component officials, regarding their responsibilities and management practices. These officials included the DOL FOIA Public Liaison, the Director of the Office of Information Services, and the Director of the Office of the Solicitor. We also electronically tested the data and found them to be sufficiently reliable for purposes of our reporting objectives. To determine the number of delayed requests, we compared the date of receipt, date assigned to agency, FOIA start date, and response date field of each resolved request in the Secretary’s Information Management System for FOIA (SIMS-FOIA). To determine whether the request was forwarded to the appropriate office for processing within the statutory time frame of 10 working days, we reviewed the “date assigned to agency” field in SIMS-FOIA against the “date of receipt in DOL” field. To determine whether the department responded to the requester within 20 working days, we reviewed the “date assigned – FOIA start date” field and compared it to the latest “response date” field. In addition, we reviewed other available documentation, including SIMS-FOIA snapshots and all documentation associated with each of the 258 resolved requests. We also reviewed data from SIMS-FOIA to identify any documentation of delays in processing the requests, reasons for the delays, and any actions taken by the department to notify requesters of delays. To determine the number of FOIA lawsuits filed against the department, we reviewed relevant information that spanned portions of the prior and current administrations (January 2005 through December 2014). We obtained information on the lawsuits from DOL and the Department of Justice, and through the Public Access to Court Electronic Records (PACER) system. 
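As a rough illustration of how stratum-level weights produce a population estimate and a 95 percent confidence interval of the kind described above, the sketch below applies the standard stratified-proportion formulas to hypothetical stratum counts; the split of the 14,745 requests and the timely counts per stratum are assumptions, not the figures underlying the report.

```python
import math

# Hypothetical stratum summaries: N = population requests in the stratum,
# n = sampled requests, timely = sampled requests answered within 20 working days.
strata = [
    {"N": 14683, "n": 196, "timely": 150},  # stratum 1: components with 10+ requests
    {"N": 62,    "n": 62,  "timely": 50},   # stratum 2: census of smaller components
]
total_N = sum(s["N"] for s in strata)

# Stratified (weighted) estimate of the share of requests answered on time.
p_hat = sum((s["N"] / total_N) * (s["timely"] / s["n"]) for s in strata)

# Approximate variance with a finite population correction per stratum;
# the census stratum (n == N) contributes nothing to the variance.
var = sum(
    (s["N"] / total_N) ** 2
    * (1 - s["n"] / s["N"])
    * (s["timely"] / s["n"]) * (1 - s["timely"] / s["n"]) / (s["n"] - 1)
    for s in strata
)
margin = 1.96 * math.sqrt(var)  # half-width of the 95 percent confidence interval

print(f"Estimated timely share: {p_hat:.1%} +/- {margin:.1%}")
```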
Specifically, we reviewed DOL documentation that discussed its FOIA litigation, settlements, and legal decisions made from January 2005 through December 2014. In addition, we reviewed Department of Justice documentation that included a listing of DOL’s FOIA litigation cases, attorney costs and fees assessed by the courts, and court decisions made from January 2005 through December 2014. We reviewed and analyzed the documentation to confirm that all lawsuits were FOIA-related and included the department or a component as a defendant. We also reviewed the documentation to determine the reason each lawsuit was filed and the resulting court decision. Further, we interviewed agency officials from the Department of Justice’s Civil Division and Office of Information Policy, as well as from DOL’s Management and Administrative Legal Services Division, to discuss their processes regarding litigating FOIA cases and the results of the cases. In selected cases, DOL did not have complete information associated with the lawsuit, such as the decision, complaint, and/or settlement documentation. We conducted this performance audit from February 2015 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Freedom of Information Act (FOIA) prescribes nine specific categories of information that are exempt from disclosure, which are described in table 7. In August 2013, the Department of Labor’s Office of Information Services (OIS) issued guidance that is intended to assist the department’s staff in responding to FOIA requests. The guidance outlines the nine stages of processing a request received by the department, including the processing of administrative appeals and judicial reviews of lawsuits filed by requesters, as depicted in figure 3 below. The nine stages that components are to follow are described below: Receive and route a request. A FOIA request can be submitted to the department by a member of the public via postal mail, e-mail, or fax. If OIS receives a request, its staff are to log the request into the Secretary’s Information Management System for FOIA (SIMS-FOIA) and route the request to the appropriate DOL component office, or retain the request if it pertains to the Office of the Solicitor. When the appropriate component receives the request in SIMS-FOIA, the clock for processing the request (i.e., 20 working days) starts. If the request is misrouted, it is to be returned to OIS for reassignment to the appropriate component. Once assigned to the appropriate component, the FOIA processor receiving the request is to send an acknowledgement letter with a unique tracking number to the requester. Evaluate the request. The FOIA processor is to obtain from the requester a description of the records sought and determine if there is enough information to locate the responsive records. The processor is then to confirm in writing (or by e-mail) any narrowing in scope of the request or alternative time frames (i.e., beyond the 20 working days due to unusual circumstances) for processing the request. Prioritize requests and time limit.
When processing a request, components are to use one of three processing tracks (simple, complex, or expedited) and are to identify in SIMS-FOIA which track is being used to process the request. The decision is to be made based on the amount of work and/or the amount of time needed to process the request. Conduct a reasonable search. The FOIA processor is to conduct a search for responsive records. To do so, the processor can consult with subject matter experts to identify the type of responsive records that exist and the location of the records. Components are to maintain documentation concerning the search methodology, including the offices where the records were searched, the individual(s) that conducted the search, and what search terms were used. Review, segregate, and release non-exempt information. The FOIA processor is to review the responsive records, and determine whether a portion or all of any record should be withheld based on a statutory exemption. If a portion or all of any record is the responsibility of another agency or component, the FOIA processor can consult with the other agency or component or send the responsive records to that other agency or component for processing. Assess fees. The component is to assess the fees that will be charged based on the type of requester that is making the request. For example, if the requester is a member of the news media, then he/she may request a fee waiver. Further, the types of fees charged depend on the time spent searching for responsive records, the time spent reviewing the records, and the copies of the records provided. Respond to the requester. When responding to the requester, the FOIA processor is to make a determination to release a response in full, to apply an exemption and withhold information protected under an exemption and release certain parts of the response, or fully deny the request. A response must be in writing and signed by a FOIA disclosure officer. Response letters must include language regarding the identification of responsive records; the page count of records processed; the amount of information or pages withheld, if applicable; the identification of any exemptions asserted; any procedural denials that apply; and the requester’s right to file an administrative appeal. Process administrative appeal. A requester has the right to administratively appeal any adverse determination a component makes concerning a request. The Office of the Solicitor, which serves as the designated appeals official, is to notify the requester in writing when the appeal is received, and review the component’s actions taken in response to the FOIA request to determine whether corrective steps are necessary. The Office of the Solicitor is to then issue a final appeal determination and notify the requester of the right to seek judicial review. Conduct judicial review of processing. FOIA provides requesters with the right to challenge an agency’s final decision in federal court. Components have the burden of proof and must demonstrate to the court all actions taken in response to a request, or that appeal determinations are appropriate and consistent with the statute and the department’s FOIA regulations. Components are to provide the department’s FOIA Counsel with the case file, including the responsive record, any exemptions applied, and a response letter, as well as an appeal determination, if applicable. The FOIA case file must be preserved and processors must be prepared to justify their actions in the event of litigation.
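For the "assess fees" stage, the charges turn on the requester category and on search time, review time, and duplication. The sketch below illustrates that logic with hypothetical rates; DOL's actual fee schedule, free-allowance thresholds, and waiver rules are set by its FOIA regulation and are not reproduced here.

```python
# Hypothetical fee schedule; the rates below are assumptions for illustration only.
HOURLY_SEARCH_RATE = 40.00   # assumed hourly rate for search time
HOURLY_REVIEW_RATE = 60.00   # assumed hourly rate for review time
PER_PAGE_COPY_FEE = 0.15     # assumed duplication cost per page

def assess_fees(requester_category: str, search_hours: float,
                review_hours: float, pages_copied: int) -> float:
    """Sketch of the 'assess fees' stage: charges depend on the requester
    category and on time spent searching, reviewing, and duplicating records."""
    if requester_category == "news_media":
        # Media and educational requesters are typically charged duplication only
        # and may qualify for a fee waiver.
        return pages_copied * PER_PAGE_COPY_FEE
    if requester_category == "commercial":
        # Commercial requesters are typically charged search, review, and duplication.
        return (search_hours * HOURLY_SEARCH_RATE
                + review_hours * HOURLY_REVIEW_RATE
                + pages_copied * PER_PAGE_COPY_FEE)
    # Other requesters are generally charged search and duplication, not review.
    return search_hours * HOURLY_SEARCH_RATE + pages_copied * PER_PAGE_COPY_FEE

print(assess_fees("commercial", search_hours=3, review_hours=2, pages_copied=250))
```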
Using our sample of 258 Freedom of Information Act requests, we reviewed the Department of Labor’s timeliness in processing the requests. Specifically, for each of the department’s 23 components, table 8 shows the number of requests in our sample of each type (simple, complex, or expedited); the number of days it took for those requests to be routed to the correct office; and the number of days it took for the components to respond to the requests. In addition to the contact named above, key contributors to this report were Anjalique Lawrence (Assistant Director), Freda Paintsil (Analyst in Charge), Christopher Businsky, Quintin Dorsey, Rebecca Eyler, Andrea Harvey, Ashfaq Huda, Kendrick Johnson, Lee McCracken, Dae Park, David Plocher, Umesh Thakkar, Walter Vance, and Robert Williams, Jr.
FOIA requires federal agencies to provide the public with access to government information in accordance with principles of openness and accountability and generally requires agencies to respond to requests for information within 20 working days. When an agency does not respond or a requester disagrees with the outcomes of his or her request, the requester can appeal a decision or file a lawsuit against the agency. Like other agencies, DOL responds to thousands of FOIA requests each year. In fiscal year 2015, the department received approximately 16,800 requests. GAO was asked to review DOL's FOIA processing. GAO's objectives were to determine (1) how the department and its components manage and process FOIA requests, including how they prioritize requests, and the extent that responses to requests have been timely; and (2) how many lawsuits DOL has been subjected to arising from FOIA requests, and the results of those lawsuits. To do so, GAO reviewed DOL reports, policies, guidance, and other documentation; analyzed a random sample of FOIA requests processed by the department in fiscal year 2014; reviewed FOIA-related legal records; and interviewed officials. Responsibilities for managing and processing Freedom of Information Act (FOIA) requests are handled by the Department of Labor's (DOL) 23 component offices. Within one of these components, the Office of Information Services (OIS) functions as the department's central FOIA office and has agency-wide responsibility for managing the program; however, the department has not updated its FOIA regulation to reflect changes in its process made in response to more recent amendments to the law and new implementing guidance. DOL uses an information technology (IT) system to manage and track requests, but it has not implemented key required and recommended capabilities for enhancing FOIA processing, such as capabilities to accommodate individuals with disabilities or electronic redaction. Implementing the required and recommended capabilities could improve the efficiency of the department's FOIA processing. DOL and its components have implemented a process for prioritizing FOIA requests, allowing for expedited processing in certain cases, and in fiscal year 2014 the department processed an estimated 76 percent of requests that GAO reviewed within 20 working days. For the estimated 24 percent of cases that were not timely, officials attributed these delays, in part, to the involvement of multiple components in a single request or the time required to process large volumes of requested records. However, the department did not document the rationales for delays in its FOIA tracking system or notify requesters of them. Further, the department had not responded to administrative appeals within the statutory time frame of 20 working days, but is taking steps to reduce the backlog of appeals. From January 2005 through December 2014, 68 FOIA-related lawsuits were brought against DOL. Of these lawsuits, the court ruled in favor of the department in 18 cases, jointly in favor of both the department and the requester in 1 case, and in favor of the requesters in 3 cases. In 44 of the remaining lawsuits, the department and the requesters established settlement agreements that awarded attorney's fees and other costs to the requesters or resulted in the department potentially releasing additional information. Two lawsuits remained undecided as of April 2016 (see figure).
Although recommended by Department of Justice guidance, the department did not notify requesters of mediation services offered by the Office of Government Information Services as an alternative to litigation. By notifying requesters of these services, DOL may be able to avoid future lawsuits, thus saving resources and ensuring that requesters are kept informed about the department's FOIA process. GAO is recommending, among other things, that DOL establish a time frame to finalize and issue its updated FOIA regulation and take actions to implement required and recommended system capabilities. In written comments on a draft of the report, the department agreed with the recommendations.
TRIA requires private insurers to offer terrorism coverage in commercial property and casualty insurance, including workers’ compensation insurance policies. Insurers must make terrorism coverage available to their policyholders on the same terms and conditions, including coverage levels, as other types of insurance coverage. For example, an insurer offering $100 million in commercial property coverage must offer $100 million in coverage for property damage from a certified terrorist attack. However, insurers could impose an additional charge for the coverage, and policyholders, except in workers’ compensation policies, generally have the option of not purchasing it. Under TRIA, the federal government is to reimburse insurers for a portion of their losses from certified terrorist acts. Specifically, the federal government would reimburse insurers for 85 percent of their losses after the insurers pay a deductible amounting to 20 percent of the previous year’s direct earned premiums. The federal funding is activated when aggregate industry losses exceed $100 million and is capped at an annual amount of $100 billion. Originally enacted as a 3-year program, TRIA has been reauthorized by Congress twice, most recently extending the program through 2014. In December 2005, Congress passed the Terrorism Risk Insurance Extension Act, which increased the required amount insurers would have to pay in the aftermath of a terrorist attack. In December 2007, Congress approved the Terrorism Risk Insurance Program Reauthorization Act and eliminated the distinction between terrorist acts carried out by foreign and domestic actors. It also clarified language on insurers’ liability, stating that insurers are not responsible for losses that exceed the federal government’s annual liability cap of $100 billion. Commercial property insurance policies can be simple or complex, depending on the value and location of the properties being insured. Property owners may insure properties individually or consolidate multiple properties in a portfolio and insure them with a single policy. The benefits of grouping properties include spreading the cost (premium) across more than one building on the premise that all buildings in a portfolio are unlikely to be damaged by the same peril in the same event. Policies with high insured values can require multiple insurers to provide coverage, with each providing a portion of the coverage up to the full amount of the policy, because the total insured value is too great for any one insurer to absorb (see fig. 1). According to a representative of a large brokerage firm, policyholders typically buy property coverage, including terrorism coverage, through one all-risk policy, which insures losses from multiple perils. Policyholders generally do not purchase terrorism insurance in amounts that would cover the total replacement value of the insured property, but rather purchase insurance in amounts that reflect the maximum amount of foreseeable losses that could occur in a terrorist attack. Also, policyholders may determine the amount of terrorism coverage to purchase based on amounts required by a lender providing the mortgage on the property. States have primary responsibility for regulating the insurance industry in the United States, and state insurance regulators coordinate their activities in part through the NAIC. The degree of oversight of insurance varies by state and insurance type. 
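The loss-sharing mechanics described above lend themselves to a simple numerical illustration. The following Python sketch applies the 20 percent insurer deductible, 85 percent federal share, $100 million program trigger, and $100 billion annual cap noted in this background; the premium and loss figures are hypothetical and are not drawn from the report's data.

```python
# Simplified illustration of TRIA loss sharing for a single insurer.
# All dollar figures below are hypothetical; the sketch also ignores how the
# $100 billion annual cap would be allocated across insurers after a very large attack.

def federal_share(insurer_loss, prior_year_premiums, industry_loss,
                  program_trigger=100e6):
    """Estimate the federal reimbursement for one insurer's certified terrorism losses."""
    if industry_loss <= program_trigger:
        return 0.0                               # trigger not reached; no federal funding
    deductible = 0.20 * prior_year_premiums      # 20 percent of prior-year direct earned premiums
    above_deductible = max(insurer_loss - deductible, 0.0)
    return 0.85 * above_deductible               # government reimburses 85 percent above the deductible

# Hypothetical insurer with $25 billion in prior-year premiums (a $5 billion deductible)
# suffering $8 billion in losses from an attack causing $30 billion in industry-wide losses.
print(federal_share(8e9, 25e9, 30e9))            # roughly $2.55 billion
```

Under these assumed figures, the insurer would retain its $5 billion deductible plus 15 percent of the losses above it, or about $5.45 billion of the $8 billion loss.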
In some lines of insurance, insurers file policy forms with state regulators, who help determine the extent of coverage a policy provides by approving the wording of policies, including the explicit exclusions of some perils. According to a NAIC representative, while practices vary by state, state regulators generally regulate prices for personal lines of insurance and workers’ compensation policies but not for commercial property/casualty policies. In most cases, state insurance regulators perform neither rate nor form review for large commercial property/casualty insurance contracts because it is presumed that businesses have a better understanding of insurance contracts and pricing than the average personal-lines consumer. Reinsurers generally are not required to get state regulatory approval for the terms of coverage or the prices they charge. According to a variety of sources, commercial property terrorism insurance currently appears to be widely available on a nationwide basis at rates viewed as reasonable, largely due to the TRIA program and the current “soft” insurance market. However, some policyholders in urban areas viewed as being at higher risk of a terrorist attack, particularly in Manhattan and to a lesser extent in some other high-risk cities such as Chicago and San Francisco, may be forced to take additional steps to overcome challenges they may have initially faced in obtaining desired amounts of coverage at prices viewed as reasonable. Policyholders generally have been able to obtain desired or required amounts of terrorism coverage by increasing the number of carriers in what already may be large and complex insurance programs, adding to what can be a time-consuming and complicated process for policyholders and their insurance brokers. Others secure needed coverage by purchasing all or a portion of their terrorism coverage in a separate insurance policy, or self-insuring through a captive insurance company. According to data compiled by two large insurance brokers, a majority of their commercial clients nationwide purchase terrorism insurance coverage, and the premium rates for such coverage generally have been stable in recent years. As shown in figure 2, one of these brokers reported that approximately 60 percent of its clients purchased some form of terrorism coverage each year from 2005 through 2007. Another large insurance broker reported that take-up rates for its large property clients have remained between 60 percent and 65 percent since 2004. According to a large broker, the Northeast, which includes New York City, has the largest percentage of companies that purchase terrorism coverage for properties, with about 70 percent having purchased it in 2007. Real estate companies account for the largest percentage of clients that purchased terrorism insurance coverage, with more than 80 percent of these clients having done so in 2007. Manufacturing and construction companies had the lowest purchase rates, with 45 percent and 34 percent, respectively, having purchased coverage in 2007. Data collected by one of these large brokers also show that the premiums that their clients paid for terrorism coverage, expressed as a percentage of the commercial property premiums, generally have been stable at around 4 percent since 2003 (fig. 2). Another large broker also reported that premiums have been stable at around 4 percent, on average, since 2006. 
An official from one of these brokerages told us steady purchase rates between 2005 and 2007 may indicate that policyholders who want to purchase terrorism coverage have been able to purchase it. According to representatives from these two brokers, the primary reason why approximately 40 percent of clients did not purchase terrorism coverage is that they may not have perceived themselves at risk of a terrorist attack, particularly those in nonurban areas or those in industries perceived to be at lower risk of attack, such as manufacturing. Other reasons clients may not have purchased coverage include the absence of lender requirements or the cost of coverage, according to one large broker. Information we collected in a range of interviews with policyholders, national and regional brokers, insurers, and others was consistent with the view that terrorism insurance coverage is available nationwide at premium rates viewed as reasonable. Several policyholders we contacted that own large and small portfolios of real estate throughout the United States, including national hotel chains, sports stadiums, office towers, shopping malls, and residential buildings, told us they could obtain as much terrorism coverage as they sought to obtain. Some policyholders and regional brokers also said that terrorism insurance premiums continue to decline while the quality of coverage improves. For example, a representative from a commercial real estate company that owns large office towers, a luxury resort, and an industrial property in major U.S. cities said the company recently increased its terrorism coverage by more than 50 percent and decreased its premium by more than 20 percent. In at least one state, an insurer and state regulator told us terrorism coverage may be provided at no additional cost to policyholders, especially those with properties perceived to be at low risk of a terrorist attack. Insurers, policyholders, and other industry participants cited the TRIA program and the current soft, or competitive, market as the key reasons that terrorism coverage generally has been available nationwide. Without the federal backstop for potential insurance losses related to terrorism, industry participants said that coverage availability could decline substantially. For example, some insurers told us the amount of terrorism coverage they provide would decline—by more than 95 percent for one insurer—without the TRIA provision that provides reimbursement for insured losses that exceed the amount of an insurer’s TRIA deductible. In a soft market, insurance is widely available and sold at a lower cost, making it easier for buyers to obtain insurance. According to insurance industry participants, recent strong profits, increases in investment income, and a lack of large losses from major catastrophes have contributed to insurers’ ability to increase their capital levels in recent years. According to some brokers, high levels of capital have increased insurers’ capacity and willingness to provide terrorism insurance coverage. However, some interviewees cautioned that another terrorist attack or “hardening” of the general terrorism insurance market could reduce the current supply of terrorism insurance coverage and increase pricing. In the past, insurers frequently have responded to catastrophic events by cutting back coverage significantly or substantially increasing premiums for policyholders. 
For example, such reactions took place in the Florida market after Hurricane Andrew in 1992, in California after the Northridge earthquake of 1994, and more widely following the September 11 attacks. A broker with a large national firm told us that the insurance industry has remained highly sensitive to the potential financial consequences of another terrorist attack since September 11. According to one industry analyst, even a modest terrorist attack in the future could cause significant fear and concern in the market and lead to increases in prices and restrictions on availability. Moreover, some industry analysts said that insurers could suffer significant losses for a variety of other reasons, such as the costs of a large hurricane or earthquake or declines in the values of their investment portfolios, which might make them less willing to offer terrorism coverage under current terms and pricing. While terrorism insurance coverage generally is available nationwide, many industry participants reported that some policyholders in major cities viewed as being at higher risk of terrorist attack, particularly in Manhattan, may initially experience challenges in obtaining desired amounts of coverage. Specifically, industry participants said that owners of large, high-value properties in financial districts or downtown locations, or near government offices or transit hubs, may face initial challenges in obtaining coverage in their all-risk property policies. For example, a policyholder with large office and retail properties in New York, San Francisco, and Chicago told us that only a few insurers were willing to offer it coverage, which it considered expensive and which provided only half of the $1.5 billion in coverage sought. In spite of these initial challenges, this policyholder was able to obtain the needed coverage by taking other approaches that will be discussed later in this report. Brokers and policyholders said that these difficulties have been more severe in certain locations in Manhattan than anywhere else. In particular, they said the area surrounding Times Square, or midtown, and lower Manhattan, which contained the World Trade Center, present difficulties because of the dense concentration of buildings, perceived risk of a future terrorist attack, and the overlapping insurance needs of building owners and tenants. For example, one broker active in the New York market told us of an approximately 15-block stretch of midtown Manhattan with a high concentration of property values in which each property is valued at $1 billion or more, creating strong demand by building owners for limited and expensive coverage. Another broker told us the availability of terrorism coverage is most constrained in the area surrounding the World Trade Center site in lower Manhattan. The brokers said retail clients that would like to establish themselves in this area worry that not enough coverage will be available for terrorism, flood, and fire damage. Representatives from large national brokers, as well as insurance companies and other industry participants, said that certain policyholders in Chicago and San Francisco also may face initial challenges in obtaining terrorism insurance coverage, although to a lesser extent than in Manhattan. As is the case in Manhattan, these policyholders typically own large buildings in proximity to other buildings and generally are located in financial districts or downtown locations. 
While owners of large buildings in such locations may face challenges in obtaining coverage, a broker told us that even a small building might be difficult to insure for terrorism risk if it were located near larger properties in high-risk areas. Many industry participants reported that premiums were higher in cities considered to face greater financial risk from terrorist attacks, adding to the challenge of obtaining terrorism coverage. For example, according to one large insurance broker, terrorism insurance premiums in New York City can be twice as high as prices for similar buildings in other cities considered to be at high risk of a terrorist attack, and more than five times higher than prices in lower-risk cities. The premium amount dedicated to insuring properties in certain locations against terrorism risks may, on a relative basis, significantly exceed the amount necessary to cover such risks in other geographic areas. For example, a broker in the San Francisco Bay area told us average terrorism pricing for owners of certain buildings there can be from 20 to 30 percent of the all-risk property premium, whereas the national median was around 4 percent in 2007. While some policyholders in high-risk cities face challenges, we note that this is not necessarily the case in all such cities. In particular, policyholders we contacted with properties in Washington, D.C., said that while it may have been difficult or more expensive to obtain terrorism coverage immediately following September 11, coverage is now readily available and affordable. For example, policyholders we interviewed that own properties in the city said they were able to include full terrorism coverage in their all-risk property policies even though they own or manage commercial and residential properties in proximity to potential targets such as the White House, the Capitol, subway stops, or foreign embassies. Industry participants said that policyholders generally experience fewer challenges in Washington, D.C., because the buildings are not as high or as densely concentrated as in downtown areas of other high-risk cities. Policyholders that have experienced initial difficulty obtaining terrorism coverage in their primary all-risk property policies generally have been able to meet current terrorism insurance requirements by one of several approaches or a combination thereof, according to industry participants. For example, some policyholders and brokers reported obtaining coverage from a greater number of insurers in what may already have been a complex insurance program. As discussed earlier, policies with high insured values can require multiple insurers to provide portions of coverage up to the full amount of the policy. However, a few policyholders told us that because insurers are taking smaller amounts of risk (that is, offering smaller amounts of coverage), more insurers are now needed to fill out an insurance program, adding to what can be a time-consuming and complicated process for policyholders and their insurance brokers. Some policyholders said more than 20 insurers may participate in a single insurance program. One policyholder told us more than 40 insurers participate in its property insurance policy. Layering an insurance program has costs, especially for large and complex programs. 
A representative of a large hotel chain told us that layering insurance is “painful” because of the effort involved in convincing insurers to become comfortable with a risk. Moreover, several brokers and policyholders reported purchasing property terrorism insurance in a stand-alone policy to cover portions or all of the required coverage. For example, the owner of multiple large office buildings in Manhattan’s midtown and downtown financial districts told us the company purchased all of its terrorism coverage as a stand-alone insurance policy because it could obtain just half of the $800 million in coverage sought. Another policyholder that owns a nationwide chain of hotels, with properties in Manhattan, Chicago, and San Francisco, decided to purchase all of its terrorism coverage in a stand-alone policy to avoid the high and inconsistent cost of embedding terrorism coverage in its all-risk policy. A representative of this policyholder noted that cost was a particular issue following the 2005 hurricane season when property insurance prices generally increased. Some policyholders told us stand-alone terrorism coverage was more expensive than obtaining coverage as part of an all-risk property policy. However, data from a national broker show that the difference in pricing between stand-alone coverage and coverage included in an all-risk policy was small for most of 2007, with the median price for stand-alone coverage at 5 percent of the overall property premium compared to around 4 percent for coverage in the all-risk program. Finally, according to brokers and policyholders, some policyholders have used self-insurance as a means to assemble coverage. That is, they placed all or a portion of their terrorism coverage in a captive insurance company, which insures the risks of the owner. For the purpose of insuring property terrorism risk, a captive insurer would generally be a wholly owned insurance company within the corporate structure of the property owner. The typical owners of captives used for insuring terrorism risk are large corporations that own large or well-known buildings in major urban areas and have not been able to obtain coverage through other means. For example, a policyholder we contacted sought to obtain $1.2 billion in property coverage for multiple buildings in Manhattan, including terrorism coverage, which would cover the total replacement cost of the largest building in its portfolio. However, a representative of this policyholder told us the company could obtain just $500 million of all-risk property insurance that included terrorism coverage, leaving a gap of $700 million in coverage for terrorism risk. The policyholder considered filling the gap by obtaining terrorism coverage in the form of a more expensive stand-alone insurance policy, but decided instead to establish a captive insurance company to supplement the coverage provided in the all-risk policy and make up the $700 million difference. Another policyholder with a lender requirement to purchase about $1.6 billion in coverage on a single building in midtown Manhattan was unable to obtain sufficient terrorism coverage in an all-risk policy in 2008. This policyholder purchased an all-risk policy that excluded terrorism risk and assembled property coverage for terrorism risk in the form of a $250 million stand-alone policy and about $1.3 billion in a newly formed captive insurance company. 
Although these examples show policyholders may create captive insurance companies for the sole purpose of insuring terrorism risk, this approach may not be typical of the way in which captives are used. Representatives of two large insurance brokers said most companies simply add terrorism risk to captives that already have been established to cover other insurance risks, such as environmental and product-recall risks. While TRIA limits insurers’ potential losses from a terrorist attack, the efforts of insurers to manage the remaining risks they face appear to be the primary reason that certain policyholders experienced initial challenges in obtaining desired amounts of coverage at prices they viewed as reasonable. To mitigate their risks, many insurers set limits on the amount of coverage that they would provide to policyholders in confined geographic areas within a city, such as downtown locations or financial districts where many large buildings are clustered, or in specific areas of cities considered to be at high risk of attack. According to a variety of sources we contacted, these limits generally make obtaining coverage more difficult or costly for certain policyholders in these areas. Further, industry participants and analysts said that the availability of reinsurance and the views of credit rating agencies also may limit the supply and increase the price of terrorism insurance coverage in certain high-risk cities. Representatives from several insurance companies we contacted said that despite the TRIA financial backstop, they remain significantly concerned that a future terrorist attack would result in substantial losses. In the event of another terrorist attack, industry participants said that certain large insurers may face TRIA deductibles that would result in losses of billions of dollars. For example, one of the largest insurers providing commercial property coverage would face a $5 billion TRIA deductible based on 2007 data. The representative of one large insurer said that the company’s TRIA deductible was three times the net losses the company suffered due to the September 11 attacks. Furthermore, even a terrorist attack that caused losses below the $100 million TRIA program trigger could cause substantial losses to a small insurer. For example, according to the representative of one small insurer, the company’s surplus might be exhausted by paying the entire loss. Insurers said that they seek to mitigate potential losses from a single terrorism attack by limiting the amount of property coverage that they offer in confined geographic areas within cities. For example, some insurers told us that they would not insure certain types of properties, buildings over a certain size, or buildings near others that might be considered terrorist targets. In addition, several large insurers and brokers told us that insurers limit the terrorism insurance they provide in these areas to amounts well below their TRIA deductible. To help insurers determine how much risk, or coverage, they can write in any given location, several industry participants we interviewed said insurers often use computer models to estimate the effect, or severity, of terrorist attacks on their existing book of business. Using models available from risk-modeling firms, insurers can map the locations of properties they cover as well as other types of coverage they provide in the area, such as building contents, business interruption, or workers’ compensation. 
Therefore, insurers can consider the extent to which one terrorist attack could trigger losses among multiple lines of insurance. The models also can map the locations of nearby properties considered to be potential terrorist targets. With these mapped locations, an insurer is then able to identify areas where it has the greatest aggregated exposure within a city. The modeling program places a circle around a specific location, such as a building in the insurer’s book or a potential terrorist target, and aggregates the amount of exposure an insurer has within this defined area. These models take into account the severity of various attack scenarios on properties in the area (for example, a 5- or 10-ton truck bomb) and allow users to quantify potential losses under different attack scenarios. Insurers we interviewed noted that they are not as comfortable with the estimates of the probability, or frequency, of an attack from these models and, therefore, make more limited use of this information. While insurers and risk-modeling firms have access to large historical databases and scientific studies of the frequency and severity of natural catastrophes, such as hurricanes, the data on terrorist attacks are limited. Furthermore, according to industry analysts, the tactics, strength, and effectiveness of terrorist groups can be very unpredictable, so predicting the frequency of such attacks is very difficult and perhaps impossible. For example, terrorists might respond to increased security measures in one area by shifting attention to more vulnerable targets in another. Without more information, industry analysts note that it is difficult for modeling firms to make projections about the capability and opportunities of terrorists to undertake future attacks. While insurers find estimates of the probability of a terrorist attack of limited use, they often use the estimates of the severity of potential attacks in determining the amount of coverage they are willing to provide. Considering potential attack scenarios and estimated losses from the models, insurers impose internal limits, referred to here as aggregation limits, on the amount of all types of coverage they will offer in defined areas. Depending on the amount of capital and risk tolerance of the company, insurers determine the amount of coverage they are willing to provide in defined geographic areas within a city, such as in 250-foot, 500-foot, or quarter-mile circles around certain landmarks or areas where the insurer has high concentrations of risk. Insurers then monitor the amount of coverage that they provide in these areas on an ongoing basis to ensure that they do not exceed their aggregation limits. As shown in figure 3, an insurer might decline to provide any coverage for a new property since adding the property to the book of business would exceed the insurer’s aggregation limit on exposures within the defined area. Alternatively, an insurer might charge a higher price or offer a lower coverage limit if adding the property would exceed the aggregation limit. The amount of coverage insurers are willing to provide in these defined areas may change frequently as new clients or properties are added to or removed from their books of business. An insurer may have available capacity in a specific area one month, but be near its limit the next. 
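As a rough sketch of the aggregation-limit check described above, and not of any particular vendor's model, the following Python example sums an insurer's existing exposures within a fixed radius of a location of concern and tests whether a prospective new property would push the aggregate over the insurer's internal limit. The coordinates, exposure amounts, radius, and limit are assumptions chosen for illustration.

```python
import math

# Simplified aggregation-limit check: sum exposures within a circle around a
# target location and test whether a new property would exceed the internal limit.
# Locations (planar offsets in feet), exposures, and the limit are illustrative only.

def within_radius(loc_a, loc_b, radius_ft):
    return math.dist(loc_a, loc_b) <= radius_ft

def can_write(new_property, book, target, radius_ft=500, limit=250e6):
    """Return True if adding new_property keeps aggregate exposure within the limit."""
    aggregate = sum(p["exposure"] for p in book
                    if within_radius(p["location"], target, radius_ft))
    if within_radius(new_property["location"], target, radius_ft):
        aggregate += new_property["exposure"]
    return aggregate <= limit

book = [{"location": (0, 0), "exposure": 150e6},
        {"location": (5000, 0), "exposure": 90e6}]    # second property lies outside the circle
new_risk = {"location": (300, 100), "exposure": 120e6}
print(can_write(new_risk, book, target=(0, 0)))       # False: $150M + $120M exceeds the $250M limit
```

In practice, the decision would also reflect pricing, attack-scenario severity estimates, and the other lines of coverage the insurer has in the same area, as described above.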
For example, one policyholder noted that her real estate investment company contacts its insurer before considering acquiring a new property to determine if the insurer has capacity where the new property is located. Although the insurance company may decide it can provide property insurance for the building at the time of the request, the policyholder said that when the acquisition is completed several months later, the insurance company may no longer have the capacity available to insure the building. In that case, the policyholder said that the company might have to purchase a stand-alone terrorism policy for that particular building, which the policyholder reported as being more expensive than simply adding it to the existing portfolio. As a result, the policyholder said it might no longer be profitable for the company to acquire the new building. In some cases, this policyholder said the company has canceled or deferred an acquisition until it simultaneously disposed of a building in the same area to be sure that the insurer would have capacity available for the new building. However, several other policyholders we interviewed said that any concern about the availability of insurance has not affected their companies’ acquisitions or development projects. Insurers and other industry analysts cited the limited availability of reinsurance as another factor influencing insurers’ willingness to provide terrorism coverage in certain areas. Reinsurance plays a crucial role in insurance markets by permitting primary insurers to transfer some of the risks that they incur in offering coverage. In so doing, reinsurance may allow primary insurers to offer more coverage than otherwise would be the case while mitigating potential losses. Insurers and other industry participants we contacted said that reinsurance for terrorism risk, which largely was unavailable after September 11, continues to be expensive and available in limited amounts. In a 2004 report, we found that reinsurers had reentered the terrorism insurance market cautiously, but that the amount of coverage offered to primary insurers was limited and the premium rates were viewed as high. During our current work, reinsurers and industry analysts told us that reinsurance capacity for terrorism has continued to increase for a variety of reasons, including an influx of new capital into the industry, the absence of another terrorist attack, and improvements in insurers’ ability to underwrite the risk. However, insurance brokers and large insurers with significant exposures in urban areas told us that terrorism often still is excluded in reinsurance contracts and that insurers have been able to purchase only limited amounts of very expensive coverage. A recent Congressional Budget Office report similarly found that the ability of primary insurers to transfer terrorism risk to reinsurers is limited. As has been the case with primary insurers, the efforts of reinsurers to manage their aggregation levels appear to be why the coverage that they offer for terrorism is limited. The provision in TRIA requiring insurers to offer terrorism coverage at terms and conditions that do not differ materially from other coverage does not apply to reinsurance transactions, so these companies have discretion in deciding how much terrorism coverage to offer to primary companies. Reinsurance company representatives told us that the location of the insured risks is an important factor that influences whether they will offer reinsurance and at what price. 
For example, one reinsurance company representative said that the company was less willing to write contracts covering properties in cities viewed as being at high risk of terrorist attack. Others said that while their companies still would be willing to reinsure an insurer’s book of business with concentrations of risk in multiple high-risk cities, they might offer more expensive coverage to compensate for the increased risk and the increased capital they need to maintain to back up the risk. Insurers and reinsurers cited the views of rating agencies on the amount of capital insurers allocate to terrorism risk and the location of risks they insure as other factors influencing their willingness to provide terrorism coverage. Rating agencies assess the financial strength of companies and the credit quality of their obligations. Maintaining a high rating can be very important for an insurance company’s business because a firm with a low rating may, among other things, pay a higher interest rate on its debt. In addition, several policyholders and lenders told us many lenders that require their mortgagees to carry terrorism coverage also require that they use only highly rated insurers. A variety of industry participants and analysts told us that rating agencies’ views can strongly influence the amount of capacity insurers decide to allocate to terrorism risk, affecting how much coverage they provide to policyholders. For example, one reinsurance industry analyst noted that the amount of capital rating agencies required insurers to maintain to support terrorism risk was significant. This analyst said that these requirements may encourage insurers not to offer this type of business because it is difficult to maintain large amounts of capital and earn an adequate return on the money. In conducting their assessments, representatives of the rating agencies we interviewed said they look closely at insurers’ terrorism exposures. They request specific information about the types of policies insurers write, the risks in their books of business, the steps insurers take to manage their risks, and whether they have concentrations of risk in any areas, including large urban areas or cities considered to be high risk. With workers’ compensation insurers, the rating agencies request information about the number of employees at different locations across the different insureds. Rating agency representatives said that, as a result of such discussions about their ratings, some companies have purchased additional reinsurance or divested risk. Insurance industry participants and analysts did not reach consensus on whether TRIA should be modified or additional actions taken to increase the availability of terrorism insurance coverage. They cited a variety of advantages and disadvantages associated with five proposals that have been offered in legislation, discussed in our prior reports, or suggested by industry participants to increase the availability and perhaps limit the cost of terrorism insurance. These proposals include lowering insurers’ TRIA deductibles following large terrorist attacks, permitting insurers to establish tax-deductible reserves for future terrorism losses, forming a group of insurance companies to pool assets for terrorism losses, facilitating the issuance of onshore catastrophe bonds through changes in the tax code, and limiting certain state regulations and requirements. 
We note that improvements in terrorism insurance coverage and pricing that might result from the adoption of some of these proposals (such as tax-deductible reserves, insurance pools, and catastrophe bonds) likely would take place over the longer term and that such proposals could increase the federal government’s exposure to terrorist-related losses or otherwise reduce federal revenues. One recent legislative proposal to increase the availability of terrorism insurance coverage involved lowering insurers’ TRIA deductibles for future terrorist attacks after they experience losses from an attack. Under this proposal, if there were a terrorist attack that resulted in more than $1 billion in damages, the insurer deductible under TRIA immediately would be reduced to 5 percent (from 20 percent) for those insurers suffering losses in the attack. Table 1 below shows the potential effect on the deductibles of five large insurers under this proposal. Because this proposal was designed to significantly reduce potential industry losses, some insurers and industry participants we contacted said that it might make them more willing to offer coverage in areas affected by a future attack. As a result, supporters of the proposal argue that it would stabilize insurance markets in affected areas and facilitate rebuilding and recovery efforts. Moreover, the representative of one large insurer said that if the deductible were lowered to 5 percent, the insurer immediately would be willing to write more terrorism coverage, especially in downtown areas of larger cities. Since the insurer would be able to access the federal reimbursement at a lower level, the insurer’s potential losses on its current book of business would be lower, thus freeing up additional capacity for terrorism coverage without having to purchase reinsurance from the private market to cover the additional risk. While other insurers and industry participants we contacted were not necessarily opposed to this proposal, they remarked that its effects might be limited. First, as discussed previously, some large insurers already try to limit potential losses associated with a future terrorist attack to levels well below their current TRIA deductible of 20 percent of direct premiums. Therefore, it is not clear what effect lowering the TRIA deductible would have for such insurers in terms of the terrorism coverage that they are willing to offer. Second, as also discussed earlier, there may be significant market disruptions associated with another terrorist attack, which could limit coverage availability even if the federal government did assume greater liability for associated losses. For example, reinsurers, which are not subject to TRIA’s requirements to make terrorism coverage available, again might limit the coverage they were willing to provide in the wake of another attack, which might limit the amount of coverage that primary insurers could offer. In addition, as happened following Hurricane Katrina, ratings agencies might increase the capital requirements or other standards insurers must follow to maintain and improve their ratings, potentially further limiting insurers’ willingness to continue providing terrorism coverage in certain areas. Further, we note that lowering the TRIA deductible would increase the federal government’s potential liability for terrorism-related losses. Another option would permit insurers to establish tax-deductible reserves, over a period of years, to cover the potential losses associated with future terrorist attacks. 
Under current federal tax law, insurers can take a deduction for losses that already have occurred and for setting aside reserves for fair and reasonable estimates of the amount the insurer will be required to pay on future losses. However, reserves for uncertain future losses are not currently tax deductible. Because the size and timing of terrorist attacks are uncertain, any reserves set aside for potential terrorism losses would be taxed as corporate income in the year in which they were set aside. We have reported previously that amending the tax code and permitting insurers to establish tax-deductible reserves could provide insurers with financial incentives to increase their capital and thereby expand their capacity to cover catastrophic risks, such as terrorism. We also reported that supporters of this proposal argued that establishing such reserves would lower the costs associated with providing coverage and encourage insurers to charge lower premiums, which could increase coverage among policyholders. In addition, industry participants we interviewed said if insurers were able to establish tax-deductible reserves, a large terrorist attack could cause less of a strain or shock to industry surplus, or capital, which could help prevent insurer insolvencies in the wake of an attack. However, several important challenges and tradeoffs may be associated with this option. For example, some industry participants we contacted said it would be difficult for insurers to determine the amount of funds to contribute to such a reserve each year because of the significant challenges associated with estimating the frequency of potential terrorist attacks. Without a reliable method for conducting such estimates, insurers would lack an analytical basis for reserving funds to cover potential losses. Furthermore, we have reported that overall terrorism insurance capacity might not increase because insurers might use the reserves as a substitute for reinsurance that may have been purchased previously to manage the risks of potential terrorist attacks (reinsurance premiums are already tax-deductible). Because reserving also would convey tax advantages, some insurers might feel that they could limit the expense of purchasing reinsurance. To the extent that insurers reduced their reinsurance coverage in favor of tax-deductible reserves, the industry’s overall capacity would not necessarily increase. Insurers also might use the reserves to shield a portion of their existing capital (or retained earnings) from the corporate income tax or inappropriately use tax-deductible reserves to manage their financial statements by increasing the reserves during good economic times and decreasing them in bad times. Finally, we note that this proposal likely would reduce federal tax revenues. Another proposal involves establishing a group of insurance companies to pool their assets, which may allow them to provide a greater amount of terrorism insurance coverage than could be provided by individual companies acting independently of one another. Insurance pools typically are formed to cover large risks, such as hurricanes, which traditional insurance markets do not address readily. 
For example, a pool could be created at the national level or state level; it could involve mandatory or voluntary participation from insurers; it could be prefunded or postfunded; and if losses exceed the reserves of the pool, the government could provide a financial guarantee or the pool could draw on some other method, such as issuing bonds or borrowing funds, to make up any shortfall. Table 2 shows that insurance pools have been established in Florida to cover hurricane risks and in the United Kingdom for terrorism risks. In addition to these programs, one large insurance broker, in consultation with several industry groups, has developed a proposal to form a $40 billion national reinsurance pool for commercial property terrorism risk. Under this proposal, all insurance policies would cover losses from acts of terrorism, and insurers would continue to charge policyholders their own rates for terrorism coverage in accordance with state laws. Insurers would purchase reinsurance coverage from the pool, which would determine its reinsurance premium rate based on analysis of a range of potential losses in urban, suburban, and rural areas. The claim reserves of the pool would be tax-exempt, allowing it to accumulate reserves tax-free from which to pay future losses. In the event of a certified terrorist attack, the insurance industry would pay 5 percent of losses and the pool would pay 95 percent of losses up to $40 billion. In the event the pool did not have the resources to pay its share of losses, the pool would be funded through the issuance of bonds. The federal government would be responsible for losses in excess of $40 billion up to $100 billion. According to the plan, losses above $100 billion would be reviewed by Congress. Some industry participants we contacted expressed general support for an insurer pool to enhance the availability of terrorism insurance coverage. For example, they said a pool could allow insurers to transfer a significant portion of their terrorism-related risk to an outside entity over time, and they could use the accumulated surplus in the pool to provide higher amounts of coverage in the future. Industry participants also noted that a national pool would spread out terrorism risk across a wider base of policyholders of varying risk levels than individual insurers could do alone and would allow insurers to better manage their total accumulations of terrorism risk. However, several challenges and disadvantages also may be associated with this option. For example, as is the case with tax-deductible reserves, it may be difficult to develop a reliable basis for determining the appropriate size of the pool because of the inherent challenges in estimating the frequency of terrorist attacks. Moreover, other information suggests that insurance pools would not necessarily increase the industry’s capacity or ability to offer additional terrorism insurance coverage. According to a study by a global consulting firm on a proposed workers’ compensation pool for terrorism risk, as well as other industry participants, a reinsurance pool might not create new industry capacity or bring in additional capital to support writing more business. The study notes that if the industry as a whole does not have enough capital to manage terrorism risk, then neither can an industry pool that simply combines existing industry capital in a new structure. 
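For illustration only, the following sketch shows one reading of the layering summarized above for the broker-proposed pool: insurers pay 5 percent of losses, the pool pays 95 percent up to $40 billion, the federal government covers remaining losses up to the $100 billion mark, and anything beyond that is left to congressional review. The exact attachment points between the layers are an assumption based on the summary above rather than a definitive reading of the proposal, and the loss amount is hypothetical.

```python
# One possible reading of the proposed pool's loss layering; the example loss
# amount is hypothetical and the layer boundaries follow the summary above.

def allocate_loss(total_loss, pool_cap=40e9, federal_mark=100e9):
    industry = 0.05 * total_loss                        # insurers pay 5 percent of losses
    pool = min(0.95 * total_loss, pool_cap)             # pool pays 95 percent, capped at $40 billion
    uncovered = max(total_loss - industry - pool, 0.0)
    federal = min(uncovered, max(federal_mark - industry - pool, 0.0))
    review = max(total_loss - federal_mark, 0.0)        # losses above $100 billion go to Congress
    return {"industry": industry, "pool": pool, "federal": federal, "review": review}

# A hypothetical $60 billion certified loss: insurers pay $3 billion, the pool pays
# its full $40 billion, the federal government covers the remaining $17 billion,
# and nothing is left for congressional review.
print(allocate_loss(60e9))
```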
Furthermore, we note that if premiums paid to the pool were tax deductible, as traditional reinsurance premiums are, insurers simply might substitute pool reinsurance for traditional reinsurance, as might be the case with tax-deductible reserves for individual insurers. Finally, if the pool were a tax-exempt entity, tax-deductible reserves for an insurance pool could reduce federal revenues. Another proposal is that the federal government establish certain tax advantages for catastrophe bonds, which supporters argue could facilitate their use for covering terrorist attacks. Catastrophe bonds generally have been issued to cover natural events, such as earthquakes or hurricanes, rather than terrorist attacks and historically have been created in offshore jurisdictions where they are not subject to income or other tax. Under this proposal, tax treatment of catastrophe bonds would be similar to the treatment received by certain issuers of asset-backed securities, which generally are not subject to tax on the income from underlying assets that is passed on to investors. We previously reported that the total costs of issuing catastrophe bonds, including transaction costs such as legal fees, significantly exceed the costs of traditional reinsurance, which may have limited the expansion of the market. Facilitating the creation of onshore transactions by changing the tax code to encourage issuance of catastrophe bonds within the United States could reduce transaction costs. Some insurance industry participants we contacted said that catastrophe bonds, by tapping into the securities markets, offered the opportunity to expand the pool of capital available to cover terrorism risk. They also said that amending the tax code to facilitate the bonds’ issuance in the United States could be beneficial in achieving that goal. However, many industry participants said, consistent with findings in our previous reports, that the development of catastrophe bonds for terrorism risks involves significant challenges. These challenges may greatly exceed any benefit that would be derived from amending the tax code. As with other options discussed previously, the industry participants said that because of the difficulties associated with estimating the frequency of terrorist attacks, it would be very difficult to structure a catastrophe bond for terrorism that would be acceptable to investors. Data are available on the historical frequency and severity of natural events, such as hurricanes and earthquakes, which help investors assess the risks that they face in purchasing catastrophe bonds for such risks. Without similar data for terrorist attacks, it is unlikely that a viable market for catastrophe bonds will be established regardless of revisions to the tax code that are designed to help ensure such an outcome. We also have previously reported that the federal government could lose tax revenue under this option and that the proposed changes to the tax code might create pressure from other industries for similar tax treatment. Some industry participants have suggested that states could take certain actions to revise their insurance statutes or regulations to increase insurer capacity for terrorism risk, including amending rate regulation policies and laws on coverage requirements. 
While, according to information from NAIC, most state insurance regulators do not review rates for large commercial property/casualty insurance contracts, several insurance company representatives said that their ability to charge risk-based prices for terrorism coverage was constrained by insurance statutes and regulations in certain states and that the prices these states approved did not reflect the risk to which the insurers were exposed. Additionally, a few industry participants said that terrorism insurance availability may be limited in states that have adopted the Standard Fire Policy (SFP). Under the SFP, property insurers are required to cover losses from fire regardless of the cause of the fire, including a terrorist attack, even if the policyholder declined terrorism coverage. Consequently, the industry participants said that the SFP influences the amount of property insurance that insurers provide, including terrorism insurance, and the premiums that they charge in states that have adopted it. Therefore, some insurers have suggested that states amend their SFP statutes so that insurers would not be responsible for fire losses resulting from terrorism. According to a representative from NAIC, while most states do not regulate prices for commercial property risks, in states where prices are regulated, regulators are unlikely to disapprove insurers’ rate requests as long as the requests are generally in line with current market prices, because insurers are in a better position than regulators to judge the appropriateness of the price. In addition, other available information suggests that state actions on rate regulation or coverage requirements may have a limited effect on the availability of terrorism insurance coverage. As discussed in this report, some policyholders, particularly in Manhattan, may face initial challenges in obtaining terrorism coverage at prices viewed as reasonable. However, according to state regulatory officials, New York is one of the states that generally does not regulate premium rates for large commercial properties, so state regulation does not appear to be a significant factor in the city where insurance challenges appear to be most pronounced. On the other hand, unlike several other states, New York and California have not revised the SFP to limit insurer liability resulting from the fires associated with terrorist attacks, according to information from industry analysts. While the SFP may therefore have an influence on the availability of terrorism insurance in such locations as Manhattan and San Francisco, it is difficult, if not impossible, to isolate its influence from that of other factors in these cities, particularly the potential losses associated with attacks on high-value buildings that may be in proximity to one another. We provided a draft of this report to the Department of the Treasury and NAIC for their review and comment. In oral comments, Treasury and NAIC officials said that the report was informative and useful. They also provided technical comments that were incorporated where appropriate. We are sending copies of this report to the Department of the Treasury, NAIC, and other interested committees and parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Our objectives were to describe (1) whether the availability of terrorism insurance for commercial properties is constrained in any geographic markets and the effect of any constraints on pricing and coverage amounts, (2) factors limiting insurers’ willingness to provide coverage, and (3) advantages and disadvantages of some public policy options to increase the availability of property terrorism insurance. To assess whether the availability of terrorism insurance for commercial properties is constrained in any geographic markets and the effect of any constraints on pricing and coverage amounts, we reviewed relevant literature and compiled and analyzed available data on insurance and reinsurance industry capacity, terrorism insurance take-up rates, and terrorism insurance pricing. We also interviewed representatives of more than 100 organizations with knowledge of the nationwide terrorism insurance market and with expertise in specific geographic markets. Entities with a national perspective included insurer and policyholder trade associations, individual policyholders, national insurance and reinsurance brokers, and insurance and reinsurance companies. We obtained information on specific geographic markets from state regulators, regional insurance brokers and insurance companies, and local property owners. The geographic markets we studied represent locations considered to be at high, moderate, and low risk of exposure to terrorist attacks: Atlanta, Boston, Chicago, New York, San Francisco, and Washington, D.C. We selected these markets based on rankings of locations by terrorism risk exposure from the Insurance Services Office, an insurance industry analytics firm; these rankings account for both the risk of terrorist attacks and the potential for associated losses. We spoke with representatives of policyholders that own hundreds of properties nationwide, including more than 200 properties in New York City, more than 100 properties in Washington, D.C., at least 30 properties each in Chicago and San Francisco, about 30 properties in Boston and 60 in Atlanta, and numerous properties across the United States, including in major cities such as Los Angeles and Houston. These properties included large office towers in major U.S. cities, properties in proximity to high-profile federal buildings, hotels, industrial buildings, hospitals, sports stadiums, and residential properties in locations throughout the United States. The policyholders also represented a variety of industries that included real estate, transportation, financial services, health, hospitality, and entertainment. In addition to one-on-one interviews, we also conducted group discussions with representatives of 14 policyholders at the annual Risk and Insurance Management Society conference in San Diego, California, in April 2008. Although we selected industry participants to provide broad representation of market conditions geographically and by industry, their responses may not necessarily be representative of the universe of insurers, insurance brokers, policyholders, and regulators. As a result, we could not generalize the results of our analysis to the entire national market for commercial property terrorism insurance. 
We determined that the selection of these sites and participants was appropriate for our objectives and that this selection would allow coverage of geographic areas, key markets, major insurers and policyholders, and other organizations related to terrorism insurance so as to generate valid and reliable evidence to support our work. To identify the factors limiting insurers’ willingness to provide terrorism insurance coverage, we selected large, national insurance companies to interview based on their market share in the states we studied. These national insurance companies held from 37 to 52 percent of the market share in the states we studied, according to information provided by the Insurance Information Institute. In addition, we interviewed representatives of regional insurance companies in our selected markets. We also spoke to representatives of seven reinsurance companies (including two of the largest worldwide reinsurers), as well as risk-modeling firms, state regulators, and two credit rating agencies. Although we selected insurers to provide broad representation of size and geographic scope, we could not generalize the results of our analysis to the entire population of commercial property insurers. To explore the advantages and disadvantages of some public policy options to increase the availability of property terrorism insurance, we relied on our interviews with the industry participants described above. We also interviewed academics who have written on the topic of terrorism insurance, and representatives of research organizations and consumer interest groups. We selected the option that would reduce insurers’ TRIA deductibles in areas affected by a future large terrorist attack from two recent legislative proposals. We selected the other options (allowing insurers to establish tax-deductible reserves, forming a group of insurance companies to pool assets, facilitating the use of catastrophe bonds through changes in the tax code, and amending state regulations or statutes) from literature we reviewed, our prior reports, and interviews we conducted with industry participants. The selected options were representative of the range of possible options. We did not attempt to evaluate the prospective effect of these options and, therefore, did not come to any conclusions about the advisability of implementing these options. We conducted this audit in Atlanta, Georgia; Boston, Massachusetts; Chicago, Illinois; New York, New York; San Diego, California; San Francisco, California; and Washington, D.C., from January 2008 to September 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Wesley M. Phillips, Assistant Director; Farah Angersola; Joseph A. Applebaum; Rudy Chatlos; Andrea Clark; Katherine Bittinger Eikel; Barry Kirby; Rich LaMore; Marc Molino; Jill M. Naamane; Linda Rego; Barbara Roesmann; Kathryn Supinski; Thomas Taydus; and Shamiah Woods made key contributions to this report.
The Terrorism Risk Insurance Act of 2002 (TRIA) specifies that the federal government assume significant financial responsibility for insured losses on commercial properties resulting from future terrorist attacks. While TRIA has been credited with stabilizing markets for terrorism insurance after the September 11, 2001, attacks, questions remain as to whether certain policyholders, especially those located in large urban areas viewed as being at high risk of attack, may still face challenges in obtaining coverage. GAO was asked to conduct a study to describe (1) whether the availability of terrorism insurance for commercial properties is constrained in any geographic markets, (2) factors limiting insurers' willingness to provide coverage, and (3) advantages and disadvantages of selected public policy options to increase the availability of such insurance. To address these objectives, GAO analyzed available data and interviewed industry participants, including those with expertise in specific geographic markets considered to be at high, moderate, or low risk of attack (Atlanta, Boston, Chicago, New York, San Francisco, and Washington, D.C.). GAO provided a draft of this report to the Department of the Treasury and the National Association of Insurance Commissioners (NAIC). Treasury and NAIC said the report was informative and useful. While some owners of high-value properties in major cities may face initial challenges obtaining terrorism insurance coverage compared with most policyholders nationwide, they generally have reported that they could meet current coverage requirements through a variety of approaches. Many industry participants said that terrorism insurance is currently available nationwide at prices viewed as reasonable and that the TRIA program was a key reason for these favorable conditions. However, some policyholders that own large, high-value properties in densely built urban areas viewed as at high risk of attack, particularly in Manhattan and to a lesser extent in Chicago and San Francisco, may still face initial challenges obtaining desired amounts of coverage at prices viewed as reasonable, according to industry participants. To address these challenges, some policyholders purchased coverage from a large number of insurers, which can be a time-consuming and complicated process for policyholders and their insurance brokers. Others purchased coverage in a separate policy (rather than as part of an overall property insurance package) which may be more costly, or self-insured. While TRIA specifies that the federal government assume substantial financial responsibility for insured losses associated with future terrorist attacks, the steps insurers take to manage the risks they do face appear to be the primary reason some policyholders face challenges in obtaining coverage. Insurers said they seek to mitigate potential terrorism losses by limiting the amount of property coverage that they offered in specific areas of cities, such as downtown locations or areas considered to be at high risk of attack. These risk mitigation efforts generally make obtaining coverage more difficult or costly for policyholders with high-value properties in these areas, according to a variety of sources GAO contacted. Industry participants also said that the availability of reinsurance (insurance for insurers) and the views of rating agencies can limit the availability of coverage in such cities. 
Industry participants had no consensus on whether TRIA should be modified or additional actions taken to increase the availability of terrorism coverage, and identified advantages and disadvantages of selected policy proposals that have been included in legislation, discussed in prior GAO reports, or suggested by industry participants to increase such coverage. A proposal to increase the federal government's current responsibility under TRIA for the insured losses associated with a future attack could make insurers more willing to offer coverage in affected areas. For example, one large insurer said that the proposal might make the company more willing to immediately offer additional coverage in cities viewed as at high risk of attack. However, any such benefits might be limited for reasons including the widespread insurance market disruptions that may result from another attack. This proposal, along with several other proposals analyzed in the report, also would increase the federal government's exposure to the losses associated with terrorist attacks, which is already 85 percent of losses up to $100 billion annually, after an industry deductible.
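The loss-sharing arrangement described above can be illustrated with a small numerical sketch. The following Python fragment is illustrative only: the 85 percent federal share and the $100 billion annual cap are the figures cited in this report, while the insurer's deductible and the size of the insured loss are hypothetical inputs, and program details such as the event trigger and post-event recoupment are ignored.

def tria_loss_shares(insured_loss, insurer_deductible,
                     federal_share_rate=0.85, annual_cap=100e9):
    """Illustrative split of an insured terrorism loss between an insurer
    and the federal government, under a simplified reading of TRIA.

    The insurer pays losses up to its deductible; above the deductible,
    the federal government pays federal_share_rate of the remainder
    (limited here by the annual cap) and the insurer pays the rest.
    """
    excess = max(insured_loss - insurer_deductible, 0.0)
    federal = min(federal_share_rate * excess, annual_cap)
    insurer = insured_loss - federal
    return insurer, federal

# Hypothetical example: a $2 billion insured loss against a $500 million deductible.
insurer_paid, federal_paid = tria_loss_shares(2.0e9, 0.5e9)
print(f"Insurer pays:  ${insurer_paid / 1e9:.2f} billion")   # deductible plus 15 percent of the excess
print(f"Federal share: ${federal_paid / 1e9:.2f} billion")   # 85 percent of the excess

Under these simplified assumptions, a larger deductible shifts more of a given loss back to the insurer, which helps explain why the proposal to reduce insurers' TRIA deductibles in affected areas could make insurers more willing to offer coverage there.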
Government agencies at all levels have already implemented a broad array of e-government applications: through the Internet, government agencies collect and disseminate information and forms; government and businesses order and pay for goods and services; and businesses and the public apply for licenses, grants, and benefits, and submit bids and proposals. Despite this substantial progress, the federal government has not yet taken full advantage of the potential that electronic government offers. As we have previously testified, the government faces significant challenges in this area, including sustaining executive leadership, protecting personal privacy, implementing appropriate security controls, using enterprise architectures effectively, and managing IT human capital. Recognizing the magnitude of challenges facing the federal government, the Congress has enacted important legislation to guide the development of e-government. In 1998, the Government Paperwork Elimination Act (GPEA) was enacted, establishing a requirement that by October 21, 2003, federal agencies provide the public, when practicable, the option of submitting, maintaining, and disclosing required information electronically. More recently, the Congress passed the E-Government Act of 2002, which includes provisions to promote the use of the Internet and other information technologies to provide government services electronically; strengthen agency information security; and define how to manage the federal government’s growing IT human capital needs. In addition, this act established an Office of Electronic Government within OMB to provide strong central leadership and full-time commitment to promoting and implementing e-government. The executive branch has also acted to enhance and accelerate the development of electronic government. The President made e-government expansion one of five top priorities in his fiscal year 2002 management agenda, which outlines a number of specific electronic government projects. For example, the FirstGov Web portal—which is intended to serve as a single consolidated source for government services to citizens— was targeted for expansion and improvement to offer services better organized according to citizens’ needs. Also targeted for enhancement was the FedBizOpps portal, designed to be a single point of entry for information about federal government procurement opportunities. Further, the agenda endorsed the establishment of a federal public key infrastructure to ensure that electronic transactions with and within the federal government would be private and secure. A major element of the President’s management agenda was establishment of the Quicksilver Task Force, which was charged with identifying (1) systematic barriers that had blocked the deployment of e-government advances and (2) electronic government projects that could deliver significant productivity and performance gains across government. Together, the federal government’s e-government initiatives are expected to provide high-quality customer services regardless of whether the citizen contacts the agency by phone, in person, or on the Web; reduce the expense and difficulty of doing business with the government; cut government operating costs; provide citizens with readier access to government services; increase access for persons with disabilities to agency Web sites and E- government applications; and make government more transparent and accountable. 
In its e-government strategy, released in 2002, OMB stated that the 25 e- government initiatives were selected on the basis of (1) value to citizens, (2) potential improvement in agency efficiency, and (3) likelihood of deploying within 18 to 24 months. The selected initiatives would achieve their results by simplifying and unifying agency work processes and information flows, providing one-stop services to citizens, and enabling information to be collected on line once and reused, rather than being collected many times. The initiatives are aimed at providing a wide variety of services. For example, some are focused on setting up Web sites or portals that channel information more effectively to citizens, businesses, or other government entities. Recreation One-Stop is one such example, a Web portal for a single point of access to information about parks and other recreation venues at the federal, state, and local levels. One-Stop Business Compliance provides an analogous service to businesses, giving them a single Web site to consult regarding the multitude of government regulations that may affect their activities. Other initiatives strive for more ambitious services that may not necessarily rely on the Internet for delivery. SAFECOM, for example, seeks to impose order and standards on wireless communications among emergency responders across all levels of government. The e-Payroll initiative is intended to consolidate the federal government’s many incompatible payroll systems into just two that would service all government employees. As shown in figure 1, OMB has divided these efforts into five broad categories that reflect the different customer groups targeted by each of the initiatives: 1. government to individual citizens, 2. government to business, 3. government to government, 4. internal efficiency and effectiveness, and 5. cross cutting. Government to individual citizens. One of the major benefits of on-line and Internet-based services is that they provide opportunities for greater citizen access to and interaction with the federal government. An example is GovBenefits.gov, a Web site designed to assist users in locating and determining potential eligibility for government benefits and services. Other initiatives in this category aim to improve customer service. USA Services, for example, is intended to deploy tools, such as call centers and coordinated E-mail systems linked to the FirstGov Web site, that will enable citizens to ask questions and receive responses from the federal government without having to know in advance which specific departments or bureaus have responsibilities related to their areas of interest. Government to business. Initiatives in this category seek to reduce the reporting burden on businesses by adopting processes that eliminate redundant data collection, provide one-stop access to information, and enable communication using electronic business standards, such as the Extensible Markup Language. The Expanding Electronic Tax Products for Businesses initiative, for example, seeks to reduce the number of tax- related forms businesses must file. The Federal Asset Sales initiative aims to create a single electronic interface for businesses to find and buy government assets. Government to government. The primary goal of these initiatives is to enable federal, state, and local governments to more easily work together to better serve citizens within key lines of business. 
For example, Geospatial One-Stop seeks to provide a single portal for accessing standardized and coordinated federal, state, and local geospatial data. The Disaster Management initiative seeks to provide federal, state, and local emergency managers on-line access to disaster management information, planning, and response tools. Internal efficiency and effectiveness. The initiatives in this category seek to improve the performance and reduce the costs of federal government administration by using e-business best practices. For example, the Integrated Acquisition Environment initiative seeks to consolidate business processes and information to facilitate cost-effective acquisition of goods and services across the federal government. Lastly, e-Travel is planned to streamline the administration of government travel by creating a governmentwide Web-based travel management process. Cross-cutting initiative. The e-Authentication initiative is to develop common interoperable authentication techniques to support all the other initiatives. Authentication refers to the critical process of confirming the identity of the participants in an electronic transaction. Without a means to satisfactorily establish identities, e-government transactions are too risky, and the potential of e-government to transform citizen services remains severely constrained. The initiative plans to provide authentication services through an electronic “gateway,” which will offer different assurance levels to meet the varying needs of the other projects. While several of the projects have already achieved tangible results, not all of them are making the same degree of progress. For example, some have had major management changes—management of the SAFECOM initiative, for example, was transferred from Treasury to the Federal Emergency Management Agency. Major management changes such as this have led to delays in project milestones and changes in objectives. We believe that fluctuations such as these indicate a need for oversight to ensure that the larger goal—to realize the full potential of e-government— is not jeopardized. When we reviewed project-planning documentation collected by OMB from each of the initiatives, we found indications that important aspects of some of the initiatives had not been addressed and that, for many of them, funding strategies and milestones were in a state of flux. These findings add urgency to our concern that the initiatives be carefully monitored to ensure that implementation challenges are identified and addressed as quickly as possible. I would like to go through some of the specific results of our analysis now. As part of OMB’s selection process, the Quicksilver task force screened over 350 project ideas during the summer of 2001 and selected 34 potential project proposals for more in-depth consideration. In September 2001, task force members developed brief (or “mini”) business cases for each of the 34 proposals. According to OMB officials, these mini business cases were to include all the information necessary to enable sound selection decisions. The task force reviewed the mini business cases and the final selections were made in October. We analyzed the mini business cases, which were prepared for 23 of the 25 initiatives, to determine whether they were complete. To conduct our analysis, we first identified e-government business case “best practices” as cited by federal agencies, private sector and academic researchers, and state and local governments. 
From these sources, we compiled the most frequently cited elements of a complete business case, such as a description of the proposed concept for improved future processes and a discussion of the benefits of implementing it. We also included elements identified by OMB as important to e-government business cases—whether an initiative is driven by identified customer needs and whether it contains a strategy for successful collaboration. As shown in figure 2, our analysis of the mini business cases showed that although they addressed some of the required elements, the majority of them did not include some key elements identified by OMB and best practice guidance. All the business cases we reviewed included a discussion of the expected benefits of the proposed initiative, and all but one included a discussion of the initiatives’ objectives and planned future conditions. However, only 9 of the 23 initiatives’ business cases discussed how customer needs were to be identified and addressed, and only 8 addressed collaboration among agencies and other government entities, even though OMB considered these elements fundamental to its e-government strategy. Mr. Chairman, addressing how a proposed project links to the needs of its potential customers is key to the success of that project, and should be discussed in the project’s business case. Without a plan to assess users’ needs, there is a greater risk that the project will focus too heavily on issues that customers do not consider important or disrupt processes that are already working well and accepted by users. In the case of the e- government initiatives, the result could be that the Internet sites and services created might not be useful to those customers they are intended to serve. Collaboration across agencies and other organizations is likewise a key component of most of the initiatives, and therefore a discussion of strategies for collaboration is essential to a complete e-government business case. As the government attempts to integrate services across organizations—particularly in cases where federal agencies overlap in providing similar services to customers—the issue of how agencies collaborate can determine an initiative’s success or failure. To help mitigate the risk of failure, the business case needs to provide a convincing argument that collaboration can be accomplished and a plan for how collaboration will be carried out. Let me point out that the initial “mini” business cases that we reviewed are not the latest ones in existence for the 25 initiatives. More extensive business cases were developed for each of the projects in fall 2002, in conjunction with the fiscal year 2004 budget process. We have not yet had an opportunity to review these documents. OMB required the managing partners of the e-government initiatives to prepare and submit work plans and funding plans in May 2002. We assessed the completeness of these plans, which provided the most up-to- date cost and schedule information available at the time of our review. To conduct our analysis, we identified best practices from GAO and OMB guidance for the effective oversight and implementation of IT projects and compared those best practice elements to the information contained in the May 2002 plans. In addition, several months later, we obtained updated status information from 23 of the initiatives’ project managers. 
According to the guidance we reviewed, project implementation documents should include components such as cost estimates, a schedule with milestones, identification of project deliverables, and an overall strategy for obtaining needed funding and staff resources. As shown in figure 3, four of the five best practice elements we identified were included in a majority of the project plans. Plans for all but two of the initiatives contained a schedule with milestones, and all the plans identified project deliverables. However, other best practice elements were not included in some of the plans. For example, only 9 identified a strategy for obtaining needed funds, and only 16 contained information about how staffing commitments would be obtained. In addition to the findings shown in figure 3, our analysis of the plans showed uncertainties about milestones for many of the initiatives. Ten of the 24 plans did not identify a final completion date for the initiative, resulting in inadequate information to determine whether they were moving forward in a timely manner. Further, 6 of the initiatives were not planned to be completed within the 18- to 24-month time frame originally established by OMB as a criterion for inclusion in its e-government effort. Accurate cost information was also generally lacking. The updated information we obtained from project managers in September 2002 on estimated costs revealed significant changes—changes of more than 30 percent—for about half of the initiatives. These changes, occurring within such a short period of time, rendered the funding plans outdated soon after they were developed. This uncertainty about how much the initiatives would cost, combined with the fact that only 9 of the 24 plans identified a strategy for obtaining these needed funds, led us to conclude that OMB was not receiving adequate information to properly oversee the e-government projects and ensure that they would have the resources to meet their objectives efficiently and economically. Given the challenges we’ve identified, OMB’s oversight role takes on critical importance. Each of the e-government initiatives needs a well-thought-out strategy for directly addressing its biggest challenges, such as getting relevant government agencies to effectively collaborate. And each also needs detailed and stable project plans, so that it can be held accountable for achieving realistic results within budget and according to schedule. Accordingly, in our report, we recommended that OMB take steps as overseer of the e-government initiatives to reduce the risk that the projects would not meet their objectives. Specifically, we recommended that OMB ensure that the managing partners for all the initiatives focus on customers by soliciting input from the public and conducting user needs assessments; work with partner agencies to develop and document effective collaboration strategies; and provide OMB with adequate information to monitor the cost, schedule, and performance of the e-government initiatives. In following up on our recommendations, we requested from OMB updated business cases that were submitted as part of the fiscal year 2004 budget process. These updated business cases should provide not only indications of whether key topics such as collaboration and customer focus are now being addressed, but also updated cost and schedule information. As noted in our report, OMB agreed to provide us this information once it was updated after release of the 2004 budget. However, we have not yet received this information.
OMB officials (from the Office of General Counsel and the Office of Information and Regulatory Affairs) stated earlier this week that the business cases still needed to be reviewed before they could be released to us. In summary, e-government offers many opportunities to better serve the public, make government more efficient and effective, and reduce costs. Legislation such as GPEA and the E-Government Act of 2002 has laid a strong foundation for building on these opportunities, and the federal government continues to make strides in taking advantage of them. Overall, few can argue that the 25 e-government projects are not worthy initiatives with commendable objectives. Nevertheless, many critical details remain to be fully addressed before the promise of e-government is fully realized. Because the 25 projects represent such a broad range of activities, it is difficult to gauge their progress collectively. Some of their objectives may be much easier to attain than others. However, our review of the initial planning documents associated with the projects led us to conclude that important aspects—such as collaboration and customer focus—had not been thought out for all the projects, and major uncertainties in funding and milestones were not uncommon. Priority should now be given to ensuring that the agencies managing these initiatives tackle these issues and gain cost and schedule stability so that they can ultimately succeed in achieving their potential. We believe that careful oversight—on the part of OMB as well as the Congress—is crucial to ensuring this success. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the subcommittee may have at this time. If you should have any questions about this testimony, please contact me at (202) 512-6222 or via E-mail at [email protected]. Other major contributors to this testimony included Shannin Addison, Barbara Collier, Felipe Colón, Jr., John de Ferrari, Neha Harnal, and Elizabeth Roach.

[Appendix: for each e-government initiative, a description, partner agencies, and OMB-reported performance metrics with targets.]
A key element of the President's Management Agenda is the expansion of electronic government (e-government) to enhance access to information and services, particularly through the Internet. In response, the Office of Management and Budget (OMB) established a task force that selected a strategic set of initiatives to lead this expansion. GAO previously reviewed the completeness of the information used for choosing and overseeing these initiatives, including business cases and funding plans. E-government offers many opportunities to better serve the public, make government more efficient and effective, and reduce costs. To achieve these goals, the 25 e-government initiatives selected by OMB's Quicksilver task force focus on a wide variety of services, aiming to simplify and unify agency work processes and information flows, provide one-stop services to citizens, and enable information to be collected on line once and reused, rather than being collected many times. For example, Recreation One-Stop is a Web portal for a single point of access to information about parks and other federal, state, and local recreation areas. Other initiatives are being pursued that do not necessarily rely on the Internet, such as the e-Payroll initiative to consolidate federal payroll systems. GAO's review of the initial planning documents for the initiatives highlights the critical importance of management and oversight to their success. Important aspects--such as collaboration and customer focus--had not been addressed in early program plans for many of the projects, and major uncertainties in funding and milestones were not uncommon. As shown by GAO's comparison of the content of the initiatives' business cases with best practices, all the business cases included key information, but many elements were missing. In particular, fewer than half addressed collaboration and customer focus, despite the importance of these topics to e-government strategy and goals. Similarly, the accuracy of estimated costs in the funding plans was questionable: between May and September 2002, these estimates for 12 of the initiatives changed significantly--by more than 30 percent. Accurate cost, schedule, and performance information is essential to ensure that projects are on schedule and achieve their goals.
As required by section 11 of the GAO Human Capital Reform Act of 2004 (Pub. L. No. 108-271), GAO is providing its final report not later than 6 years after the date of the Act’s enactment. This report provides, as required by the Act, (1) a summary of the information included in GAO’s annual reports for the fiscal year 2005 through 2009 reporting cycle for sections 2, 3, 4, 6, 7, 9, and 10; (2) recommendations for any legislative changes to sections 2, 3, 4, 6, 7, 9, and 10; and (3) any assessment furnished by the GAO Personnel Appeals Board or any interested groups or associations representing officers and employees of GAO. Table 1 provides a summary of the number of employees separated from the agency under both the agency-wide and exception provisions for voluntary early retirement in fiscal years 2005 through 2009. The voluntary separation incentive provision requires us to make the payment out of current appropriations and to pay an additional amount into the retirement fund, which at a minimum is equal to 45 percent of the basic pay of the employee who is receiving the payment. Thus, the cost of using this flexibility is considerable and, given the many demands on our resources, this provision was not used during the 5-year reporting period. Section 3(a) of the Act authorized the Comptroller General to determine the amount of annual pay adjustments for GAO’s officers and employees and described the factors to be considered in making those determinations. This provision amended 31 U.S.C. 732(c), which had required employees’ pay to be adjusted at the same time and to the same extent as the General Schedule. Under section 3(b), the Comptroller General’s authority to establish the annual pay adjustment is also applicable to employees in the Senior Executive Service (SES) and in Senior Level (SL) positions. Under both sections 3(a) and 3(b), an employee must be performing at a satisfactory level in order to receive an annual pay adjustment. In January 2006, we issued regulations addressing the satisfactory performance requirement for GAO’s analysts and attorneys. Pursuant to the regulation, GAO analysts and attorneys had to be performing at “Meets Expectations” in all competencies to be considered satisfactory. In addition, most Band IIB and Band III analysts had to have a performance appraisal that was in the top 50 percent or 80 percent, respectively, of their band and team. In subsequent years this added condition was not required. Since the annual adjustment is a significant component of employees’ annual compensation, limiting its applicability to satisfactory performers is critical to the integrity of GAO’s overall pay for performance system. For calendar years 2006 through 2009, consistent with 31 U.S.C. 732(c)(3), the Comptroller General considered various data to determine the amount of GAO’s annual adjustments, including salary planning data reported by the professional services, public administration, and general industry organizations; the General Schedule adjustment; the amount of Performance Based Compensation (PBC) and the appropriate distribution of funds between the annual adjustment and PBC; and GAO’s funding levels. The Comptroller General provided an annual adjustment in 2006 and 2007 of 2.6 percent and 2.4 percent, respectively, to those who were performing at a satisfactory level and who were paid within applicable competitive compensation limits, except for wage-grade employees and GAO Personnel Appeals Board employees.
In addition to the annual adjustment, GAO employees were eligible for PBC based on their performance appraisal ratings. PBC was calculated using a budget factor of 2.15 percent for both 2006 and 2007. Under section 3(b), the Comptroller General is required to consider the statutory criteria set out in section 3(a) in determining an annual increase for members of the GAO SES and SL employees. The Comptroller General considered these criteria and determined that each member performing at a satisfactory level would receive in 2006 and 2007 a 1.9 percent and 1.7 percent increase, respectively—the same increase that was provided to the Executive Schedule for calendar years 2006 and 2007, respectively. In 2007, SES and SL members were also eligible for PBC using a budget factor of 2.25 percent. In 2008, after the Comptroller General made preliminary determinations regarding pay adjustments as had been done in 2006 and 2007, GAO management negotiated with representatives of the newly established GAO Employees Association, International Federation of Professional and Technical Engineers (IFPTE) Local 1921 to reach final agreement regarding salary adjustments. In addition to the annual adjustment, GAO employees were eligible for PBC based on their performance appraisal ratings. Pay adjustments for GAO staff included an annual adjustment of 3.5 percent as well as performance based compensation using a budget factor of 2.75 percent. In 2008, for the first time, GAO implemented a “floor guarantee.” The 2008 floor guarantee provided that if the total increase from the annual adjustment and PBC did not equal at least 4.49 percent of salary, the employee would receive an additional increase to base pay to equal this amount regardless of geographic location. For example, in Washington, D.C., the floor guarantee ensured that all staff received a base pay increase of at least 4.49 percent and was provided without regard to pay range maximums limited only by the GS-15, step 10, statutory maximum rate. In providing the floor guarantee to staff, the additional amount required to bring the base pay adjustment to 4.49 percent of salary was deducted from any PBC bonus. Overall, the average total dollar amount resulting from employees’ annual adjustments, PBC base pay increases and bonuses, and floor guarantees was approximately 6.12 percent of salary. GAO employees participating in one of GAO’s development programs (Professional Development Program, Attorney Development Program, Communication Analysts Pay Process, Program and Technical Development Program, and Administrative Pay Process) received the 3.5 percent annual adjustment, not to exceed the maximum rate of their bands. These employees were not eligible for the floor guarantee because they received additional performance-based salary increases every 6 months for the 2-year duration of the development program. GAO’s SES and SL employees were provided the same 2.5 percent increase authorized for the executive branch. SES and SL members were also eligible for PBC using a budget factor of 2.25 percent. The PBC was provided to the SES and SL staff as a base pay increase not to exceed $169,300. Employees of GAO’s Personnel Appeals Board and student employees are paid according to GS rates, and GAO’s wage grade employees are paid according to the Federal Wage System (FWS) salary rates. These employees received the same percentage across-the-board adjustment on the same effective date as the increases authorized for GS and FWS employees in the executive branch. 
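The interaction of the 2008 annual adjustment, PBC, and floor guarantee described above can be illustrated with a short numerical sketch. In the Python fragment below, the 3.5 percent annual adjustment and the 4.49 percent floor are the figures cited in this report; the salary and the split of an employee's PBC between a base pay increase and a bonus are hypothetical, and pay-range maximums, locality differences, and other limits are ignored.

def base_pay_increase_2008(salary, pbc_base_increase, pbc_bonus,
                           annual_adjustment_rate=0.035, floor_rate=0.0449):
    """Simplified sketch of the 2008 pay computation described in this report.

    Base pay rises by the annual adjustment plus any PBC awarded as a
    base pay increase.  If that combined increase falls short of the
    floor guarantee, the shortfall is added to base pay and deducted
    from the employee's PBC bonus.
    """
    annual_adjustment = annual_adjustment_rate * salary
    base_increase = annual_adjustment + pbc_base_increase
    shortfall = max(floor_rate * salary - base_increase, 0.0)
    return base_increase + shortfall, max(pbc_bonus - shortfall, 0.0)

# Hypothetical employee: $80,000 salary, $500 PBC base pay increase, $600 PBC bonus.
increase, bonus = base_pay_increase_2008(80_000, 500, 600)
print(f"Base pay increase: ${increase:,.2f}")    # meets the 4.49 percent floor ($3,592.00)
print(f"Remaining PBC bonus: ${bonus:,.2f}")

In this sketch the floor guarantee converts part of the bonus into a permanent base pay increase, consistent with the description above that the additional amount needed to reach 4.49 percent of salary was deducted from any PBC bonus.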
The pay ranges for these employees incorporated the changes made to the comparable executive branch pay ranges. Prior to the annual adjustment for 2009, the Government Accountability Office Act of 2008, Public Law 110-323, September 22, 2008, was passed. Under section 2 of this Act, the so called “floor guarantee”, as described above, was enacted into law as section 731(j) of title 31, United States Code. For year 2009, following preliminary determinations by the Acting Comptroller General and negotiations between management and IFPTE Local 1921, GAO employees received an annual adjustment equal to the “floor guarantee,” which, for example, equaled 4.78 percent in Washington, D.C. In addition, employees were eligible for performance based compensation using a 2.65 percent budget factor. GAO’s SES and SL employees rated “Fully Successful” were provided a 2.8 percent pay adjustment pursuant to 31 U.S.C. § 733(a)(3)(B) effective January 4, 2009. SES and SL members were also eligible for PBC using a budget factor of 2.65 percent. PBC was provided to the SES and SL staff as a permanent base pay increase not to exceed $174,000. As in 2008, employees of GAO’s Personnel Appeals Board and student employees were paid according to GS rates, and GAO’s wage grade employees are paid according to the Federal Wage System (FWS) salary rates. These employees received the same percentage across-the-board adjustment on the same effective date as the increases authorized for GS and FWS employees in the executive branch. The pay ranges for these employees incorporated the changes made to the comparable executive branch pay ranges. In fiscal years 2005 through 2009, there were no extraordinary economic conditions or budgetary constraints that had a significant impact on the determination of the annual pay adjustments. Section 4 authorizes the Comptroller General to establish pay retention regulations applicable to employees who are placed in lower grades or bands as a result of workforce restructuring, reclassification, or other appropriate circumstances. Table 2 summarizes these data for fiscal years 2005 through 2009. Under section 6, certain key employees with less than 3 years’ service for purposes of leave accrual may be treated as if they had 3 years of federal service. Therefore, they would earn 160 hours on an annual basis instead of 104 hours. These key employees must be occupying positions that are difficult to fill or have unique or unusually high qualifications and would be difficult to recruit without additional incentives. Table 3 shows the number of employees receiving this flexibility in fiscal years 2005 through 2009. Section 7 authorized GAO to establish an Executive Exchange Program. After soliciting and analyzing employees’ comments on draft regulations, we issued the final regulations for GAO’s Executive Exchange Program on May 20, 2005. The authority was not used in fiscal years 2006, 2008, or 2009. However, during fiscal year 2007, this authority was used to bring in two executives from private industry, each for a period of 4 months. At GAO, the executives worked on several special projects related to federal agency audits and agency financial statement issues. In addition to helping revise the GAO/PCIE Financial Audit Manual, they used their experience as auditors of agency financial statements to help develop protocols to help GAO interact with the agency-level auditors (inspectors general as well as public accounting firms) during GAO’s audit of the U.S. 
government’s consolidated financial statement. This program was considered a success from GAO’s standpoint and it met the expectation of the private industry employer that was involved. The authority expired on July 7, 2009. Section 9 relates to GAO’s performance management system and, among other things, requires a link between the performance management system and the agency’s strategic plan, adequate training on the implementation and operation of the system, and a process for ensuring ongoing performance feedback. Even before the imposition of these requirements, GAO’s performance management system was in conformity with the statutory requirements of section 9. In fiscal years 2005 and 2006, we conducted annual reviews and assessments of our performance management policies and processes and made improvements, when appropriate. During fiscal year 2007, an evaluation of the fiscal year 2006 appraisal and pay cycle was deferred pending the outcome of the then-ongoing union election. In fiscal year 2008, GAO undertook various initiatives to ensure the performance management system met its objectives and provided an even playing field for all employees. In response to continuing differences between African American and Caucasian analyst performance appraisal averages, the Ivy Planning Group conducted an independent assessment of the factors that may influence these differences, and was also tasked with identifying what additional steps GAO could take. A final report was issued on April 25, 2008, which contained over 25 major recommendations. GAO is committed to implementing the Ivy Planning Group’s recommendations and has a number of efforts completed and underway to address the recommendations. In fiscal year 2009, the agency continued to pursue actions designed to ensure that the system met its objectives and was fair and equitable for all employees. GAO established its Management Improvement Priorities Action Plan that includes five areas of concentration: recognizing and valuing diversity; reassessing the performance appraisal system; managing workload, sustaining quality, and streamlining processes; enhancing staffing practices and developing the workforce; and, finally, strengthening recruitment and retention incentives. Projects within these five areas originated from multiple sources, including the Ivy Planning Group's recommendations, CG Special Projects, and suggestions received over time from GAO staff at all levels throughout the agency. These areas also reflect the ongoing efforts of the Office of Opportunity and Inclusiveness, QCI, the Human Capital Office, and the Chief Administrative Office. GAO completed one of the key management improvement projects—a full, systematic, and inclusive review of the performance appraisal system. The objectives of the review were to examine what works, what does not, and what could be done better. Data collected included a comprehensive content analysis of existing data, the results of 28 focus groups of employees, and 53 semistructured interviews with managing directors and a random sample of SES/SL, Band III, and field office managers. In addition, GAO conducted an agencywide, Web-based survey of employees, with an overall survey response rate of 67 percent. Data from all of these sources were synthesized into a final report issued in November 2009 with extensive findings and short- and long-term recommendations for improving GAO’s performance appraisal system. 
Planning for implementing the recommendations is in progress, with over 50 percent of the short-term recommendations already under way. GAO has also established a steering committee composed of managers and employees, including representatives from IFPTE Local 1921, to guide the direction of a more extensive contractor review of the current system to address the findings from the systematic review of the appraisal system. GAO continues to provide training on the performance appraisal system and the roles and responsibilities of staff, supervisors, and managers. To ensure that all designated performance managers are knowledgeable about appraisal policies, procedures, and practices, GAO required all raters to take online training prior to preparing fiscal year 2008 ratings. Each subsequent year, all new designated performance managers must take online training. GAO also continues to expand staff, supervisory, and managerial training and development to include offerings in how to give and receive feedback. Lastly, during this period, GAO instituted consistent practices across the organization with regard to the review of ratings. Designated performance managers present their preliminary ratings of staff to a panel of Senior Executive Service reviewers. This panel helps to ensure that all raters are consistently applying the rating criteria. Section 10 requires us to consult with any interested groups or associations representing officers and employees of GAO when implementing changes brought about by this Act. Typically, in implementing changes such as those in this Act, we have consulted with interested groups and associations within GAO, provided them with draft policies and regulations, and obtained input from them on suggested clarifications or changes to the policies and regulations. We carefully considered this input and have incorporated it, when appropriate, before distributing policies and regulations for comment to all employees. In 2007, GAO Band I and Band II analysts, auditors, specialists, and investigators, and staff in the Professional Development Program, elected to be represented by a union and established IFPTE, Local 1921. In 2008, GAO and IFPTE, Local 1921, reached an interim collective bargaining agreement. GAO is committed to continuing to work constructively with IFPTE, Local 1921, to finalize and implement a master term collective bargaining agreement. GAO management actively consults with IFPTE Local 1921; the Employee Advisory Council, which is composed of headquarters and field administrative, professional, and support staff (APSS), as well as Assistant Directors in analyst and analyst-related positions, and attorneys; and the Diversity Advisory Council, which is composed of diversity representatives of IFPTE, Local 1921, and employee liaison groups for employees who are disabled, Asian-American, African-American, Hispanic, veterans of the armed forces, people over 40, and advocates for nondiscrimination based on sexual orientation or gender identity. These consultations allow GAO to hear and consider employee needs, concerns, and suggestions as they arise. IFPTE, Local 1921; the Employee Advisory Council; and the Diversity Advisory Council (DAC) are the primary mechanisms for fostering collaboration and open communication between GAO management and staff. GAO provided all employees with the opportunity to comment on draft orders concerning proposed policies and regulations prior to publication in final form.
These steps were taken in regard to the promulgation of all policies and regulations implementing the provisions of the Human Capital Reform Act of 2004. The Executive Committee considered all input from Employee Advisory Council and Diversity Advisory Council members and other GAO employees before implementing any changes. Although GAO specifically solicited comments from the GAO Personnel Appeals Board (PAB); IFPTE, Local 1921; the DAC; and the Employee Advisory Council (EAC), only the PAB responded to this request with comments, which are included in appendix I. IFPTE, Local 1921, informed GAO management that it will provide its input directly to Congress. The flexibilities provided in the GAO Human Capital Reform Act of 2004, along with the human capital flexibilities provided in the 2002 and 2008 Acts, have given GAO the ability to attract and retain high-caliber employees so that GAO can meet its responsibilities to the Congress and the American people. GAO is making no recommendations for legislative change.
As required by section 11 of the GAO Human Capital Reform Act of 2004 (Pub. L. No. 108-271), GAO is providing its final report not later than 6 years after the date of the Act's enactment. This report provides, as required by the Act, (1) a summary of the information included in GAO's annual reports for the fiscal year 2005 through 2009 reporting cycle for sections 2, 3, 4, 6, 7, 9, and 10; (2) recommendations for any legislative changes to sections 2, 3, 4, 6, 7, 9, and 10; and (3) any assessment furnished by the GAO Personnel Appeals Board or any interested groups or associations representing officers and employees of GAO.
The Telework Enhancement Act of 2010, enacted in December 2010, requires each executive agency to designate a telework managing officer, develop training programs, establish a telework policy, and submit an annual report to the Chair and Vice Chair of the Chief Human Capital Officers Council on the agency’s efforts to promote telework. Under the act, OPM is to play a leading role in helping executive agencies implement the new telework provisions, which include setting telework goals and establishing qualitative and quantitative measures. The law requires OPM to provide policy and guidance for telework in several areas, including pay and leave, agency closure, performance management, official worksite information, recruitment and retention, and accommodations for employees with disabilities. In an annual report to Congress, OPM is to assess each agency’s progress toward goals for participation and other goals relating to telework, such as emergency readiness. The first of these reports under the act is due to Congress in June 2012. Since 2002, OPM has used a telework survey—the data call—to annually collect information from the executive agencies in order to provide Congress with a report on the status of telework across these agencies. OPM conducts this data call to determine the extent to which agency employees are teleworking and to gauge agency progress in various aspects of their telework programs, such as participation, policy, eligibility, cost savings, and technology, as well as to provide examples of barriers agencies face in implementing telework programs. However, throughout the past decade, OPM has been concerned about the reliability of the telework data it receives from executive agencies; although the data reported by agencies have improved, OPM continues to consider them estimates of telework participation and frequency. In its 2003 and 2007 telework reports to Congress, OPM raised concerns about the ability of agencies to track employee participation in their telework programs. In its 2008 report, OPM identified weaknesses in the methodology most agencies used to collect and report telework participation data and stated that inconsistencies within data systems and inaccuracies triggered by hand-counting telework agreements could affect data reliability. OPM cautions that existing measures of telework participation are a barrier to measuring any increase in telework because the measures vary widely in validity and reliability and limit the capability of any federal body to track the actual level and frequency of telework participation. At the request of Congress, we have previously reported on telework programs across the federal government and have made recommendations related to the reliability of agency-reported data. In a 2005 report, we reviewed the telework data for five federal agencies and found they had reported the total number of employees who were eligible to telework but had included individuals who were, in fact, excluded from participation based on various criteria, such as employee performance, thereby raising concerns about the reliability of the telework data reported by these agencies. In addition, none of the agencies could report the actual number of employees who teleworked or how often they did so because none had fully implemented the capability to track this through their time and attendance systems.
Our 2007 testimony reiterated our concern that agencies were measuring employee participation in telework based on their potential to telework rather than their actual usage. More recently, we reported that since the 2004 data call, OPM asked agencies if they had integrated telework into agency emergency and continuity of operations plans, but agencies had no guidance as to what constitutes incorporating telework into continuity and emergency planning. This lack of a definition or description raised concerns about the reliability of reports on this matter. In response to the Telework Enhancement Act of 2010, OPM revised the 2011 data call and provided instructions to executive agency respondents that incorporated common definitions and standards to use in providing OPM with their agency data. The revisions and additions to the 2011 data call were developed in consultation with an Interagency Telework Measurement Group (ITMG), which OPM formed in January 2011. (See app. II for a comparison of definitional and instructional changes between the 2010 and 2011 telework data calls.) The OPM official designated to lead the ITMG said 10 officials from 7 agencies were selected for the group because of their knowledge of federal telework programs. According to the OPM official, the ITMG provided expertise in telework program implementation, policy and methodology development, and work/life balance programs, as well as expertise in research methods, such as surveys. The ITMG met biweekly from January 2011 until July 2011 and resumed biweekly meetings in September 2011 with the goal of addressing three primary topics. (1) Definitions of key terms, such as telework, eligibility, and employee, to use in the 2011 data call. ITMG interpreted some requirements of the act and developed additional instructions to encourage a common reporting methodology across the agencies. For example, according to OPM officials, ITMG clarified the definition of eligibility in light of agencies’ concerns that the act did not specifically define the categories of employees that should be eligible to participate in their agencies’ telework programs, and therefore be notified about their eligibility to telework. The group instructed respondents to ensure they excluded military and contract personnel as employees when reporting their telework data. The group also clarified that respondents should include full-time, part-time, and intermittent employees when responding to questions about telework participation and frequency. (2) Revision and/or addition of data call questions. For example, OPM officials stated that, in collaboration with the ITMG, they clarified the definition of telework to specifically state that telework includes what is generally referred to as remote work but excludes mobile work and work done on official travel. OPM officials added a new question to capture the number of mobile workers. This addressed a reliability problem from the previous data call, when some agencies included mobile work in reporting telework. (3) Revision and development of data collection instruments, in addition to the data call, to collect telework information. For example, ITMG worked to revise telework-related items in the Federal Employee Viewpoint Survey, an OPM data collection instrument that gauges employees’ perceptions of their agency. In the 2011 survey, three out of 84 questions focused on telework. In the past, OPM has found this survey to provide complementary employee views on telework.
The ITMG also assisted in developing focus groups of telework managing officers and telework coordinators to identify issues, challenges, and strategies associated with implementing telework programs at the agency level, such as successful telework implementation strategies, as well as barriers to telework. According to an OPM official, these revisions also included questions that may enable OPM to better understand the differences between telework programs across executive agencies, including differences in training on telework, use of technology, and how agencies responded to the requirements of the act. This official said such information will help inform the development of telework programs. OPM also made changes to the 2011 data call time period for which employee telework participation and frequency were to be reported. This change was made to allow agencies time to develop telework policies in accordance with the act and to allow OPM time to meet its reporting obligation under the act (see fig. 1). In previous data calls, OPM asked agencies for telework data during the calendar year (12 months), if available. OPM reduced the time period for the 2011 data call to 4 weeks, as it decided this was the best methodology to meet its reporting requirements under the act. Agencies were to select a 4-week period during September and October on which to report. In addition to the change in time period for requested telework data, the 2011 data call asked for more detail on employee telework participation and the frequency of employee telework participation, as well as additional information on telework policy and program implementation and on telework goals, as required by the act. OPM officials believe the information they obtained from the 2011 data call will enable the agency to satisfy some of the act’s reporting requirements, but OPM cannot fulfill other requirements for this report to Congress. OPM officials stated that it is not feasible for OPM to measure agencies’ progress against their Telework Enhancement Act of 2010 goals, since these goals were established in June 2011 and the time period for agencies’ actual participation data was September/October 2011—just 3 to 4 months after agencies established their goals. However, according to OPM officials, based on the information collected, OPM will be able to report the percentage and frequency of telework at individual agencies. To communicate changes to the 2011 data call, OPM officials increased their training efforts to aid executive agency officials in developing a common understanding of terms, key concepts, and the objectives of the data call. According to OPM officials, in July 2011, those responsible for the data call met with agency respondents to provide an introduction and overview of the 2011 data call. The meeting covered the new requirements under the act and the planned time frame for agency reporting and for OPM’s processing and analysis of the data collected through the data call. OPM could not require agency officials to attend. Nonetheless, OPM reminded agency officials responsible for the data call that it was important that they attend both the September and October training sessions being offered by OPM. The September training session covered the data call questions and incorporated specific content of the near-final data call. The October training session reviewed specific instructions on how to enter information into the online data call form, in addition to reviewing the data call instructions and questions from respondents.
While some of the information provided at the two training sessions was similar, each session contained some new information usually in response to questions raised at a previous session. OPM staff also maintained and disseminated via email a list of the most frequently asked questions posed by data call respondents. The act requires OPM to report, among other things, year-to-year executive agency progress on the number of employees who telework. To accomplish this, OPM needs reliable baseline data on telework participation to be able to make year-to-year comparisons. However, OPM officials expressed continuing concerns over the reliability of quantitative participation and frequency data submitted by the agencies through the 2011 telework data call. OPM officials explained that 2011 is a transition year for telework programs across executive agencies, and some agencies made changes to their policies to bring their telework programs into compliance with the requirements of the act. For example, some agencies were implementing new data collection systems while collecting data on telework. These agencies first needed to create and implement new policies, and then consider and establish processes related to the collection of telework data to report to OPM. OPM officials responsible for preparing the report to Congress stated that because of these changes, it would not be appropriate to compare the 2011 data to data collected in prior years. OPM will not be able to make comparisons to prior years due to the previously discussed modifications to the data call, such as changes to definitions. Definitional changes, including the change to specifically exclude mobile workers, could result in agencies interpreting and reporting telework participation and frequency of telework differently than they did in the past. While these changes should improve the consistency of data going forward, according to OPM officials there is no way that OPM can assure that all agencies would provide comparable responses to the same data call questions. Moreover, OPM officials have said that key terms and definitions for the 2012 data call may continue to evolve. If OPM reports agency telework progress based on data collected using definitions revised year to year, OPM may reach erroneous conclusions, although OPM officials have said they are taking steps to try and prevent this possibility. Another reason why comparisons to prior data calls would be invalid is because of the changes OPM made to the time period when agencies reported telework data. Maintaining consistent data series over time necessitates using consistent data collection procedures for ongoing data collections. OPM said that changing the reporting period from 1 calendar year to 4 weeks may result in greater data consistency because in previous data calls, agencies may have reported data from different time periods within a calendar year. Now all agencies will report data from a narrower and more similar time frame. However, there are no available studies to support that the new time period is representative or “typical” of other months in total or of the experience of particular agencies. OPM officials said that they will need to indicate this in their report to Congress. While OPM made changes to the 2011 data call to allow it to meet some of its reporting requirements under the act and better assist agencies in responding to the data call, our analysis found the 2011 data call did not fully meet some generally accepted survey standards. 
According to these standards, to be valid, survey questions must adequately represent the concept or behavior in question and consistently predict outcomes. Moreover, questions must be designed and asked so that each recipient will understand and answer the same question in the same way. But for the 2011 data call, there can be no assurance that all respondents were aware of associated definitions and instructions provided by the training sessions and the frequently asked questions (FAQ). Although OPM invested more in providing training in preparation for the 2011 data call than in previous data calls and it disseminated training slides to all invitees of its final training session, attendance lists were not recoverable for all training sessions so there can be no assurance that all respondents received training or reviewed the slides. Consequently, some data call respondents may not have been aware of the definitions and instructions provided in the training sessions or in the FAQ. Additionally, some of the information provided in the training sessions was inconsistent. For example, OPM officials said that during the last training session they instructed respondents to report an employee “telework day” if the employee teleworked for any portion of a work day. However, this clarification of “telework day” was not given in either the July overview meeting or in the other training session, and not clearly included in the instructions in the survey instrument or distributed in the FAQ. This information may have been important to agencies reporting on situational telework through automated systems intended to capture more precise data. OPM officials also explained that responses to certain questions should reference the same time period, and this information was not available in the data call instructions. During one training session, an OPM official said she instructed those participants who determined employee participation by counting telework agreements to limit the agreements counted to those in effect after the agency implemented its telework program under the act; however, this information was not available in the online data call instructions. Uncertainty about whether data call respondents attended both the September and October training sessions, and variations in training sessions, could cause agencies to have different understandings of data call concepts and terms. OPM recognizes that the existing measures of federal telework participation vary in validity and reliability, which affects agencies’ ability to report accurate data, and it is taking steps to verify data submitted by respondents to provide a more accurate picture of telework in the federal government based on current definitions and collection methodologies. However, as a result of issues raised above, respondents may have provided inconsistent or inaccurate data on topics required by the act. OPM officials anticipate that telework data will be more reliable next year because of the expected governmentwide implementation of automated data collection based on time and attendance records. As we have reported, OPM has concluded from research that the most reliable telework data are collected through time and attendance tracking systems. Data collected through automated systems eliminate the need to track telework data by counting telework agreements or relying on estimates. Since 2003, OPM has consistently expressed concerns about the methods agencies use to collect telework data. 
In its previous telework reports to Congress, OPM has advocated for the development of an automated data collection system. OPM officials noted that OPM does not control what telework data responding agencies maintain or their methods of data collection. Executive agencies provide telework participation and frequency data using a variety of methods, such as relying on estimates, counting telework agreements, and using automated time and attendance records to track telework participation. In an effort to collect more uniform data across agencies, OPM officials are standardizing definitions and data elements for use in automated time and attendance systems. For example, OPM has identified routine telework hours in a pay period as a data element for automated data collection and provided a standardized definition that will be used by all agencies using the Enterprise Human Resources Integration (EHRI) system. OPM has introduced a timeline for modifying the existing EHRI system to allow OPM to collect telework data from executive agencies. According to OPM, agencies will begin piloting these automated data collection systems for the 2012 telework data call. OPM began to discuss automated governmentwide data collection with the ITMG in July 2011. According to OPM’s timeline, OPM began to communicate internal requirements for automated telework data collection to telework managing officers in March 2012, but an OPM official has stated that OPM does not expect full automation of telework data collection until 2013. This official also noted that different agencies have varying abilities to implement this new type of data collection and reporting mechanism and that, considering different levels of comfort with new systems, it will take time for agencies to adjust to this method. However, continuous improvement efforts sometimes result in a trade-off between the need to improve data collection and the desire to maintain a consistent data series over time. Because of the eventual planned move to automation, OPM may not be able to use the 2011 data as a baseline. With the planned change to the method of data collection, it may not be possible to compare the 2011 data to future data. The 2011 data call requested data for a 2-month period, and some data call respondents relied upon estimates. Planned automation can provide a more uniform and accurate method for collecting telework data; however, it may make comparisons using the 2011 telework data as a baseline difficult. Nonetheless, OPM officials believe the 2011 data will provide an improved report of telework status because of standardized definitions and the more uniform time period of data collection. According to OPM officials, some executive agencies will need time to become comfortable with automated reporting systems and, during a period of transition to a new system, there could be initial reliability issues. However, these officials said that the 2011 data, notwithstanding its limitations, will be useful in identifying and understanding any major agency changes in reported participation that could occur during a transition to automated data collection. Such changes could alert OPM to possible transition-related issues in agencies’ conversion to automated governmentwide data collection efforts in 2012 and 2013. In addition, for those agencies already responding on the basis of time and attendance reporting, major changes in agency responses provide OPM the opportunity to confirm with agencies that the uniform definitions are being consistently applied.
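As an illustration of the kind of aggregation an automated, time-and-attendance-based approach makes possible, the sketch below rolls hypothetical per-pay-period records up into participation and frequency figures of the sort the data call requests. The record layout, field names, and values are assumptions made for illustration only; they do not represent the actual EHRI schema, OPM's standardized data elements, or any agency's system.

```python
# Minimal sketch, assuming a simplified record layout; this is not the actual
# EHRI schema or OPM's standardized data elements.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class PayPeriodRecord:
    employee_id: str
    pay_period: str                     # e.g., "2011-PP19" (hypothetical label)
    routine_telework_hours: float       # telework on an ongoing, regular schedule
    situational_telework_hours: float   # ad hoc or episodic telework


def summarize(records, eligible_count):
    """Aggregate per-pay-period records into participation and frequency figures."""
    hours_by_employee = defaultdict(float)
    for r in records:
        hours_by_employee[r.employee_id] += (
            r.routine_telework_hours + r.situational_telework_hours
        )
    participants = [e for e, h in hours_by_employee.items() if h > 0]
    return {
        "eligible employees": eligible_count,
        "employees who teleworked": len(participants),
        "participation rate": len(participants) / eligible_count if eligible_count else 0.0,
        "avg telework hours per participant": (
            sum(hours_by_employee[e] for e in participants) / len(participants)
            if participants else 0.0
        ),
    }


# Hypothetical reporting-period data for a small agency with four eligible employees.
records = [
    PayPeriodRecord("A01", "2011-PP19", 16.0, 0.0),
    PayPeriodRecord("A01", "2011-PP20", 16.0, 8.0),
    PayPeriodRecord("B02", "2011-PP19", 0.0, 8.0),
    PayPeriodRecord("C03", "2011-PP20", 0.0, 0.0),
]
print(summarize(records, eligible_count=4))
```

Counting any employee with positive recorded telework hours as a participant mirrors the act's notion of employees who teleworked during the reporting period, and it removes the need to hand-count telework agreements or rely on estimates.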
The Telework Enhancement Act of 2010 requires OPM to collect telework data and report annually to Congress, a requirement that emphasizes the need for telework data to be valid and reliable. OPM revised the 2011 telework data call in order to allow it to meet some of the act’s reporting requirements and assist agencies in responding to the data call. This revision resulted in changes to terminology, as well as changes in the collection time period of requested telework data. OPM provided greater assistance to agencies through training on changes to the 2011 data call to improve the accuracy of agency reporting of telework participation and frequency for the data call reporting period of September and October 2011. However, agencies use various methods, which OPM does not control, to report, collect, and maintain telework data, and this could affect the reliability of the telework data submitted. In addition, variation in training sessions and OPM’s uncertainty as to whether all respondents attended training could lead respondents to misunderstand important terms and instructions. The validity and reliability of the reported 2011 telework data for some of the responding agencies may be questionable, and therefore agency telework participation and frequency data will not likely be comparable with previous data calls because of differences in definitions used, time periods of reporting, and individual agency tracking methods. With the revised 2011 data call, OPM establishes a baseline it could use to conduct a limited crosscheck of data collected through a governmentwide automated telework data collection system, which OPM plans to implement over the course of 2012 and 2013. OPM expects that automated data collection will provide it increasingly reliable data on which to report progress. However, these efforts to improve future automated data collection may result in changes to agencies’ methods of data collection and a trade-off between the desire for consistency with previous data calls for comparison purposes and the need to improve overall data collection. To improve OPM’s annual reporting of telework to Congress, we recommend that the OPM Director take the following two actions: (1) ensure that the reliability limitations related to the 2011 telework data call are clearly reported in OPM’s June 2012 report to Congress by fully describing how existing measures of telework participation vary widely in validity and reliability and limit the capability of OPM to reliably report the actual level and frequency of telework participation, and (2) continue efforts to improve data collection and gather information that allows for the appropriate qualification of year-to-year comparisons and informs users about the effects of data collection changes going forward. We provided a draft of this report to the Director of OPM for review and comment. The Associate Director of OPM provided written comments, which we have reprinted in appendix III. In summary, OPM partially concurred with our first recommendation and fully concurred with the second. OPM highlighted a number of actions the agency has under way or plans to undertake in response. For the first recommendation, OPM noted that inadequate methods of data collection exist at the agency level and that OPM continues to address this data reliability issue through training on evaluation and measurement.
While this is an important step in addressing data reliability issues, OPM should ensure that telework data reliability limitations are clearly reported in its annual reports to Congress. For the second recommendation, OPM noted its continued plans to automate collection of telework data and to regularly meet with telework managing officers and telework coordinators to keep them updated on changes to telework policy and data collection. OPM also provided a number of technical comments, which we incorporated as appropriate. We are sending copies of this report to the Chairman and Ranking Member of the Subcommittee on the Federal Workforce, U.S. Postal Service and Labor Policy, Committee on Oversight and Government Reform, House of Representatives; and the Director of OPM. In addition, this report will be available at no charge on the GAO website at www.gao.gov. If you have any questions about this report, please contact me at 202-512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. This report (1) describes the Office of Personnel Management’s (OPM) actions to respond to the requirements of the Telework Enhancement Act of 2010 and (2) assesses how OPM is handling and addressing identified data reliability issues in the 2011 telework data call. To address these two objectives, we reviewed relevant reports and guidance published by OPM that describe the status of telework programs across executive agencies and previous telework data calls and their instructions. We also reviewed previous GAO reports on telework and the reliability of OPM’s telework data. Lastly, we interviewed the OPM officials responsible for the planning, design, implementation, and analysis of the 2011 telework data call. This included discussions on the role of the Interagency Telework Measurement Group (ITMG), the process for developing the definitions and key terms used in the data call, the training and assistance provided to executive agency officials responsible for completing the data call, and the agency’s plans to address outstanding data reliability issues associated with the data call. We conducted additional analysis to answer selected objectives as described below. To assess the extent to which the 2011 telework data call met generally accepted survey methodology standards, GAO internal experts in survey research identified principles from the Office of Management and Budget’s (OMB) Standards and Guidelines for Statistical Surveys relevant to assessing the 2011 telework data call. We also used relevant aspects of GAO’s guide to Developing and Using Questionnaires. Using the OMB principles, two analysts independently reviewed the data call, supporting documentation, and clarifying information provided in interviews to assess the extent to which the data call methodology met these practices. The initial rate of agreement across 15 rated practices, including those rated as having insufficient information to judge, was 12 of 15, or 80 percent agreement. The ratings for the three practices on which there was initial disagreement were reconciled by the two analysts conducting the review, and the reconciled ratings were then reviewed by a third analyst with survey expertise. The third analyst did not recommend any changes to the reconciled ratings.
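The initial agreement rate described above is a simple proportion: the number of practices on which the two analysts' independent ratings matched, divided by the 15 practices rated. The short sketch below illustrates that arithmetic and how disagreements would be flagged for reconciliation; the rating categories and values shown are hypothetical and do not reflect the actual assessment data.

```python
# Illustrative only: hypothetical ratings by two independent reviewers
# across 15 survey practices (values invented for demonstration).
RATINGS_A = ["met", "met", "not met", "met", "insufficient info", "met", "met",
             "met", "not met", "met", "met", "met", "met", "met", "not met"]
RATINGS_B = ["met", "met", "not met", "met", "insufficient info", "met", "not met",
             "met", "met", "met", "met", "met", "met", "met", "met"]


def initial_agreement(ratings_a, ratings_b):
    """Return the agreement rate and the indices of practices needing reconciliation."""
    assert len(ratings_a) == len(ratings_b)
    disagreements = [i for i, (a, b) in enumerate(zip(ratings_a, ratings_b)) if a != b]
    agreed = len(ratings_a) - len(disagreements)
    return agreed / len(ratings_a), disagreements


rate, to_reconcile = initial_agreement(RATINGS_A, RATINGS_B)
print(f"Initial agreement: {rate:.0%}")            # 12 of 15 practices -> 80%
print(f"Practices to reconcile: {to_reconcile}")   # resolved jointly, then reviewed by a third analyst
```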
We also simulated completion of the Web-based data call, accessing and responding to the data call in the same manner as executive agency respondents. Table 1 outlines the generally accepted survey research principles, derived from OMB’s guidelines, which we used in our assessment. This is not an exhaustive list of all OMB guidelines. When we completed this review in April 2012, OPM had not yet completed analyzing and reporting on the results of the telework data call. Based on this, we could not yet assess whether the data call met all of the OMB principles related to data analysis and reporting. Some OMB principles, such as those related to sample design, were not appropriate to apply to the telework data call. These principles were therefore excluded as not relevant to our review. We conducted this performance audit from June 2011 through April 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Comparison of definitions and instructions in the 2010 and 2011 telework data calls:

Time period. 2010 data call: based on the agency’s calendar year (Jan. 1 to Dec. 31, 2009). 2011 data call: based on the months of September and October 2011.

Participation. 2010 data call: agencies invited to participate in the data call. 2011 data call: agencies required to submit telework data to OPM.

Telework. 2010 data call: “Telework refers to any arrangement in which an employee regularly performs officially assigned duties at home or other work sites geographically convenient to the residence of the employee.” 2011 data call: “Telework is a work arrangement that allows an employee to perform work, during any part of regular, paid hours, at an approved alternative worksite (e.g., home, telework center). This definition of telework includes what is generally referred to as remote work but does not include any part of work done while on official travel or mobile work. See the following clarifications on remote and mobile work as telework.”

Employee. Employee refers to federal civilian employees, excluding military personnel and contractors. Employee may also include full-time, part-time, and/or intermittent employees.

Eligibility. An employee is eligible to participate in telework if all of the following parameters are true: positions require, on a daily basis (every work day), direct handling of secure materials, or on-site activity that cannot possibly be handled remotely or at an alternate worksite; most recent federal government performance rating of record (or its equivalent) is below fully successful or conduct has resulted in disciplinary action within the last year; the employee has not been officially disciplined for being absent without permission for more than 5 days in any calendar year; the employee has not been officially disciplined for violations of subpart G of the Standards of Ethical Conduct for Employees of the Executive Branch; teleworking does not diminish the employee’s performance or agency operations; participation and performance comply with the requirements and expectations of his/her telework agreement; the employee’s official duties do not require, on a FULL daily basis (ALL DAY, every work day), direct handling of secure materials determined to be inappropriate for telework by the agency head, or on-site activity that cannot be handled remotely or at an alternate worksite; and the employee and/or the employee’s position are not disqualified based on additional criteria established by the organization.

Types of telework. 2010 data call: none provided. 2011 data call: Routine: telework that occurs as part of an ongoing, regular schedule; and Situational: telework that is approved on a case-by-case basis, where the hours worked were not part of a previously approved, ongoing, and regular telework schedule (e.g., telework as a result of a special work assignment or a doctor appointment). Situational telework is sometimes also referred to as episodic, intermittent, unscheduled, or ad hoc telework.

In addition to the contact named above, William Doherty, Assistant Director, and Keith O’Brien, analyst-in-charge, led the development of this report. Virginia Chanley, Patricia Donahue, Robert Gebhart, Jill Lacey, and Joseph Santiago made significant contributions to this report. Karin Fangman provided legal counsel. Shirley Hwang, Jessica Nierenberg, and Kathleen Padulchick verified the information in the report.

Related GAO products: Emergency Preparedness: Agencies Need Coordinated Guidance on Incorporating Telework into Emergency and Continuity Planning. GAO-11-628. Washington, D.C.: July 22, 2011. Human Capital: Telework Programs Need Clear Goals and Reliable Data. GAO-08-261T. Washington, D.C.: November 6, 2007. Human Capital: Greater Focus on Results in Telework Programs Needed. GAO-07-1002T. Washington, D.C.: June 12, 2007. Agency Telework Methodologies: Departments of Commerce, Justice, State, the Small Business Administration, and the Securities and Exchange Commission. GAO-05-1055R. Washington, D.C.: September 27, 2005. Human Capital: Key Practices to Increasing Federal Telework. GAO-04-950T. Washington, D.C.: July 8, 2004. Human Capital: Further Guidance, Assistance, and Coordination Can Improve Federal Telework Efforts. GAO-03-679. Washington, D.C.: July 18, 2003.
The Telework Enhancement Act of 2010 requires OPM to report to Congress on the degree of telework participation at executive agencies. To meet this requirement, OPM collects information on agency telework programs through an annual survey, which it refers to as the data call. However, concerns exist about the reliability of these data. GAO was asked to assess OPM’s (1) actions in response to the requirements of the act and (2) handling of identified data reliability issues in the 2011 telework data call. To address these objectives, GAO reviewed its previous reports addressing telework data reliability and used the Office of Management and Budget’s guidance for federal surveys to review OPM’s (1) plans to collect telework participation data from agencies and (2) development of a data collection instrument. GAO interviewed key OPM officials about its implementation of the 2011 data call. To prepare for its reporting obligations under the Telework Enhancement Act of 2010, the Office of Personnel Management (OPM) assembled the Interagency Telework Measurement Group, consisting of officials from several federal agencies, to assist in revising the telework data call—the survey OPM has used since 2002 to collect telework data from executive agencies. This group standardized key terms such as telework, employee, and eligibility to promote a common reporting methodology among the agencies. The revised telework data call also included changes to the time period for which OPM requested agencies report telework data, and OPM provided more extensive training for respondents. Because of changes made to the data call to allow OPM to meet requirements of the act and assist agencies in responding to the data call, OPM officials believe they will be able to provide to Congress an improved report on telework in June 2012. However, these changes also mean that OPM officials will not be able to use participation and frequency data from the 2011 data call to compare to data from previous years and across agencies. OPM officials have noted that this could limit OPM’s ability to report agency progress in its first report to Congress. The ability to compare with previous years is affected by the following: (1) Agencies’ use of methods of varying reliability to collect telework data, and changes some agencies made to their data collection systems for the 2011 data call. Executive agencies provide telework participation and frequency data by relying on estimates, counting telework agreements, or using automated time and attendance records. (2) Modifications to the data call instrument, including changes to terminology and the time period during which telework data was requested. OPM officials said they expect these changes will improve the consistency of data. But if OPM reports progress based on data collected using changing terminology and from different time periods, the agency may reach erroneous conclusions. Participants at the two data call training sessions may not have received the same reporting instructions, and uncertainty about whether all agency respondents attended training created a risk that some respondents may have been unaware of important terms and instructions. While some of the information provided at the two training sessions was similar, each session contained some new information, usually in response to questions raised at a previous session.
Future data call improvement efforts could result in a trade-off between the desire for maintenance of a consistent data series over time for comparison with previous data calls and a need to improve data collection. According to OPM, agencies will begin piloting automated telework data collection during 2012 and 2013. OPM expects that this method of data collection will provide it more reliable data than other methods. However, these efforts to standardize methods for tracking telework data may result in changes to agencies’ methods of data collection. The 2011 data call, notwithstanding its limitations, will be useful to help OPM identify and understand major changes in reported participation data that could occur during a transition to automated data collection. GAO recommends that OPM (1) clearly report reliability limitations with the 2011 telework data call in its June 2012 report to Congress and (2) continue efforts to improve data collection and gather information to allow for the appropriate qualification of year-to-year comparisons and inform users about the effects of data collection changes going forward. OPM partially concurred with the first recommendation. However, GAO believes it should report limitations in its annual report. OPM fully concurred with the second. OPM provided a number of technical comments which GAO incorporated as appropriate.
With the changing security environment and the emergence of terrorist coalitions that operate across international borders, the threat of terrorism against U.S. interests and personnel abroad has grown. Over the past decades, and in particular in response to the 1998 embassy bombings in Africa, the State Department has been hardening its official facilities to protect its embassies, consulates, and personnel abroad. However, as State hardened embassies, the American Foreign Service Association (AFSA) raised concerns about the vulnerability of soft targets. According to a State Department travel warning, State considers soft targets to be places, including but not limited to, where Americans and other westerners live, congregate, shop, or visit. This can include hotels, clubs, restaurants, shopping centers, housing compounds, places of worship, schools, or public recreation events. Travel routes of U.S. government employees are also considered soft targets, based on their vulnerability to terrorist attacks. The State Department is responsible for protecting more than 60,000 government employees who work in embassies and consulates abroad in 180 countries. These government officials at approximately 260 posts represent a number of agencies besides State—including the Departments of Agriculture, Defense, Homeland Security, Justice, and the Treasury, the Internal Revenue Service, and the United States Agency for International Development—and all fall under chief of mission authority. State officials indicated that only about one-third of officials at all posts are from the State Department. The responsibilities for the protection of U.S. officials and their families are defined in federal legislation and policies. Under the Omnibus Diplomatic Security and Antiterrorism Act of 1986, the Department of State is given responsibility for the protection of U.S. officials and their families overseas. The act directs the Secretary of State to develop and implement policies and programs, including funding levels and standards, to provide for the security of U.S. government operations of a diplomatic nature and establishes within State the Bureau of Diplomatic Security (DS). The mission of DS is to provide a safe and secure environment for the conduct of U.S. foreign policy. Within DS, there are a number of offices that address and implement security policies and practices to protect facilities and personnel at posts. At posts abroad, the chiefs of mission are responsible for the protection of personnel and accompanying family members at the missions. Additionally, regional security officers (RSOs) administer all aspects of security programs at posts. The RSOs’ responsibilities include providing post officials and their families with security briefings upon their arrival; designing and implementing residential security and local guard programs; liaising and coordinating with the host country law enforcement and U.S. private sector communities to discuss threat issues; and offering security advice and briefings to schools attended by dependents of U.S. government officials. The host nation is responsible for providing protection to diplomatic personnel and missions, as established by the 1961 Vienna Convention on Diplomatic Relations. The convention states the host country should take appropriate steps to protect missions, personnel, and their families, including protecting the consular premises against any intrusion, damage, or disturbances. 
The Overseas Security Policy Board, which includes representatives from 19 U.S. intelligence, foreign affairs, and other agencies, is responsible for considering, developing, coordinating, and promoting security policies, standards, and agreements on overseas operations, programs, and projects that affect U.S. government agencies under the authority of the chief of mission. This responsibility includes reviewing and issuing uniform guidance for residential security and local guard programs based on threat levels. The Security Environment Threat List, published semiannually by State, reflects the level of threat at all posts in six threat categories, including crime, political violence, and terrorism. Over 50 percent of all posts fall under the terrorism threat ratings of critical or high (see fig. 2). State, in consultation with representatives of the board, develops security standards, based on threat levels, for U.S. missions overseas. When a security-related incident occurs that involves serious injury or loss of life or significant destruction of property at a U.S. government mission abroad, State is required to convene an Accountability Review Board (ARB). ARBs are composed of five individuals, four appointed by the Secretary of State and one by the Director of the Central Intelligence Agency. Members investigate the security incident and issue a report with recommendations to promote and encourage improved security programs and practices. State is required to report to Congress on actions it has taken in response to ARB recommendations. As of March 2005, there have been 11 Accountability Review Boards convened since the board’s establishment in 1986. The Senate Appropriations Subcommittee on Commerce, Justice, State and the Judiciary, in its 2002 and subsequent reports, urged State to formulate a strategy for addressing, in the short and long term, threats to locales abroad that are frequented by U.S. officials and their families. This included providing security enhancements for locations that are affiliated with the United States by virtue of the activities and the individuals they accommodate and therefore might be soft targets. In a number of subsequent reports, the subcommittee has focused its concern about soft targets on schools, residences, places of worship, and other popular gathering places. In fiscal year 2003, a total of $15 million was earmarked for soft target protection, particularly to address security vulnerabilities at overseas schools. Moreover, in fiscal year 2004, Congress earmarked an additional $15 million for soft targets. More recently, the fiscal year 2005 Senate Appropriations Subcommittee report and the subsequent House Conference Report on fiscal year 2005 appropriations further stressed the need to protect these areas. The language in the Senate appropriations report directs State to develop a comprehensive, sustained strategy for addressing the threats posed to soft targets. Specifically, the report language specifies that a strategy should be submitted to the committee no later than June 1, 2005. For fiscal year 2005, Congress earmarked $15 million to secure and protect soft targets, of which $10 million is for security at overseas schools attended by dependents of U.S. government employees. State has a number of programs and activities designed to protect U.S. 
officials and their families outside of the embassy, including security briefings, protection at schools and residences, and surveillance detection (these programs are discussed in more detail later in this report). Despite these efforts, State has not developed a comprehensive strategy that clearly identifies safety and security requirements and resources needed to protect U.S. officials and their families from terrorist threats outside the embassy. State officials raised a number of challenges related to developing and implementing such a strategy. They indicated they have recently initiated an effort to develop a soft target strategy. As part of this effort, State officials said they will need to address and resolve a number of legal and financial issues. State has not developed a comprehensive soft target strategy to protect U.S. officials and their families from terrorist threats outside the embassy. A comprehensive strategy would focus on protection of U.S. officials and their families in areas where they congregate, such as schools, residences, places of worship, and other popular gathering spots. However, in a number of meetings, State officials cited several complex issues involved with protecting soft targets and raised concerns about the broader implications of developing such a strategy. DS officials told us that the mission and responsibilities of DS continue to grow and become more complex, and they questioned how far State’s protection of soft targets should extend. They said that providing U.S. government funds to protect U.S. officials and their families at private sector locations or places of worship was unprecedented and raised a number of legal and financial challenges, including sovereignty and the separation of church and state, that the department has not resolved. They also told us that specific authorization language would be needed to move beyond a State program that currently focuses on providing security upgrades to schools and off-compound employee association facilities abroad. State officials also indicated they have not yet fully defined the universe of soft targets—including taking an inventory of potentially vulnerable facilities and areas where U.S. officials and their families congregate—that would be necessary to complete a strategy. Although State has not developed a comprehensive soft target strategy, some State officials told us that several existing programs could help protect soft targets. However, they agreed that these existing programs are not tied together in an overall strategy. State officials agreed that they should undertake a formal evaluation of how existing programs can be more effectively integrated as part of a soft target strategy, and whether new programs might be needed to fill any potential gaps. A senior DS official told us that in January 2005, DS formed a working group to discuss and develop a comprehensive soft targets strategy to address the appropriate level of protection of U.S. officials and their families at schools, residences, and other areas outside the embassy. According to the DS official, the strategy should be completed and provided to the Senate Appropriations Committee by June 1, 2005. Investigations into terrorist attacks against U.S. officials found that, among other things, the officials lacked the necessary hands-on training to help counter the attacks.
The ARBs recommended that State provide hands-on counterterrorism training to help post officials identify terrorist surveillance and quickly respond to an impending attack. They also recommended State implement an accountability system to reduce complacency about following these procedures. After each investigation, State told Congress it would implement these recommendations, yet we found that State’s hands-on training course is still not mandatory for all personnel going to posts, and procedures to monitor compliance with security requirements have not been fully implemented. According to State, training has been hindered by limitations in funding and training capacities, and implementing new accountability procedures globally is a long-term process. We also found that ambassadors, deputy chiefs of mission, and RSOs were not trained in how to implement embassy procedures intended to protect U.S. officials outside the embassies. Five of the 11 ARB investigations have focused on attacks on U.S. officials on their way to work (see fig. 3): (1) the June 1988 assassination of a post official in Greece, (2) the April 1989 assassination of a post official in the Philippines, (3) the March 1995 assassination of two post officials in Pakistan, (4) the October 2002 assassination of a post official in Jordan, and (5) the October 2003 assassination in Gaza of three post contractors from Israel. Several of these ARBs recommended that State provide better training, indicating that security briefings were not sufficient to identify preoperational surveillance by terrorists or to escape the attack once under way. In addition, several ARBs found that State lacked monitoring or accountability mechanisms to ensure that U.S. officials complied with personal security measures. For example, a recent ARB recommended that supervisors at all levels monitor their subordinates’ implementation of these countermeasures. Although State agreed with the ARB’s recommendations and reported to Congress that it planned to implement them, many have yet to be fully implemented. For example, State’s hands-on training course, which teaches surveillance detection and counterterrorism driving skills, is still not required and has been taken by relatively few State Department officials and their families. State provided posts with some additional guidance to improve accountability, such as making personal security mandatory and holding managers responsible for the “reasonable” oversight of their staff’s personal security practices, but we found implementation in the field to be incomplete. Furthermore, there are no monitoring mechanisms to determine if post officials are following the new security procedures. State reported to Congress that it agreed with the ARB recommendations to provide counterterrorism training. Specifically, in 1988, it reported that it “agreed with the general thrust of the recommendations” to provide hands-on training and refresher courses. In 1995, State reported that it “re-established the Diplomatic Security Antiterrorism Course (DSAC)” for those going to critical-threat posts to teach surveillance detection and avoidance, and defensive and evasive driving techniques. In 2003, State reported it agreed with the recommendations that employees from all agencies should receive security briefings and indicated that it would use the Overseas Security Policy Board (OSPB) to review the adequacy of its training and other personal security measures. State implemented the board’s recommendation to require security briefings for all staff.
In December 2003, the OSPB members agreed that predeparture security briefings should be mandatory for all officials planning to work at posts abroad. On March 23, 2004, State notified posts worldwide that, starting June 1, 2004, personal security briefings would be required for all U.S. personnel working at posts. State has required that its officials attend predeparture security briefings, such as Serving Abroad for Families and Employees, since 1987. The briefing covers a variety of post-related issues, including alcoholism, fires, crime, sexual assaults, and terrorist surveillance. Once officials arrive at their posts, they receive country-specific security briefings by the RSO. In addition, RSOs can provide threat-specific security briefings on a case-by-case basis. Family members are strongly encouraged to attend both predeparture and post security briefings. Figure 4 provides additional information on the security briefings and training available to U.S. officials and their families. However, few officials or family members working at embassies have taken DSAC. State offers DSAC as an elective to post officials and spouses going to high- and critical-threat posts. State does not track the number of officials who have taken DSAC; thus, it is not clear how many officials have received this training. State officials estimate that 10 percent to 15 percent of department officials have taken the course, and this appears consistent with our findings at the five posts we visited. DSAC consists of 2 days of surveillance detection training, 2 days of counterterrorism driving, and 1 day of emergency medical training. During our visits to five posts, we found significant disparities in the levels of security briefings and training of post personnel. We held a variety of round-table discussions at each of the five posts we visited, including with senior and junior State Department officials, non-State officials, and officials from the law enforcement, intelligence, and defense communities. We found that post officials from the law enforcement, intelligence, and defense communities had generally received rigorous hands-on training in areas such as surveillance detection, counterterrorism driving, emergency medical procedures, and weapons handling. Officials who had completed DSAC-type training agreed that hands-on training was needed to give people the skills and confidence to identify and respond to terrorist threats. In contrast, relatively few other officials, including those from State, had received DSAC-type counterterrorism training. For example, we found that roughly 10 percent of State Department officials indicated they had taken hands-on training; the figure was even smaller for other employees. Officials gave several reasons for not attending DSAC: they were not aware the course was offered, did not believe they were eligible, or were under pressure to quickly transfer to their new posts. They also told us that the course often conflicted with other training offered by State. Senior DS officials said they recognize that security briefings, like Serving Abroad for Families and Employees, are no longer adequate to protect against current terrorist threats. In response, DS developed a proposal in June 2004 to make DSAC training mandatory. The proposal would provide training, at an estimated cost of about $3.6 million, to about 775 officials, including 95 eligible family members, from all agencies working at critical-threat posts.
DS officials said that DSAC training should also be required for all officials, but that issues related to costs, adequacy of training facilities, and the ability to obtain Overseas Security Policy Board agreement were constraining factors. As of April 18, 2005, the proposal had not been approved.

Although State has agreed since 1988 on the need to implement an accountability system to promote compliance with personal security procedures, there is still no system in place to ensure that post personnel are following personal security practices. Despite ARB recommendations to implement accountability mechanisms for personal security, it remains State’s position that security outside the post is primarily a personal responsibility. As a result, there is no way to determine whether post officials are following prescribed security guidelines. Beginning in 2003, State tried to incorporate some limited accountability to promote compliance. However, based on our work at five posts, we found that post officials are not following many of these new procedures.

In response to the 2003 ARB, State took a number of steps to improve compliance with State’s personal security procedures for officials outside the embassy, including the following:

In June 2003, State revised its annual assessment criteria, known as the core precepts, so that rating and reviewing officials could take personal security into account when preparing performance appraisals. Posts were notified of this new requirement on July 30, 2003.

On December 23, 2003, State made a number of revisions to its Foreign Affairs Manual (FAM), such as stating that employees should implement personal security practices.

On May 28, 2004, State notified posts worldwide on the use of a Personal Security Self-Assessment Checklist.

However, none of the posts we visited were even aware of these key policy changes. For example, none of the officials we met with, including ambassadors, DCMs, RSOs, or staff, were aware that the annual ratings process now includes an assessment of whether staff are following the personal security measures or that managers are now responsible for the reasonable oversight of subordinates’ personal security activities. Furthermore, none of the supervisors were aware of the checklist, and we found no one was using the checklists to improve their personal security practices.

Moreover, State’s original plan to use the checklist as an accountability mechanism was dropped before it was implemented. In its June 2003 report to Congress on implementation of the 2003 ARB recommendations, State stipulated that staff would be required to use the checklist periodically and that managers would review the checklists to ensure compliance. However, State never implemented this accountability mechanism. According to State officials, they dropped the accountability features out of concern that the review would be too time-consuming.

We found that State had not issued any guidance on how these new policies and practices should be implemented or monitored. For example, the Foreign Affairs Manual does not specify how managers are to provide for the “reasonable” oversight of their staff’s personal security practices or how to ensure compliance and oversight. As a result, post staff were not sure how these new policies should be implemented. In addition, RSOs lacked guidance on how to promote these new policies.
RSOs and supervisors stated that they have no responsibility or authority to monitor post employees for compliance with the new security policies, and the officials we spoke with at five posts said they did not have, and did not want, this responsibility.

When we discussed our preliminary findings with DS officials, they noted a range of challenges associated with improving security for officials outside the post. State’s primary focus has been, and will continue to be, protecting U.S. officials inside the post, since posts are considered higher-value targets symbolically and because of the potential for mass casualties. In explaining why posts were not aware of the new personal security regulations, DS officials noted that posts were often overwhelmed by work and may have simply missed the cables and changes in the Foreign Affairs Manual. They also noted that changes like this take time to be implemented globally. Nonetheless, improving security outside the embassy is critical and, according to a number of State officials, improvements in this area must start with the ambassador and the deputy chief of mission. Yet we noted that they, along with the RSOs, were not trained in how best to provide such security before going to post. For example, based on our observations at the training courses and a review of the course material, the ambassador, deputy chief of mission, and RSO training courses did not address how State’s personal security guidelines could best be promoted. The instructors and DS officials agreed that this critical component should be added to their training curriculum.

In response to congressional direction and funding, State, in 2003, began developing a multiphase Soft Targets program that provides basic security hardware to protect U.S. officials and their families at schools and some off-compound employee association facilities. However, we found that the scope of the program is not yet fully defined, including the criteria for school selection.

In response to direction in both the House Conference report and Senate Appropriations Subcommittee report, State addressed the issue of providing security enhancements to overseas schools attended by dependents of U.S. officials and American citizens. In 2003, State began developing a plan, known as the Soft Targets program, to expand security for overseas schools to protect against terrorism. Specifically, State’s Office of Overseas Schools, Overseas Buildings Operations, and DS have been working together on the program. The program has four proposed phases. The first two phases focused on department-sponsored schools that had previously received grant funding from the State Department. In phase one of the program, department-sponsored schools were offered funding for basic security hardware such as shatter-resistant window film, two-way radios for communication between the school and the embassy or consulate, and public address systems (see fig. 7). As of November 19, 2004, 189 department-sponsored schools had received $10.5 million in funding for security equipment in phase one of the program. The second phase of the program addresses any additional security enhancements that department-sponsored schools could benefit from and takes into consideration the local threat level, the nature of the vulnerability and measures required to correct the deficiency, and the percentage of U.S. government dependent students in the school.
Schools have requested funding for security enhancements such as perimeter fencing, walls, lighting, gates, and guard booths (see fig. 8). As of November 2004, State had obligated over $15 million in funding for department-sponsored schools for phase two security upgrades. Phase three of the program is intended to address the security enhancement needs of nondepartment-sponsored schools overseas attended by dependents of U.S. government officials or U.S. citizens. This phase provides funding for phase one enhancements such as shatter-resistant window film, radios, and public address systems. State plans to implement the fourth phase of the Soft Targets program to include phase two enhancements for nondepartment-sponsored schools overseas that qualify.

Within the Soft Targets program, State also has focused on enhancing the security of embassy and consulate employee associations that have facilities off-compound, such as recreation centers. The Bureau of Overseas Buildings Operations has been collecting data on the security needs of these facilities to determine the type of security equipment or upgrades that would be most beneficial. The facilities, working with the RSO at post, have been asked to identify physical security vulnerabilities that could be exploited by terrorists. As of September 2004, 24 of the 34 posts with off-compound employee association facilities had requested a total of $1.3 million in security upgrades, which includes funding for perimeter walls and shatter-resistant window film. In fiscal year 2004, almost $1 million was obligated by State for security enhancements at off-compound employee association facilities.

RSOs said that identifying and providing funding for phase one and phase two security enhancements at department-sponsored schools was straightforward because of the preexisting relationship with these schools. However, they said it has been difficult to identify nondepartment-sponsored schools for phase three of the program. Some RSOs told us they were not sure about the criteria for approaching nondepartment-sponsored schools in phase three and were seeking guidance from headquarters on this issue. For example, some RSOs were not sure what the minimum number of American students attending a school needed to be for the school to be eligible to receive grant money for security upgrades. Some RSOs at the posts we visited were considering offering funding to schools with as few as one to five American students. Moreover, one RSO was seeking guidance on what constitutes a school and questioned whether informal facilities attended by children of U.S. missionaries could qualify for the program.

State officials told us they sent cables to posts in the summer of 2004 with more detailed information on school selection. They explained that they have asked RSOs to gather data on nondepartment-sponsored schools attended by American students, particularly U.S. government dependents. State officials from DS and the Bureau of Overseas Buildings Operations (OBO) acknowledged that the process of gathering data has been difficult since there are hundreds of such schools worldwide. According to an OBO official, as of December 2004, only about 81 of the more than 250 posts had provided responses regarding such schools. OBO officials stated they will use the data to develop criteria for which schools might be eligible for funding under phase three and, eventually, phase four of the program.
In anticipation of any future phases of the Soft Targets program, OBO officials further explained they have also asked RSOs to identify other facilities and areas that Americans frequent, beyond schools and off-compound employee association facilities, that may be vulnerable to a terrorist attack.

State’s primary program in place to protect U.S. officials and their families at their residences, the Residential Security program, is principally designed to deter crime, not terrorism. The program includes basic security hardware and guard service; as the threat increases, the hardware and guard services can be correspondingly increased at the residences. State officials said that while the Residential Security program, augmented by the local guard program, provides effective deterrence against crime, it could provide limited or no deterrence against a terrorist attack. To provide greater protection against terrorist attacks, some posts we visited used surveillance detection teams in residential areas, despite guidance that limits their use primarily to the embassy.

State is responsible for providing a secure housing environment for U.S. officials and their families overseas. Housing options could include single-family dwellings, apartments, and compound and clustered housing. Each post is responsible for designing and implementing its Residential Security program based on factors that include host country law enforcement capabilities, the post-specific threat environment, and available funding. The Residential Security program includes basic security hardware, such as alarms, shatter-resistant window film, access control measures, and local guards. As the threat increases, hardware and guard services can be correspondingly increased at the residences. The standards used to determine the minimum acceptable level of residential security protection are guided by threat ratings established in the Security Environment Threat List. For the Residential Security program, DS uses the standards for the threat rating categories of political violence and crime, though not for terrorism. Standards for residential security also differ depending on the types of residences.

Security at the residences can be augmented by the use of local guards. Local guard functions vary by threat ratings for crime and political violence and by the type of residence protected. The local guard program for residential security may include mobile patrols, quick reaction forces, and stationary guards. Figure 9 provides an illustration of a stationary guard at a residence. The mobile patrols are assigned responsibility for visiting residences periodically and for responding to alarms at residences or when emergencies arise. All posts we visited utilized local guards for some aspect of residential security; some posts, due to the higher threat levels, had more comprehensive local guard coverage than others. For example, all posts we visited had mobile patrols for residential neighborhoods, while only two posts had stationary guards at residential housing. Moreover, some posts with mostly apartment housing had a guard or doorman stationed at the entrance of the building to provide a first line of security, primarily against crime. Post officials, including RSOs, told us that the Residential Security program provides effective deterrence against crime and could provide some deterrence against a terrorist attack, though State officials felt it could provide little or no deterrence against a terrorist attack.
State and post officials indicated that the biggest concern at residences, when considering the type of security to implement, has been the threat from crime. However, as the threat environment has changed and terrorists have shifted tactics from kidnapping to detonating car bombs outside of residences, some posts have changed their housing profile. Some posts we visited limited the number of U.S. officials living in specific apartments or neighborhoods to minimize the risk and consequences of a residential terrorist attack. For example, post management at two of the posts we visited has decided to limit the number of Americans in apartment housing to 25 percent of the entire building population to minimize the impact of a car bomb detonated outside residential housing. Some senior DS officials told us that the best residential scenario for posts is to have a variety of housing options, including apartments and single-family homes. By having a mix of housing options, post officials are dispersed, reducing the number of potential targets.

To provide greater protection against terrorist attacks, most posts we visited used surveillance detection teams in residential areas. The Surveillance Detection program was implemented in response to the U.S. Embassy bombings in Nairobi, Kenya, and Dar es Salaam, Tanzania. The mission of the program is to enhance the ability of all posts to detect preoperational terrorist surveillance directed against primary diplomatic facilities, such as the embassy. According to State’s Surveillance Detection Operations Field Guide and the Foreign Affairs Handbook, surveillance detection units can be used to cover other facilities, such as off-compound employee association facilities and residences, only if there is a specific threat directed against such areas. In addition, surveillance detection can be used to cover large official functions. At many of the posts we visited, the RSOs were routinely utilizing surveillance detection units to cover areas outside key embassy facilities, such as residences, school bus stops and routes, and schools that U.S. embassy dependents attend. RSOs told us that the Surveillance Detection program is instrumental in providing deterrence against potential terrorist attacks. Furthermore, some RSOs told us that the use of surveillance detection at school bus stops and outside schools provides a sense of comfort for post officials and their spouses who have dependents in international or American schools. During our post visits, some RSOs argued that the current program guidelines are too restrictive and that State should allow flexibility in using surveillance detection for areas outside the embassy deemed appropriate by the RSO. Senior State officials told us that, while the use of surveillance detection in soft target areas could be beneficial, the program is labor intensive and expensive, and any expansion of the program could require significant funding.

The State Department is responsible for protecting more than 60,000 employees and their families who work overseas. Recent terrorist attacks and threats have heightened demands that State provide adequate safety and security outside embassy compounds. We found that State has not yet developed a strategy addressing the appropriate level of protection needed for schools, places of worship, and private sector recreation facilities where employees and families tend to congregate.
State officials are concerned about the feasibility and costs associated with providing protection for these “soft targets.” Prior investigations into attacks against U.S. officials have resulted in recommendations that State implement improvements to protect U.S. officials against terrorist attacks. However, our analysis indicated that State has not fully implemented several of these recommendations related to training and accountability mechanisms designed to improve personal safety. Overall, we believe State should develop a strategy to protect U.S. officials and their families and, as part of this effort, undertake an assessment of the level of protection to be afforded to officials and their families while commuting, and at residences, schools, and other community-based facilities. We also believe that State should provide adequate counterterrorism training and fully implement its accountability mechanisms to afford greater awareness and implementation of security safeguards for U.S. officials and their family members while outside the embassy compounds.

We recommend that the Secretary of State, working with the Overseas Security Policy Board, take the following 11 actions:

Include in the current development of a comprehensive soft target strategy information that (1) determines the extent of State’s responsibilities for providing security to U.S. officials and their families outside the embassy; (2) addresses the legal and financial ramifications of funding security improvements to schools, places of worship, and the private sector; (3) develops programs and activities with FAM standards and guidelines to provide protection for those areas for which State is deemed responsible; and (4) integrates elements of the soft targets program into the embassy emergency action plan.

Mandate counterterrorism training and prioritize which posts, officials, and family members should receive counterterrorism training first; track attendance to determine compliance with this new training requirement; and add a “soft target protection” training module to the ambassadorial, deputy chief of mission, and RSO training to promote the security of U.S. officials and their families outside the embassy.

Fully implement the personal security accountability system that State agreed to implement in response to the 2003 ARB for all embassy officials, and develop related accountability standards for the Foreign Affairs Manual that can be used to monitor compliance.

The Department of State provided written comments on a draft of this report (see app. II). State generally agreed with most of our report recommendations and said it would examine the others. Specifically, State agreed to incorporate a soft target training module into RSO training and stated that the department would ensure that similar training be developed and added to the ambassadorial and deputy chief of mission training to promote the security of U.S. officials and their families outside the embassy. The department also agreed to track attendance at the counterterrorism training course if it becomes a requirement, and noted that, as of March 2005, all diplomatic security courses are now tracked for enrollment and attendance. With regard to the recommendation to fully implement the personal security accountability system, State agreed to reiterate, through additional notifications and guidance, the accountability requirements and other tools available to improve personal security.
Regarding our recommendation that State develop a comprehensive strategy, State indicated that it was prepared to examine, in conjunction with the OSPB, the contents and recommendations of the report as they relate to its security programs, but it did not indicate whether it would incorporate any of the specific elements of the recommendations into its new soft targets security strategy. State expressed concern that our draft report mischaracterized the department’s responsibility to protect Americans living abroad and implied that State was responsible for providing these Americans a level of protection similar to that provided to diplomats and their families. We have clarified the scope and methodology section and the text of the report to focus on State’s roles and responsibilities to protect U.S. diplomats and their families, and have deleted references to how State provides safety and security support to U.S. citizens abroad.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 1 day from the report date. At that time, we will send copies of this report to interested congressional committees and to the Secretary of State. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4268 or at [email protected]. Another contact and staff acknowledgments are listed in appendix III.

To determine how the State Department protects U.S. officials and their families while outside the embassy, we reviewed State documents and conducted interviews with State officials in Washington, D.C. In addition, we reviewed documents, conducted interviews, and held roundtable discussions with State and other agency officials at four U.S. embassies and one consulate overseas. In Washington, D.C., we reviewed the Diplomatic Security sections of State’s Foreign Affairs Manual and Foreign Affairs Handbook and read numerous State cables pertaining to personal security and other security practices. In addition, we reviewed eight Accountability Review Board (ARB) reports and State’s responses to Congress based on these ARBs, and met with the Chairman of the Amman, Jordan ARB.

We interviewed officials from a number of State bureaus and offices. We met with officials from State’s Bureau of Diplomatic Security (DS), including officials from the Office of International Programs, Office of Facility Protection Operations, Office of Physical Security Programs, Office of Intelligence and Threat Analysis, Office of Regional Directors, Office of Countermeasures, and DS Training. We also met with officials from State’s Bureau of Overseas Buildings Operations, Office of Management Policy, Office of Overseas Schools, Office of Commissary and Recreation Affairs, Foreign Service Institute, and Office of the Inspector General. Moreover, we met with representatives of the Overseas Security Policy Board. To better understand the support for the Soft Targets program, we met with executive members of the American Foreign Service Association (AFSA) and also reviewed a number of congressional reports that mention the protection of soft targets.

To obtain firsthand experience of the security and antiterrorism training available to State and non-State personnel, we attended a number of training courses and briefings.
We attended the 2-day Security Overseas Seminar, the 5-day Diplomatic Security Antiterrorism Course, and Regional Security Officer security in-briefings at the posts we visited. We also attended sections of the Ambassadorial Seminar and the Regional Security Officer Training to better understand how the issue of protecting U.S. officials and their families outside the embassy was addressed.

We conducted fieldwork at five posts—four embassies and one consulate—in four countries. We chose the posts based on a number of factors, including variety in post size and post terrorism threat levels. At each of the posts, we generally met with the Ambassador, the Deputy Chief of Mission, DS and other State officials, and post officials representing other U.S. government agencies, including personnel from the law enforcement, intelligence, and defense communities. We also held roundtables, at all posts, with State and non-State officials as well as spouses of post officials, to obtain information on their security awareness and training. At most of the posts we visited, we met with representatives of the post’s Emergency Action Committee and the host nation police. In addition, we met with representatives of the Overseas Security Advisory Council at some posts. To better understand the Soft Targets program, we met with school officials at American or international schools in each country. Finally, we observed residential security measures at post housing at each post we visited.

To assess the reliability of the funding data for the Soft Targets program, we asked State officials to respond to a standard set of data reliability questions. Based on their responses and follow-up discussions, we determined that the data used in the report for Soft Targets funding were sufficiently reliable for the purposes of this report.

Our focus on soft target protection pertains primarily to U.S. government officials and their families and other post personnel who fall under chief of mission authority and not to the entire American community abroad. To limit the scope of our review, we did not assess the security advice or assistance provided through the Overseas Security Advisory Council, the Antiterrorism Assistance Program, the consular warden system, or evacuations. We also did not undertake a comprehensive review of residential housing to determine which residential option provides the most effective deterrent against terrorist attacks. We conducted our work from March 2004 through February 2005 in accordance with generally accepted government auditing standards.

The following are GAO’s comments on State’s letter dated April 18, 2005.

1. We agree that State does not have an official definition of soft targets and modified the text, where appropriate, to make this clear. Given this absence, we relied upon a State Department travel warning that included the specific language used in the draft report.

2. State indicated that, had we used a narrower definition of soft targets, it could have dramatically changed the conclusions of our work. We disagree. Our report focuses on State Department efforts to protect U.S. officials and their families from terrorist threats at their homes, recreation centers, and schools, and while they are commuting and living outside the embassy compounds.

3. Although State, in its comments, indicated that it has long had a “security strategy” to protect U.S. officials and their families outside the embassy, it was never able to produce such a document.
In addition, while State has a number of programs and activities designed to protect U.S. officials and their families at soft target areas, senior DS officials agreed that these programs are not tied together in an overall strategy. In January 2005, State agreed that it should develop a comprehensive soft target strategy and, as part of that effort, undertake a formal evaluation of how existing programs can be more effectively integrated and whether new programs might be needed to fill any potential gaps. State said it planned to complete the strategy by June 1, 2005.

4. We have removed references to “other Americans” throughout the report, except in reference to the Soft Targets program, which covers U.S. children and teachers who have no affiliation with the U.S. government. We have also modified the scope and methodology to show that our focus is “primarily” on the protection of U.S. government officials and their families.

5. We have clarified the sentence by indicating that RSOs were unclear about which schools could qualify for security assistance under phase three of the Soft Targets program. Phase three, because it can encompass all schools in a country with one or more Americans, can potentially include vastly more schools than in phase one or two of the program. We recognize that the department’s Soft Targets Working Group is currently defining parameters for which schools could qualify under phase three, in addition to identifying other vulnerable off-compound facilities. We believe that a soft target strategy could help identify which schools most urgently need security improvements.

6. We clarified the report to stipulate that these reports focused on the security of U.S. officials.

7. See GAO comment 1. We have also changed the word “defines” to “considers.”

8. It is not uncommon for GAO to clarify, add specificity, and thus make adjustments or changes to a requested engagement, provided that these adjustments and changes are discussed and agreed upon by the requester. We informed State of these changes.

9. See GAO comment 4.

10. See GAO comment 1.

11. See GAO comment 4.

12. The appropriations subcommittee report language is within the scope of the GAO review because it covers U.S. officials and their dependents, which is the primary focus of our review. Moreover, this language was based on testimony provided by AFSA out of concern that the department was not providing adequate security for U.S. diplomats and their families while they are outside of the embassy compound. GAO agrees that the subcommittee report language is not binding, and we are not judging the department’s performance against this language. However, we agree with the subcommittee, as State has, that State should develop a comprehensive soft targets strategy.

13. In our draft, we noted that the officials were attacked on their way to work, either in their driveway or as they drove to a work site. The Gaza attack occurred while the officials were on their way to the work site.

14. See GAO comment 4.

We have incorporated technical comments in the report where appropriate.

In addition to the above-named individuals, Edward George and Andrea Miller made key contributions to this report. Joe Carney, Martin De Alteriis, Etana Finkler, Ernie Jackson, Elizabeth Singer, and Michael Derr provided technical contributions.
U.S. government officials working overseas are at risk from terrorist threats. Since 1968, 32 embassy officials have been attacked--23 fatally--by terrorists outside the embassy. As the State Department continues to improve security at U.S. embassies, terrorist groups are likely to focus on "soft" targets--such as homes, schools, and places of worship. GAO was asked to determine whether State has a strategy for soft target protection; assess State's efforts to protect U.S. officials and their families while traveling to and from work; assess State's efforts overseas to improve security at schools attended by the children of U.S. officials; and describe issues related to protection at their residences.

State has a number of programs and activities designed to protect U.S. officials and their families outside the embassy, including security briefings, protection at schools and residences, and surveillance detection. However, State has not developed a comprehensive strategy that clearly identifies safety and security requirements and resources needed to protect U.S. officials and their families abroad from terrorist threats outside the embassy. State officials raised a number of challenges related to developing and implementing such a strategy. They also indicated that they have recently initiated an effort to develop a soft targets strategy. As part of this effort, State officials said they will need to address and resolve a number of legal and financial issues.

Three State-initiated investigations into terrorist attacks against U.S. officials outside of embassies found that the officials lacked the necessary hands-on training to help counter the attack. The investigations recommended that State provide hands-on counterterrorism training and implement accountability measures to ensure compliance with personal security procedures. After each of these investigations, State reported to Congress that it planned to implement the recommendations, yet we found that State's hands-on training course is not required, the accountability procedures have not been effectively implemented, and key embassy officials are not trained to implement State's counterterrorism procedures.

State instituted a program in 2003 to improve security at schools, but its scope has not yet been fully determined. In fiscal years 2003 and 2004, Congress earmarked $29.8 million for State to address security vulnerabilities against soft targets, particularly at overseas schools. The multiphase program provides basic security hardware to protect U.S. officials and their families at schools and some off-compound employee association facilities from terrorist threats. However, during our visits to posts, regional security officers were unclear about which schools could qualify for security assistance under phase three of the program.

State's program to protect U.S. officials and their families at their residences is primarily designed to deter crime, not terrorism. The Residential Security program includes basic security hardware and local guards, which State officials said provide effective deterrence against crime, though only limited deterrence against a terrorist attack. To minimize the risk and consequences of a residential terrorist attack, some posts we visited limited the number of U.S. officials living in specific apartment buildings. To provide greater protection against terrorist attacks, some posts we visited used surveillance detection teams in residential areas.
NASA’s programs encompass a broad range of complex and technical activities—from investigating the composition and resources of Mars to providing satellite and aircraft observations of Earth for scientific and weather forecasting. NASA currently funds more than 100 programs and projects in various phases of execution in 7 strategic Enterprises: Space Science, Earth Science, Biological and Physical Research, Aeronautics, Space Flight, Education, and Exploration Systems. Two NASA offices have key responsibilities in ensuring the effective execution of these programs: the Office of the Chief Financial Officer, which is responsible for providing oversight and financial management of agency resources and establishing related policy guidance, and the Office of the Chief Engineer, which is responsible for ensuring development efforts and mission operations are planned and conducted using sound engineering practices.

More than two-thirds of NASA’s work force is made up of contractors and grantees, and 90 percent—or roughly $13 billion—of NASA’s annual budget is spent on work performed by its contractors. Since 1990, we have identified NASA’s contract management as a high-risk area. This assessment has been based in part on our repeated finding that NASA does not have good cost-estimating processes or the financial information needed to develop good cost estimates for its programs, making it difficult for NASA to oversee its contracts and control costs. For example, in July 2002, we reported that an independent task force convened to assess the management of the International Space Station concluded that the program’s fiscal year 2002 through fiscal year 2006 budget was not credible because of weaknesses in its cost-estimating processes. The task force pointed out that these problems occurred because NASA had not instituted or had ignored many of the program’s control and contract oversight procedures—such as preparing a full life-cycle cost estimate—that should have alerted the agency to the growing cost problem and the need for mitigating actions. According to the cost analysis team that supported the task force, NASA’s focus on staying within annual budgets instead of managing total program costs was perhaps the single greatest factor in the program’s cost growth.

NASA’s unreliable cost estimates have significant implications for potential future endeavors, such as those outlined by the President in January of this year. Specifically, the President called for a shift in NASA’s long-term focus, envisioning that NASA will retire the shuttle program as soon as assembly of the International Space Station is completed, planned for the end of the decade; develop a new crew exploration vehicle and launch human missions to the moon between 2015 and 2020; and build a permanent lunar base as a stepping stone for more ambitious missions. To achieve these goals, the President proposed spending $12 billion over the next 5 years—about $1 billion of which would come from an increase in NASA’s budget, currently $15.4 billion—with the remaining $11 billion being reallocated from existing NASA programs.

Developing reliable cost estimates has been difficult for agencies across the federal government.
The need for reliable cost estimates is at the heart of two of the five governmentwide initiatives in the 2002 President’s Management Agenda (PMA); the two are “improved financial performance” and “budget and performance integration.” These initiatives are aimed at ensuring that federal financial systems produce accurate and timely information to support operating, budget, and policy decisions and that budgets are performance-based. As part of these initiatives, the President calls for changes to the budget process to better measure the real cost and performance of programs. According to the PMA, accomplishing all of the crosscutting initiatives will matter little without the integration of agency budgets with performance.

As of April 2003, the baseline development cost estimates for the programs we reviewed varied considerably from the programs’ initial baseline estimates. More than half of the programs’ development cost estimates increased, and for some programs, this increase was significant. The baseline development cost estimates for each of the 10 programs we reviewed in detail were rebaselined—that is, recalculated to reflect new costs, time frames, or resources associated with changes in program objectives, deliverables, or scope and plans. Although NASA provided specific reasons for the increased cost estimates and rebaselinings—such as delays in the development or delivery of key system components and funding shortages—it does not have guidance for determining when rebaselinings are justified. Such criteria are important to instilling discipline in the cost-estimating process.

Most of the 27 programs we reviewed experienced a change in their development cost estimates. While 8 of the 27 programs experienced slight decreases in their development cost estimates, 17 experienced cost growth—as much as almost 94 percent. The remaining two programs had no change. Ten of the 17 programs experienced cost growth greater than 25 percent. Table 1 shows the development cost estimate changes from the initial baseline to the baseline as of April 2003 and the life-cycle cost estimate for each of the 27 programs. The 10 programs that we reviewed in detail are shaded and italicized. (See app. II for assessments of the 10 programs and app. III for descriptions of the remaining 17 programs.)

The development cost estimates for each of the 10 programs that we reviewed in detail have been rebaselined—for some programs, as many as four times—and for 7 of the 10 programs, the cost estimate increased each time it was rebaselined (see fig. 1). For the 10 programs we reviewed in detail, NASA cited specific reasons for changes in the baseline development cost estimates and the recalculated baselines—many of which were related to technical problems and subsequent delays in the development or delivery of key system components, and insufficient funding and reserves, as illustrated in the following examples:

Technical problems in the MERs program required a significant redesign of components and the development of a new landing system. Two of MERs’ three rebaselinings were also the result of inadequate reserves. According to NASA officials, without the rebaselinings, the development cost “to go” would have drained the program’s reserves.

The increase in CLCS’s development cost estimate and rebaselining was the result of poorly defined requirements and design, software integration problems, and fundamental changes in the project’s management structure and contractors’ approach to the work.
The project, which experienced an almost 94 percent increase in its baseline development cost estimate, was ultimately terminated.

The GP-B program—which was rebaselined four times—experienced significant schedule slippages due to repeated technical problems, including failures in the probe’s heat exchanger, the need for additional testing, payload electronics delays, and thermal vacuum test failures.

Schedule slippages in the SIRTF program—which contributed to increases in the program’s baseline development cost estimate and four rebaselinings of the estimate—were caused by delays in the delivery of components, flight software, and the mission operation system, as well as launch delays that resulted from a handling accident involving a global positioning system payload and concerns about delamination on the launch vehicle’s solid rocket motors.

Changes in development cost estimates for the CAU program were primarily the result of the program’s expanded scope, which occurred in October 2002, to produce modification kits that would allow the CAU upgrade to be installed in the orbiters.

The Hyper-X program experienced three rebaselinings, and according to the project manager, the program will be rebaselined again in the near future. The rebaselinings were due to schedule slippages resulting from the need to fund an investigation of the problems experienced in the first Mach 7 flight vehicle—which was destroyed in flight—and related corrective actions to the second Mach 7 flight.

Revised contract requirements, funding changes, or the realization that program goals are not achievable may require a formal rebaselining. However, NASA has not defined or provided guidance or restrictions on rebaselining to ensure that programs consistently and appropriately apply rebaselinings and do not adjust their baseline cost estimates whenever the estimates become unmanageable. Further, NASA lacks a process for systematically identifying and assessing programs that are not achieving their cost, schedule, and performance goals. Such a process has been employed by the Department of Defense (DOD), which also relies heavily on contractors to deliver complex, cutting-edge technologies to meet its mission. Specifically, DOD must report to the Congress programs that incur a cost growth of 15 percent or more in the program baseline. Moreover, DOD must justify the continuation of acquisition programs that incur a cost growth of 25 percent or more in the program baseline by certifying that specific criteria have been met—including that the new cost estimates are reasonable. Under such a process, 5 of the 10 programs that we reviewed in detail would have been required to report to the Congress, and 4 of the 5 programs would have had to certify that their new cost estimates were reasonable.
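To illustrate the arithmetic behind such thresholds, the following sketch computes cost growth against an initial baseline and flags the DOD-style 15 and 25 percent reporting levels described above. It is illustrative only; the program names and dollar figures are invented, and the sketch is not drawn from DOD's or NASA's actual systems.

```python
# Illustrative only: a minimal sketch of a baseline-breach check using
# hypothetical program data. The 15 and 25 percent thresholds mirror the
# DOD reporting rules discussed in the text.

def cost_growth_percent(initial_baseline: float, current_estimate: float) -> float:
    """Return percentage growth of the current estimate over the initial baseline."""
    return (current_estimate - initial_baseline) / initial_baseline * 100.0

def breach_category(growth_pct: float) -> str:
    """Classify growth against DOD-style reporting thresholds."""
    if growth_pct >= 25.0:
        return "report to Congress and certify new estimate"
    if growth_pct >= 15.0:
        return "report to Congress"
    return "no reporting threshold breached"

# Hypothetical (initial baseline, current estimate) pairs, in millions of dollars.
programs = {
    "Program A": (300.0, 330.0),   # 10 percent growth
    "Program B": (250.0, 300.0),   # 20 percent growth
    "Program C": (400.0, 776.0),   # 94 percent growth
}

for name, (baseline, estimate) in programs.items():
    growth = cost_growth_percent(baseline, estimate)
    print(f"{name}: {growth:.0f}% growth -> {breach_category(growth)}")
```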
NASA has yet to implement a well-defined process for estimating the cost of its programs—a weakness we and NASA’s Inspector General have repeatedly reported. Recognizing the need for such a process, NASA developed a cost-estimating handbook in 2002—the first such guidance provided to its cost-estimating community and program and project managers. Despite this effort, the programs we reviewed failed to follow key cost-estimating processes, including developing and documenting full life-cycle cost estimates, summarizing estimates according to the current breakdown of work to be performed, conducting an uncertainty analysis, performing an independent review of contractors’ cost estimates, and later using earned value management (EVM) to assess progress.

Reflecting Office of Management and Budget (OMB) guidance and best practices of government and industry leaders, NASA requires that full life-cycle cost estimates be prepared using full cost accounting, that estimates be summarized according to the current breakdown of work to be performed, and that major changes be tracked to the life-cycle cost. In its draft cost-estimating handbook, NASA lists a number of steps that are integral to preparing a reliable life-cycle cost estimate, including preparing or obtaining a cost analysis requirements description (CARD), developing ground rules and assumptions, and developing cost range and risk assessments. Carnegie Mellon University’s Software Engineering Institute (SEI) echoes the need for reliable cost-estimating processes in managing software implementations—identifying tasks to be estimated, mapping the estimates to the breakdown of work to be performed, and identifying and explaining assumptions are among SEI’s requisites for producing reliable cost estimates.

To evaluate the cost-estimating processes of the 10 NASA programs that we reviewed in detail, we selected 14 criteria based on SEI checklists (see table 2). Many of these criteria are included in NASA’s cost-estimating guidance. Despite NASA requirements and OMB and SEI guidance, few of the 10 programs that we reviewed in detail met even a third of these criteria; only one met half. Further, none of the programs fully met certain key criteria. For example, none provided a complete life cycle with definitions or a complete description of the methodology used to generate the complete cost estimate, such as data sources and uncertainties. According to the draft NASA cost-estimating handbook, a reliable life-cycle cost estimate is critical to making realistic decisions about developing or producing a system and to determining the appropriate scope or size of a program. NASA guidance also calls for breaking down the work to be performed into smaller units—a work breakdown structure (WBS)—to facilitate cost estimating and program and contract management and to help ensure relevant costs are not omitted. However, only 3 of the 10 programs provided a complete breakdown of the work to be performed. Table 3 shows for each program the applicable criteria that were met, partially met, or not met. (See app. II for a program-by-program assessment.)

Failing to meet these criteria puts programs at certain risk. For example, underestimating a program’s full life-cycle costs creates the risk that a program could be underfunded and subject to major cost overruns, which would ultimately result in the program being reduced in scope or additional funding being requested and appropriated to ensure the program meets its objectives. Conversely, overestimating life-cycle costs creates the risk that a program will be deemed unaffordable and would, therefore, go unfunded. Without a complete WBS, NASA programs cannot ensure that the life-cycle cost estimates have captured all relevant costs, which again can result in underfunding and cost overruns.
Further, inconsistent WBS estimates across programs can create problems of double counting or, worse, underestimating costs when using historical program costs as a basis for projecting future costs on similar programs.

Despite the uncertainty inherent in estimating the cost of emerging technologies, all of the 10 programs we reviewed also failed to conduct an uncertainty analysis to assess risks associated with the cost estimates. Instead, the programs expressed their cost estimates as point values—which implies certainty—not as ranges or numbers with confidence levels. Performing an uncertainty analysis, such as a Monte Carlo simulation, quantifies the amount of cost risk within a program. Only by quantifying the cost risk can management make informed decisions about risk mitigation strategies. Quantifying cost risks also provides a benchmark against which future progress can be measured. Without this knowledge, NASA may have little specific basis to determine adequate financial reserves, schedule margins, and technical performance margins to provide managers the flexibility needed to address programmatic, technical, cost, and schedule risks, as required by NASA policy.

Seven of the 10 programs also failed to have an independent review of contractors’ cost estimates—as required by NASA. Instead, programs established their budgets based on contractor proposals—particularly problematic since many contractors could bid low in order to win the contract. To ensure contractor costs are realistic, NASA procedures and guidelines specifically require programs to ensure that independent reviews are conducted and that these reviews address project life-cycle costs, risk management plans, as well as technical issues. Without such reviews, NASA decision makers lacked the benchmarks needed to assess the reasonableness of the contractors’ proposed costs, limiting NASA’s ability to make sound investment decisions and accurately assess contractor performance.

Finally, only two programs used EVM—an approach used by DOD and leading companies to provide meaningful assessments of a program’s progress by comparing the value of work performed to its costs, rather than the traditional management approach of comparing budgeted and actual costs, which can provide a distorted view of a program’s progress. (For a detailed discussion of EVM, see app. IV.) By using the value of completed work as a basis for estimating the cost and time needed to complete the program, EVM can alert program managers to potential problems early in the program. NASA requires that EVM be used on all significant contracts—that is, research and development contracts with a total anticipated final value of $70 million or more, and production contracts with a total anticipated final value of $300 million or more—which includes all of the 10 programs we reviewed in detail. Although the program managers for all 10 programs stated that EVM was used in their projects, only two programs provided cost performance reports, indicating a true EVM process was in place. The remaining eight programs relied on NASA Form 533, which captures planned and actual obligations and expenditures—not the value of the work performed. Without a true EVM process, program managers cannot readily determine whether a program is at risk of cost and schedule overruns until it is too late to make programmatic changes to avoid these risks.
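The calculations at the core of EVM are straightforward. The following sketch, using invented figures rather than data from any NASA contract, shows the standard comparison of planned value, earned value, and actual cost, and why looking only at budgeted versus actual costs can mask a problem: in this hypothetical case, a budget-versus-actual comparison shows $56 million spent against $50 million planned, while the earned value figures reveal that only $42 million worth of work was actually accomplished.

```python
# Illustrative only: standard earned value calculations with invented numbers.
# BAC is the budget at completion; the cumulative figures below are hypothetical.

BAC = 120.0  # total budgeted cost for the work, in millions of dollars

# Cumulative figures for the current reporting period, in millions of dollars:
planned_value = 50.0   # BCWS: budgeted cost of work scheduled
earned_value = 42.0    # BCWP: budgeted cost of work actually performed
actual_cost = 56.0     # ACWP: actual cost of work performed

cost_variance = earned_value - actual_cost        # negative means over cost
schedule_variance = earned_value - planned_value  # negative means behind schedule
cpi = earned_value / actual_cost                  # cost performance index
spi = earned_value / planned_value                # schedule performance index

# One common estimate at completion: assume the current cost efficiency continues.
estimate_at_completion = BAC / cpi

print(f"Cost variance: {cost_variance:+.1f}M, schedule variance: {schedule_variance:+.1f}M")
print(f"CPI: {cpi:.2f}, SPI: {spi:.2f}")
print(f"Estimate at completion: {estimate_at_completion:.1f}M vs. budget of {BAC:.1f}M")
```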
There are several impediments that NASA needs to overcome to implement effective cost-estimating practices. These include the lack of reliable financial data and other performance information; lack of trained EVM staff, data analysis tools, and incentive for supporting and implementing EVM; and ineffective use of cost analysts. NASA has initiated several measures to begin addressing some of these impediments.

According to NASA officials, state-of-the-art cost-estimating tools have been funded and implemented. For example, NASA officials told us that commercial-off-the-shelf models have been used to estimate hardware and software acquisition costs and quantify the level of uncertainty surrounding cost estimates. However, these cost-estimating tools are only as good as the data they rely on to develop the estimates. For more than a decade, we have reported that NASA has failed to develop a system to capture reliable financial and performance information, posing significant challenges to NASA’s ability to estimate and control program costs. Over the past year alone, we issued numerous reports on NASA’s Integrated Financial Management Program (IFMP)—the agency’s third and most recent effort to implement a modern, integrated financial management system. Specifically, we found that IFMP—which is under the responsibility of the Program Executive Officer for IFMP—will not, as it is being implemented, routinely provide program managers and other key stakeholders and decision makers—including the Congress—with the financial-related information needed to measure program performance and ensure accountability. For example, the core financial module (considered the backbone of the system) does not appropriately capture property, plant, and equipment, as well as material, in its general ledger at the transaction level—which is needed to provide independent control over these assets. In addition, NASA implemented the system before it had the capability to capture the full costs of its programs and projects. According to headquarters officials, collecting nonfinancial data crucial to cost estimating—such as technology readiness levels, parts counts, and team and management experience and skill ratings—has also been difficult.

According to headquarters officials, agencywide EVM implementation efforts began in 1996 and are recognized by NASA management as a key tool in monitoring and measuring cost trends in higher-risk project elements—a tool that serves as an early warning of the need for cost-risk mitigation actions to maintain control of program costs. These officials stated that EVM has been applied to the International Space Station Program and with varying levels of emphasis to other programs and projects at different NASA centers. While all of the program managers for the 10 programs that we reviewed in detail stated that they used EVM, only 2 of the programs used a true EVM process. NASA headquarters officials identified several challenges that have affected the agency’s ability to implement EVM effectively, including a lack of staff and data analysis tools. According to officials, resource constraints have prevented the agency from staffing many project offices with appropriate personnel to fulfill all project functions. In addition, little or no priority has been given to including a trained EVM analyst, even if one were available. Headquarters officials also noted that EVM has been hampered by the lack of a practical automated software data analysis tool.
Without such a tool, analyzing the contractors’ EVM cost performance reports, which contain significant amounts of data, became a cumbersome undertaking that often resulted in incomplete and untimely analyses that were of little use in informing management decisions.

A lack of incentive to support EVM has further undermined its use. Some project managers we spoke with are skeptical about the benefits of EVM and argue that it has failed to help them manage or control program costs. According to NASA headquarters officials, during proposal and contract negotiation phases, contractors have also suggested not using EVM as a way to reduce contract costs. While EVM was included in most contracts for the 10 programs we reviewed in detail—as required by NASA policy—it was used as a cost-estimating tool in only two programs. In general, EVM has been viewed by NASA as a financial reporting tool. Consequently, there is little incentive to use EVM because the data needed to report financial activity are captured elsewhere, such as in Form 533.

NASA’s efforts to improve its cost-estimating processes have also been undermined by ineffective use of its limited number of cost-estimating analysts. For example, headquarters officials state that as projects entered the formulation phase, they have typically relied on program control and budget specialists—not cost analysts—to provide the financial services to manage projects. Yet budget specialists are generally responsible for obligating and expending funding—not for conducting cost analyses that underlie the budget or ensuring budgets are based on reasonable cost estimates—and, therefore, tend to assume that the budget is realistic. While NASA officials state that the agency’s cost-estimating staff is too limited to be involved in day-to-day project execution activities, they agreed that the cost analysts could be more effectively used throughout the life cycle—particularly when projects are rebaselined and independent cost estimates of project changes must be performed.

In some cases, cost analysts are not appropriately located in the organization, which may compromise controls NASA has in place to ensure reasonable cost estimates. For example, some cost analysts at NASA’s centers are located with senior systems engineers in systems management organizations, while others are not. According to NASA officials, housing the cost analysts with senior systems engineers has two key benefits. First, the systems engineers generally conduct systems analyses to help ensure that a program’s requirements are properly established and that the design and validity meet the requirements. Such analyses can greatly inform the development of reasonable cost estimates. Second, the systems engineering offices afford some measure of independence for cost estimating, which, according to NASA cost-estimating guidance and procedures, is important to the overall project management process. However, NASA officials stated that several of its centers’ cost analysts are in the advocacy chain of command—not housed with senior systems engineers. For example, one center’s 15 cost analysts work in the center’s Office of the Chief Financial Officer—which is responsible for directing the development and execution of the center’s budget—not in the systems management organization, which is independent from the rest of the center.
As a result, the cost analysts’ estimates may not be adequately informed by the systems engineers and may lack the objectivity required to ensure that the criteria for independence have been met.

NASA has several initiatives under way to improve the agency’s cost-estimating processes. First, NASA has established a Cost Analysis Division in the Office of the Comptroller to strategically manage analyses related to directing and funding research, improving cost-estimating processes and practices, and providing cost-estimating tools and training throughout the agency. The division also provides, along with the Independent Program Assessment Office (IPAO), the last independent cost estimate of projects before the information is released externally. These efforts are being coordinated through a steering committee composed of the managers of the cost analysis organizations from each of the centers and IPAO’s deputy director. NASA is revising the cost sections in its governing procedures and guidelines and is finalizing its cost-estimating handbook to reflect these changes. These documents will require the routine use of probabilistic cost risk analysis, a cost analysis requirements description (CARD), cost as an independent variable (CAIV), and EVM. The CARD supports the project life-cycle cost estimate and a congressionally required independent cost estimate. Agency officials note that while there has been some use of CARDs in the agency, their first concentrated and successful use was in the 2001 to 2002 independent cost estimate for the International Space Station program. According to headquarters officials, NASA’s revised guidance and finalized cost-estimating handbook will provide direction and guidance for fully implementing the use of CARDs for major development projects. Although NASA calls for CAIV to be used routinely and notes that CAIV demonstrates a commitment to evolutionary acquisition, it has yet to provide guidance on its implementation. NASA headquarters officials stated that guidance relating to improvements in the collection of cost data is also being reflected in its revised governing procedures and guidelines.

With respect to EVM, NASA headquarters officials described several efforts under way to ensure agencywide implementation of true EVM. For example, NASA recently revised its EVM policy directives to shift ownership of EVM responsibilities from NASA’s Chief Financial Officer to NASA’s Chief Engineer, to emphasize that EVM is to be considered a project management tool rather than a financial management tool. NASA officials also noted that the agency is working to inform managers of the performance management capabilities available to them through EVM and to emphasize the importance of providing adequate resources and management support to ensure successful EVM implementation. Agencywide goals for EVM implementation include promoting the effective use of EVM and providing needed training and education for program and project staff. These efforts and proposed initiatives should help resolve EVM utilization problems. Finally, NASA officials told us that the agency is planning to hire additional cost analysts to alleviate understaffing at all of its center cost analysis offices. The agency envisions a total staff of about 100 cost analysts along with additional support contractors. NASA officials also stated that it is necessary to ensure centers address the problem of having cost analysts located in the advocacy chain of command, a problem that could affect five NASA centers.
Because NASA’s initiatives have only recently been implemented or are still in the drafting or planning stage, we cannot determine to what degree these efforts will enable NASA to provide reasonable and defensible cost estimates of its programs and projects.

There are numerous scientific and technical challenges inherent in the successful implementation of many NASA programs. Nevertheless, the need to choose among competing alternatives within limited budget resources makes it essential that the agency and the Congress clearly understand the costs and uncertainties of programs proposed for authorization and funding. Yet, NASA does not have the disciplined cost-estimating process needed to make informed acquisition decisions, nor does the agency have processes and tools for capturing, monitoring, and managing program costs and schedules within an implementation plan on a timely basis. This makes it difficult for senior NASA officials, program and project managers, and other key stakeholders to measure performance and initiate mitigation measures when needed. Taken together, the lack of disciplined and established cost-estimating processes and tools can cause program officials to restructure projects to available resources rather than develop realistic cost estimates and implementation plans for projects. As a result, programs may have to be modified to accommodate emerging technical, cost, and schedule realities. Ultimately, programs cost more, fail to meet their schedules, or deliver less than originally envisioned.

To help minimize the project cost increases and implementation delays identified in this report, NASA needs to instill disciplined cost-estimating processes into its project development and approval activities and to ensure such processes are integrated with its implementation of an integrated financial management system. Without a process that prevents programs from proceeding before they have sufficiently demonstrated that key cost-estimating criteria have been met, NASA programs will continue to be at risk of cost and schedule overruns. Improvements to NASA’s cost-estimating processes will partly depend on the agency’s ability to address recommendations that we made in November 2003 to help ensure NASA effectively implements a modern, integrated financial management system.

Notwithstanding the need to address those recommendations, to better position NASA to ensure its recent initiatives result in sound cost-estimating practices agencywide, we are making three recommendations with minimum suggested courses of action. First, we are recommending that the NASA Administrator direct the Program Executive Officer for IFMP, the Chief Financial Officer, and the Chief Engineer to develop an integrated plan for improving cost estimating that, at a minimum, includes specific actions for ensuring that (1) guidance is established on rebaselining and that rebaselining is consistently applied to provide accountability among programs; (2) true earned value management is used as an organizational management tool to bring cost to the forefront in NASA’s management decision-making process; (3) acquisition and earned value management policies and procedures are implemented; and (4) staff and support for cost-estimating and earned value analyses are effectively used. In addition, we recommend that the NASA Administrator direct the Chief Financial Officer to establish a standard framework for developing life-cycle cost estimates.
At a minimum, the framework should require each program or project to (1) base its cost estimates on a full life cycle for the program—including all direct and indirect costs for operations and maintenance and disposal as well as planning and procurement—and on a work breakdown structure that encompasses both in-house and contractor efforts; (2) prepare a cost analysis requirements description; (3) prepare an independent government estimate at each milestone of the program; and (4) conduct a cost risk assessment that identifies the level of uncertainty inherent in the estimate. Further, we recommend that the NASA Administrator develop procedures that would prohibit proposed projects from proceeding through the review and approval process when they do not address the elements of the recommended cost-estimating practices.

In written comments on a draft of this report, NASA’s Deputy Administrator stated that the agency concurs with the recommendations, adding that the recommendations validate and reinforce the importance of activities under way at NASA to improve cost estimating and program management. Notwithstanding agreement with our recommendations, the Deputy Administrator believes NASA has made substantive changes and achieved significant improvements in its cost-estimating processes. For example, NASA’s comments on a draft of this report cite a 1992 GAO report (GAO/NSIAD-93-97) that found a median 77 percent increase in NASA program costs. According to the Deputy Administrator, this contrasts with a 13 percent cost growth in this present study. While there may be improvements in the percent of cost growth of some projects, such declines in cost growth are often achieved by rescoping and rebaselining projects to remain within available resources, as was demonstrated in a number of projects discussed in this report. We do not believe other examples cited by the Deputy Administrator, namely termination of the Checkout and Launch Control System and cost control measures imposed on the International Space Station, demonstrate that NASA has made substantive changes and achieved significant improvements in its cost-estimating processes. Rather, we believe these examples demonstrate what happens when projects are undertaken without a full understanding of the potential costs and management challenges inherent in many of the programs NASA proposes and then implemented without adequate financial management systems in place.

With regard to our recommendation to develop guidelines for rebaselining and ensure effective use of earned value management, the Deputy Administrator cited the development of revised direction on program and project management and a refocus on risk and cost-risk analysis. NASA also now requires the establishment of cost thresholds that, if exceeded, will require a rebaselining review. Further, because much of NASA’s work is performed through grants and contracts, NASA’s revised procedures will emphasize how risk and technical complexity affect contractor performance. New earned value management and acquisition policies and procedures will be implemented through program management councils that will review and approve programs and projects regularly through each step of their development. Also, a new Cost Analysis Division has been established, and cost-estimating staff has been added to it and NASA’s Independent Program Assessment Office. NASA also noted the importance of training needed to match the new requirements.
NASA’s Deputy Administrator also concurred with our recommendation to establish a standard framework for developing life-cycle cost estimates. According to the Deputy Administrator, NASA’s new processes and procedural requirements document will define the full life-cycle cost to include development, operations, maintenance, disposal, and all NASA in-house direct and indirect costs to eliminate ambiguity and ensure consistency. NASA’s revised cost-estimating handbook will provide further guidance for life-cycle cost estimates. Also, project managers will be responsible for developing and maintaining a cost analysis requirements document similar to a tool DOD uses that will include the equivalent of a project and technical description; key performance parameters, including documentation of actual work breakdown structure cost elements; and initial and annual updates of the life-cycle cost estimates. NASA guidance will also require periodic independent cost estimates on major programs and approval by the respective program management council to enter into implementation after an independent estimate has been completed.

Lastly, NASA’s Deputy Administrator concurred with our recommendation to prohibit proposed projects from proceeding through the review and approval process when they do not address the elements of the recommended cost-estimating practices. Accordingly, NASA’s forthcoming procedural requirements will define the authority of the program management councils that will, according to NASA, enforce the requirements, including the required information, documentation, and management methods needed for proceeding through the review and approval process. The Deputy Administrator also noted the availability of recent management information system improvements that enhance visibility over project and program performance.

In his general comments, the Deputy Administrator also stated that NASA had recently taken steps to address issues raised in the draft report and suggested a report title that would better reflect that progress. We agree that NASA has initiated a number of reforms to its project development and implementation processes that, if properly implemented, would be positive steps toward addressing many of the problems noted in this report. However, we also note that some of these problems have been long-standing in the projects discussed in this report and in a number of other projects we and NASA’s Office of Inspector General have reviewed. Furthermore, planned improvements in the past have fallen short of agencywide implementation. For example, poor or inadequate cost estimates and management oversight have been central to the problems that plagued several programs, including the programs intended to develop new space transportation systems and the International Space Station program. A reliable financial management structure is central to the success of many measures noted by the Deputy Administrator in his reply. We recently reported and testified on the impediments that exist in achieving such a capability. Finally, we note that contract management has been a long-standing problem at NASA. In 1990, we identified NASA’s contract management function as an area at high risk. During that time, there was little emphasis on end results, product performance, and cost control. NASA found itself procuring expensive hardware that did not work properly. This report shows that these types of problems still exist.
Regarding the Deputy Administrator’s suggestion that we revise the title of our report to reflect recent progress that NASA has made toward addressing issues that we raise, we considered the concerns expressed in the Deputy Administrator’s comments and, consistent with our stated position that NASA’s improvements are positive steps but that its problems still persist, revised the title accordingly. We believe NASA’s improvements are now properly reflected in the report’s title. Finally, until NASA’s integrated financial management system, which is central to providing effective management and oversight, is fully implemented, performance assessments relying on cost data may be incomplete and full costing will be only partially achieved. And until these problems are resolved and the measures the Deputy Administrator noted in commenting on a draft of this report are fully implemented and integrated into the way the agency does business, NASA’s contract management function will continue to be an area of concern.

As agreed with your office, unless you announce its contents earlier, we will not distribute this report further until 30 days from its date. At that time, we will send copies to the NASA Administrator and interested congressional committees. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or [email protected]. Key contributors to this report are acknowledged in appendix VI.

To determine cost estimates in selected NASA programs and any changes in those estimates, we asked NASA to provide a list of programs that were currently in the development phase and programs that had completed development or were launched in fiscal year 2001 or 2002. We also asked NASA to provide the initial baseline development cost estimate and current cost estimate for the development phase and life of the program, and the reasons for changes to initial development cost estimates. NASA identified 68 programs that were currently in development or had completed development in fiscal years 2001 and 2002. These included planetary missions and Earth observatory, aeronautical technology, and space flight systems. From that universe, we selected at least one program (10 in total) from 5 of NASA’s 7 Enterprises. This involved 6 of 9 NASA centers (and the Jet Propulsion Laboratory) with lead responsibility for one or more of these programs. Our selection was generally based on programs with the highest current development cost estimates within an Enterprise. We compared the initial development cost estimates NASA provided to the current development cost estimates for the programs. The initial development estimates generally reflect the projected costs at the time a new program was first approved by the Congress. The current development and life-cycle cost estimates reflect the latest estimates provided by NASA as of April 2003. We also interviewed program officials to obtain additional information related to NASA’s revisions to initially established baseline development cost estimates, including the rationale for changes to the cost estimates. We also analyzed the initial and current development cost estimates for 17 additional NASA programs, later added to the scope of our review, to ascertain the level of cost growth or decline as those programs progressed through the development phase.
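The cost growth and decline figures cited throughout this report follow directly from comparing each program's initial and current development estimates. The minimal sketch below, written in Python with purely hypothetical program names and dollar amounts (they are not NASA data), illustrates that calculation:

    # Hypothetical initial and current development cost estimates, in millions of dollars.
    # These figures are illustrative only; they are not actual NASA program data.
    estimates = {
        "Program A": {"initial": 250.0, "current": 310.0},
        "Program B": {"initial": 400.0, "current": 388.0},
    }

    for program, cost in estimates.items():
        change = (cost["current"] - cost["initial"]) / cost["initial"] * 100
        direction = "growth" if change >= 0 else "decline"
        print(f"{program}: {change:+.1f} percent cost {direction} "
              f"(initial {cost['initial']:.1f}, current {cost['current']:.1f})")

Applied to each of the programs we reviewed, this is the arithmetic behind the growth and decline percentages reported in this appendix.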
To assess NASA’s cost-estimating processes and methodologies, we used cost-estimating criteria developed by Carnegie Mellon University’s Software Engineering Institute (SEI) designed to assess the reliability of project cost and schedule estimates. SEI is a government-funded research organization that is widely considered an authority on software implementation. SEI developed checklists with these criteria to help evaluate software costs and schedule; however, SEI states that these checklists are equally applicable to hardware and systems engineering projects. We first analyzed NASA’s cost-estimating procedures and guidelines to determine if they incorporated key components of good cost-estimating practices advocated by SEI and other experts. Based on that analysis, we selected 14 criteria from two SEI reports to use in assessing NASA’s cost-estimating practices for the 10 programs we selected to review in detail. Our selection of the 14 criteria from the SEI reports was based, in part, on their commonality with NASA cost-estimating procedures and guidelines. Finally, using the cost-estimating documentation provided by NASA for the 10 programs, we determined the extent to which the programs met the 14 criteria. If a program provided substantiating evidence for a criterion, we determined that the program “fully met” the criterion. If partial evidence was provided for a criterion, we determined the program “partially met” the criterion. If no evidence was found, then we determined that the criterion was “not met.” Table 2 describes each of the 14 criteria and the significance of each criterion.

To identify any barriers that make it difficult to address weaknesses in NASA’s cost-estimating processes, we reviewed our recent work on NASA’s efforts to implement a modern integrated financial management system. We also provided questions to NASA headquarters that asked for information regarding NASA’s ability to use its cost estimates as a management tool for its programs. We also provided questions related to the SEI criteria, and NASA’s responses to these questions provided further insight into the agency’s cost-estimating management process at the organizational level. In addition, we interviewed officials in NASA headquarters’ Office of the Chief Financial Officer and Office of the Chief Engineer, and the center project managers for the 10 programs and other appropriate personnel to obtain further perspective on this issue. To accomplish our work, we visited NASA headquarters, Washington, D.C., and Goddard Space Flight Center, Maryland. We also contacted officials at Marshall Space Flight Center, Alabama; Jet Propulsion Laboratory, California; Kennedy Space Center, Florida; Glenn Research Center, Ohio; Johnson Space Center, Texas; and Langley Research Center, Virginia. We conducted our work from February 2003 to March 2004 in accordance with generally accepted government auditing standards.

This appendix provides a program-by-program assessment of the 10 NASA programs we reviewed in detail.
Each assessment provides a brief description of the program’s mission; the status of the program—that is, whether it is in development, operational, or terminated; the year the program was initiated; the fiscal year in which the Congress approved the program—that is, when full-scale design and development funds were appropriated; a comparison of the initial and current (as of April 2003) baseline development cost estimates; and an assessment of the program’s cost-estimating processes, methodologies, and practices to determine the extent to which they met the 14 cost-estimating criteria that we used to measure program performance. (Table 4 shows for each criterion the number of programs that met, partially met, or did not meet the criterion.) In addition to the 10 programs that we reviewed in detail, we analyzed the initial and current development cost estimates for 17 other NASA programs.

NASA’s TIMED satellite is conducting the first global study of the Earth’s mesosphere, lower thermosphere, and ionosphere—segments of the Earth’s atmosphere located between 40 and 110 miles above the planet. Initially, TIMED’s mission was to last 2 years, beginning with its launch in December 2001, but NASA extended the satellite’s orbital operations through 2006. TIMED’s goal is to improve our understanding of the influences the sun and humans have on this “gateway region” as well as the effects of its atmospheric variability on satellites and spacecraft reentering the Earth’s atmosphere.

INTEGRAL is a European Space Agency mission, with Russian and U.S. involvement. Launched in October 2002, the INTEGRAL satellite is equipped with two telescopes designed to register elusive gamma rays—some of the universe’s most energetic radiation—and give insight into the most violent processes in our universe. Through INTEGRAL, scientists plan to study black holes’ interaction with their surroundings, the explosion of supernovae and their role in forming chemical elements, the nature of powerful gamma-ray bursts, and transient sources that suddenly change brightness. U.S. participation consists of co-investigators providing hardware and software components to the spectrometer and imager instruments, a co-investigator for the data center, a mission scientist, and a provision for ground tracking and data collection.

Rosetta is a European Space Agency mission whose objectives are to study the origin of and the relationship between comets and interstellar material and to improve our knowledge of the origins of the Solar System. The Rosetta satellite was launched in March 2004 and, after a long cruise phase, is planned to rendezvous with comet Churyumov-Gerasimenko in 2014. Plans call for Rosetta to orbit the comet while taking scientific measurements and to position a probe on the comet surface to take in-situ measurements. U.S. involvement includes developing three remote-sensing instruments and a subsystem for a fourth instrument.

Currently scheduled to launch during a 15-day period that opens July 30, 2004, the MESSENGER spacecraft is intended to collect images of Mercury. Through these images, NASA scientists hope to determine Mercury’s geological history and the nature of its surface composition, core, poles, exosphere and magnetosphere, and magnetic field. This information is expected to provide scientists with a better understanding of how Earth was formed, how it evolved, and how it interacts with the sun.
Through STEREO—an international collaboration involving France, Germany, the United Kingdom, and the United States—NASA plans to trace the flow of energy and matter from the sun to Earth by studying the solar origin of coronal mass ejections, their evolution in the heliosphere, and their effects on geospace. Twin STEREO observatories, scheduled to be launched in November 2005, will be used to develop a three-dimensional, time-dependent model of the magnetic topology, temperature, density, and velocity structure of the ambient solar wind. Because coronal mass ejections are the prime drivers of major space weather hazards, STEREO is expected to greatly improve our understanding of the most severe disturbances of the Sun-Earth system. The observatories will also provide a continuous data stream for the purpose of real-time space weather forecasts.

The SOFIA observatory—a modified Boeing 747 aircraft with a permanently installed telescope, which NASA plans to begin flying in 2005—will be used to study different astronomical objects and phenomena, including star births and deaths; solar system formations; complex molecules in space; planets, comets, and asteroids in our solar system; nebulae and dust in galaxies; and black holes at the centers of galaxies. The telescope, provided through a partnership with the German Aerospace Center, is designed to provide routine access to nearly all of the visual, infrared, far-infrared, and submillimeter parts of the spectrum. As such, SOFIA is expected to extend the range of astrophysical observations significantly beyond that of previous infrared airborne observatories through increases in sensitivity and angular resolution. NASA plans to incorporate new or upgraded technologies over the aircraft’s lifetime to allow additional scientific exploration. Because most of the instruments are to be designed and built by graduate students and post-doctoral scientists in universities throughout the United States, SOFIA will serve as a training ground for the next generation of instrument builders.

The Solar-B program’s objectives are to investigate the interaction between the Sun’s magnetic field and its corona and to understand the sources of solar variability. Solar-B is a Japanese Institute of Space and Astronautical Science mission, with significant U.S. involvement, and follows the Solar-A collaboration among Japan, the United Kingdom, and the United States. The observatory is designed to consist of a set of optical, extreme ultraviolet, and X-ray instruments, and NASA is expected to provide components for each. The Solar-B observatory is scheduled to be launched on a Japanese M-V rocket out of Kagoshima, Japan, in September 2006.

The European Space Agency’s Herschel Space Observatory (formerly the Far Infrared and Submillimetre Telescope, or FIRST) houses an infrared telescope that is expected to observe virtually unexplored spectrum wavelengths that cannot be observed from the ground. Scheduled for launch in February 2007, Herschel is expected to enable scientists to better understand galaxy formation, evolution in the early universe, and the nature of active galaxy power sources; star-forming regions and interstellar medium physics in the Milky Way and other galaxies; and the molecular chemistry of cometary, planetary, and satellite atmospheres in our solar system.
NASA is providing components for two of the three instruments that will be flown on Herschel: the Heterodyne Instrument for Far Infrared and the Spectral and Photometric Imaging Receiver.

Launched in February 2000, Terra is providing measurements that, according to NASA, are significantly contributing to the understanding of the total Earth system. Specifically, Terra is collecting 200 gigabytes of data each day on the earth’s physical and radiative properties of clouds, air-land and air-sea exchanges of energy, carbon, and water, as well as measurements of trace gases and volcanology. One of the first operational uses of Terra was to provide imagery to support the U.S. Forest Service’s efforts to combat forest fires in the western United States. Through Terra, fire fighters were able to identify the locations of active fires, instead of locations of smoke, providing them with the data needed to better control spreading fires. Terra data were also used by the Geography Department of Dartmouth College in New Hampshire to assist in flood hazard reduction programs.

NASA’s New Millennium Program (NMP) is designed to identify, develop, and flight-validate key instrument and spacecraft technologies that can enable new or more cost-effective approaches to conducting science missions. EO-1—the first NMP mission, launched in November 2000—includes three land imaging instruments that are expected to lead to a new generation of lighter weight, higher performance, and lower cost Landsat-type Earth surface imaging instruments.

The mission of the Jason-1 program, a cooperative effort with the French Space Agency, is to study the global oceans. Launched in December 2001, the Jason-1 satellite was expected to monitor ocean circulation and events such as El Nino and ocean eddies and to improve global climate forecasts and predictions. The Jason-1 satellite was positioned to orbit the earth in tandem with TOPEX/Poseidon, an earlier generation satellite launched in 1992, to provide data to the National Oceanic and Atmospheric Administration.

The SeaWinds satellite, launched in December 2002, is providing high-resolution ocean surface wind data used for studies of ocean circulation, climate, and air-sea interaction to understand global climate changes and weather patterns better. By using long-term wind data in numerical weather and wave prediction models, SeaWinds is expected to improve weather forecasts near coastlines and storm warning and monitoring.

The Calipso satellite, scheduled for launch in 2005, is being designed to study the effect that aerosols and clouds have on the Earth’s radiation balance, which ultimately controls the temperature of the Earth. Calipso is expected to provide scientists with data to construct three-dimensional structures of the atmosphere, enabling new observationally based assessments of the radiative effects of aerosol and clouds that will greatly improve our ability to predict future climate change. NASA plans to fly Calipso in formation with Aqua and CloudSat, a satellite being designed to measure the vertical structure of clouds from space and contribute to a better understanding of the role of clouds in the Earth’s climate system. The Calipso program is a cooperative effort with France.

The X-38 Crew Return Vehicle was cancelled in April 2002, due to its single-purpose design and the potentially high costs identified by an independent assessment. The purpose of the CRV project was to initiate work toward an independent U.S.
crew return capability for the International Space Station. As envisioned, CRV was expected to serve as a back-up to the space shuttle orbiters by providing resupply to the station or change-out crew, or accommodating safe return for up to seven crew members who may be ill or injured or in the event that a catastrophic failure of the station made it unable to support life. ATP’s primary objectives were to significantly improve the safety and operating margins of the high-pressure turbopump in the space shuttle’s main engine and to eliminate the need to remove the turbopump for postflight maintenance. An alternative turbopump was successfully implemented in the shuttle launched in April 2002. According to NASA, ATP’s development contract, signed in December 1986, specifically addressed shortcomings of the previous turbopumps; took advantage of the latest technologies; and applied lessons learned. The contract called for the parallel development of two high-pressure turbopumps—one that operates on oxidization and one on fuel. However, 5 years into the program, technical problems prompted NASA to end parallel development and concentrate first on developing the oxidizer turbopump, which was first flown in July 1995. Although development of the fuel turbopump resumed in 1994, extreme high temperatures, pressures, and rotor speeds resulted in significant design challenges and the design certification review was not completed until March 2001. The full implementation of the fuel turbopump into flight was completed beginning with the April 2002 shuttle flight. In December 2002, the TDRS Replenishment project achieved its goal: launch three geosynchronous satellites to replace the existing aging satellite constellation, and thereby continue to provide space network tracking, data, voice, and video services to NASA scientific satellites, the Space Shuttle program, the International Space Station, and other NASA customers. According to NASA, the functional and technical performance requirements for the replacement satellites—launched in June 2000, March 2002, and December 2002—are virtually identical to those of the previous satellites. AHMS is expected to provide safe shutdown of the space shuttle main engine during potentially catastrophic high-pressure turbopump failures through improved monitoring of engine vibration and anomaly response capabilities. According to NASA, AHMS modifications include (1) adding a vibration redline monitor for high pressure turbopumps, (2) doubling memory capacity and employing radiation tolerant memory, (3) adding an external communication interface for a potential phase-two health management computer, and (4) eliminating existing memory retention batteries and replacing them with nonvolatile memory. While NASA stated the AHMS will be available for launch in January 2005, the shuttle fleet’s return to flight date is planned for March or April 2005. Earned value management (EVM) goes beyond the two-dimensional approach of comparing budgeted costs to actuals. Instead, it attempts to compare the value of work accomplished during a given period with the work scheduled for that period. By using the value of completed work as a basis for estimating the cost and time needed to complete the program, earned value can alert program managers to potential problems early in the program. An accurate, valid, and current performance management baseline is needed to perform useful analyses using EVM. 
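To make the early-warning value of these measures concrete, the following minimal sketch in Python computes the standard earned value quantities from three inputs: the budgeted cost of work scheduled (planned value), the budgeted cost of work performed (earned value), and the actual cost of work performed. The dollar figures are hypothetical, and the calculations shown (cost and schedule variances, the cost and schedule performance indices, and a simple estimate at completion) are the generic earned value formulas rather than a NASA- or DOD-specific method:

    # Hypothetical earned value data for one reporting period, in millions of dollars.
    # These numbers are illustrative only; they are not drawn from any NASA program.
    bac = 120.0   # budget at completion: total budgeted cost of all authorized work
    bcws = 40.0   # budgeted cost of work scheduled (planned value to date)
    bcwp = 34.0   # budgeted cost of work performed (earned value to date)
    acwp = 42.0   # actual cost of work performed (actual cost to date)

    cost_variance = bcwp - acwp        # negative means the work cost more than budgeted
    schedule_variance = bcwp - bcws    # negative means less work was done than planned
    cpi = bcwp / acwp                  # cost performance index (below 1.0 signals a cost problem)
    spi = bcwp / bcws                  # schedule performance index (below 1.0 signals a schedule problem)
    eac = bac / cpi                    # simple estimate at completion based on cost efficiency to date

    print(f"Cost variance: {cost_variance:+.1f}  Schedule variance: {schedule_variance:+.1f}")
    print(f"CPI: {cpi:.2f}  SPI: {spi:.2f}  Estimate at completion: {eac:.1f}")

In this hypothetical case, both indices fall below 1.0 and the estimate at completion rises well above the original budget, which is the kind of early warning, available when a program is only partly complete, that the cost performance reports described below are meant to provide.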
In 1996, in response to acquisition reform initiatives, the Department of Defense (DOD) adopted 32 criteria for evaluating the quality of management systems. In general terms, the 32 criteria require contractors to (1) define the contractual scope of work using a work breakdown structure; (2) identify organizational responsibility for the work; (3) integrate internal management subsystems; (4) schedule and budget authorized work; (5) measure the progress of work based on objective indicators; (6) collect the cost of labor and materials associated with the work performed; (7) analyze any variances from planned cost and schedules; (8) forecast costs at contract completion; and (9) control changes. The criteria have become the standard for EVM and have been adopted by major U.S. government agencies, industry, and the governments of Canada and Australia. The full application of EVM system criteria is appropriate for large cost reimbursable contracts where the government bears the cost risk. For such contracts, management discipline prescribed by the criteria is essential. In addition, data from an EVM system have been proved to provide objective reports of contract status, allowing numerous indices and performance measures to be calculated. These can then be used to develop accurate estimates of anticipated costs at completion, providing early warning of impending schedule delays and cost overruns. Table 5 lists the 32 criteria, organized into five basic categories: organization, planning and budgeting, accounting considerations, analysis and management reports, and revisions and data maintenance. The standard format for tracking earned value is through a cost performance report (CPR). The CPR is a monthly compilation of cost, schedule, and technical data, which displays the performance measurement baseline, any cost and schedule variances from that baseline, the amount of management reserve used to date, the portion of the contract that is authorized unpriced work, and the contractor’s latest revised estimate to complete the program. As a result, the CPR can be used as an effective management tool because it provides the program manager with early warning of potential cost and schedule overruns. Using data from the CPR, a program manager can assess trends in cost and schedule performance. This information is useful because trends tend to continue and can be difficult to reverse. Studies have shown that once programs are 15 percent complete, the performance indicators are indicative of the final outcome. For example, a CPR showing a negative trend for schedule status would indicate that the program is behind schedule. By analyzing the CPR, one could determine the cause of the schedule problem such as delayed flight tests, changes in requirements, or test problems because the CPR contains a section that describes the reasons for the negative status. A negative schedule can be a predictor of later cost problems because additional spending is often necessary to resolve problems. CPR data also provide the basis for independent assessments of a program’s cost and schedule status and can be used to project final costs at completion in addition to determining when a program should be completed. Examining a program’s management reserves is another way that a program can use a CPR to determine potential issues early on. Management reserves, which are funds that may be used as needed, provide flexibility to cope with problems or unexpected events. 
EVM experts agree that transfers of management reserves should be tracked and reported because they are often problem indicators. An alarming situation arises if the CPR shows that the management reserves are being used at a faster pace than the program is progressing toward completion. For example, a problem would be indicated if a program has used 80 percent of its management reserves, but only completed 40 percent of its work. A program’s management reserves should contain at least 10 percent of the cost to complete a program so that funds will always be available to cover future unexpected problems that are more likely to surface as the program moves into the testing and evaluation phase. Staff making key contributions to this report were Jerry Herley, Shirley Johnson, Charles Malphurs, Karen Sloan, Madhav Panwar, Karen Richey, Jennifer Echard, and Deborah Lott. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
For more than a decade, GAO has identified the National Aeronautics and Space Administration's (NASA) contract management as a high-risk area--in part because of NASA's inability to collect, maintain, and report the full cost of its programs and projects. Lacking this information, NASA has been challenged to manage its programs and control program costs. The scientific and technical expectations inherent in NASA's mission create even greater challenges--especially if meeting those expectations requires NASA to reallocate funding from existing programs to support proposed new efforts. Because cost growth has been a persistent problem in a number of NASA programs, GAO was asked to examine NASA's cost estimating for selected programs, assess NASA's cost-estimating processes and methodologies, and describe any barriers to improving NASA's cost-estimating processes. To conduct GAO's work, GAO analyzed a total of 27 NASA programs--10 of which GAO reviewed in detail. Considerable change in NASA's program cost estimates--both increases and decreases--indicates that NASA lacks a clear understanding of how much its programs will cost and how long they will take to achieve their objectives. For example, the development cost estimates for more than half of the 27 programs that GAO reviewed have increased and for some programs this increase was significant--as much as 94 percent. Cost estimates changed for each of 10 programs that GAO reviewed in detail. For 8 of the 10 programs, the estimates increased. Although NASA cited specific reasons for the changes, such as technical problems and funding shortages, the variability in the cost estimates indicates that the programs lacked the sufficient knowledge needed to establish priorities, quantify risks, and make informed investment decisions, and thus predict costs. Most notably, NASA's basic cost-estimating processes--an important tool for managing programs--lack the discipline needed to ensure that program estimates are reasonable. Specifically, GAO found that none of the 10 NASA programs that GAO reviewed in detail met all of GAO's cost-estimating criteria, which are based on criteria developed by Carnegie Mellon University's Software Engineering Institute. Moreover, none of the 10 programs fully met certain key criteria--including clearly defining the program's life cycle to establish program commitment and manage program costs, as required by NASA. In addition, only three programs provided a breakdown of the work to be performed. Without this knowledge, the programs' estimated costs could be understated and thereby subject to underfunding and cost overruns, putting programs at risk of being reduced in scope or requiring additional funding to meet their objectives. Finally, only two programs have a process in place for measuring cost and performance to identify risks. NASA has limited ability to collect the program cost and schedule data needed to meet basic cost-estimating criteria. For example, as GAO has previously reported, NASA does not have a system to capture reliable financial and performance data--key to using effectively the cost-estimating tools that NASA officials state that programs employ. Further, without adequate financial and nonfinancial data, programs cannot easily track an acquisition's progress and assess whether the program can meet its cost and schedule goals before it incurs significant cost and schedule overruns. NASA identified other barriers, including limited cost-estimating staff. 
According to NASA officials, several initiatives are under way to remove such obstacles and improve the agency's cost-estimating practices.
NARA’s mission is to safeguard and preserve the records of the U.S. government, ensuring that the people can discover, use, and learn from this documentary heritage. In this way, NARA is to ensure continuing access to the essential documentation of the rights of American citizens and the actions of their government. In carrying out this mission, NARA (among other things) is to provide guidance and assistance to federal officials on the management of records; determine the retention and disposition of records; store agency records in records centers from which agencies can retrieve them; receive, preserve, and make available permanently valuable federal and presidential records; and centrally file and publish federal laws and administrative regulations, the President’s official orders, and the structure, functions, and activities of federal agencies through the daily Federal Register. NARA is organized into six main offices, as well as a number of offices carrying out particular functions. As shown in the organization chart in figure 1, of NARA’s six major offices, two are support offices (the Office of Administration and the Office of Information Services), and four carry out the organization’s primary missions (the Offices of Records Services, Washington, D.C.; Regional Records Services; the Federal Register; and Presidential Libraries). In addition, four independent offices with specialized missions report directly to the Archivist of the United States, and various staff offices (such as General Counsel) provide support. Table 1 shows these organizations, major functions, and the levels of staff in each (expressed as full-time equivalent—FTE). NARA’s operations are dispersed throughout more than 40 facilities in the United States. These facilities include the National Archives Building in Washington, D.C. (housing the nation’s founding documents); the nearby Archives II facility in College Park, Maryland; and its nationwide network of regional archives, records centers, and presidential libraries and museums. Two offices share primary responsibility for performing NARA’s mission to safeguard and preserve federal records: the Office of Records Services, Washington, D.C., and the Office of Regional Records Services. These offices also account for more than two-thirds of NARA’s approximately 3,200 FTE employees. The Office of Records Services, Washington, D.C., has custodial responsibility for the historically valuable records of the three branches of the federal government in the Washington, D.C., area. Through its programs, the office appraises, accessions, preserves, describes, and provides access to these records. Besides the head office, it has six main subdivisions, as shown in table 2. The Office of Regional Records Services is organized into nine regions (each headed by a Regional Administrator) plus the National Personnel Records Center in St. Louis, Missouri. Each region operates records centers, regional archives, and records management programs for the region. In all, the Office of Regional Records Services manages 17 records centers nationwide, which operate on a reimbursable fee-for-service basis. They provide federal agencies with storage of agency records not needed in day-to-day operations, among other services, including records management assistance. 
The regional archives provide the public with free access to the significant historical records of federal agencies for purposes of education, genealogy, history, and research, as well as to facilitate publications in all media. Of the staff of more than 1,400, about half (745) are assigned to the National Personnel Records Center, with the rest allocated to the regions.

In addition, the Office of Information Services plays an important role in records management and preservation through two of its components: The Electronic Records Archives Program Office manages the program to develop ERA, a system that is intended to preserve and provide access to huge volumes of all types and formats of electronic records, independent of their original hardware or software. The Center for Advanced Systems and Technologies works to discover and promote archival technologies, including preservation technologies, to NARA’s offices.

To coordinate records management activities that are performed in both headquarters and the regions, NARA set up the National Records Management Program, headed by the Director of Modern Records Programs within the Office of Records Services, Washington, D.C. Among the goals of setting up the National Records Management Program were to be more responsive to NARA and agency records management needs and goals, improve internal communications, and help clarify staff roles and responsibilities. The program includes about 100 records management staff working in both the Washington, D.C., office and the regions (each region operates one or more records center facilities, each of which has two to four staff that perform records management work).

NARA’s fiscal year 2009 appropriation was about $459 million, while its fiscal year 2010 appropriation is about $470 million. NARA’s budget request for fiscal year 2011 is about $460 million. In addition to annual appropriations acts, NARA’s operations are funded through revenues from the National Archives Trust Fund, Gift Fund, and Revolving Fund (which funds the operations of the regional records centers). NARA’s operations at the presidential libraries are also partially supported by a Presidential Library Trust Fund. Figure 2 provides the reported breakdown of the allocation of budget authority provided to NARA by annual appropriations acts for fiscal year 2009 ($486 million, which includes $26 million carried over in multi-year and no-year funds available for obligation).

The Federal Records Act gives NARA responsibilities regarding both federal records management and preservation of permanent records. Thus, NARA supports agency management of records used in everyday operations (both temporary and permanent) and ultimately takes control of permanent agency records judged to be of historic value. Of the total number of federal records, NARA estimates that less than 3 percent are designated permanent. By statute, some of the responsibilities for oversight of federal records management are divided across several agencies. Under the Federal Records Act, NARA shares a number of records management responsibilities and authorities with the General Services Administration (GSA). Under the Paperwork Reduction Act and the E-Government Act, the Office of Management and Budget (OMB) also has records management oversight responsibilities. Further, the heads of federal agencies are responsible for their agencies’ records. The Federal Records Act establishes requirements for records management programs in federal agencies.
Each federal agency is required to make and preserve records that (1) document the organization, functions, policies, decisions, procedures, and essential transactions of the agency and (2) provide the information necessary to protect the legal and financial rights of the government and of persons directly affected by the agency’s activities. (NARA is assigned responsibilities for assisting federal agencies in this area.) Effective management of these records is critical for ensuring that sufficient documentation is created; that agencies can efficiently locate and retrieve records needed in the daily performance of their missions; and that records of historical significance are identified, preserved, and made available to the public. Records must be managed at all stages of their life cycle, which includes records creation or receipt, maintenance and use, and disposition. Agencies create records to meet the business needs and legal responsibilities of federal programs and (to the extent known) the needs of internal and external stakeholders who may make secondary use of the records. To maintain and use the records created, agencies are to establish internal recordkeeping requirements for maintaining records, consistently apply these requirements, and establish systems that allow them to find records that they need. Disposition involves transferring records of permanent, historical value to NARA for archiving (preservation) and destroying all other records that are no longer needed for agency operations. NARA is responsible for issuing records management guidance; working with agencies to implement effective controls over the creation, maintenance, and use of records in the conduct of agency business; approving the disposition (destruction or preservation) of records; and providing storage facilities for agency records. The Federal Records Act also gives NARA the responsibility for conducting inspections or surveys of agencies’ records and records management programs and practices; conducting records management studies; and reporting the results of these activities to the Congress and OMB. Under the Federal Records Act, disposition of any records (destruction or transfer to the Archives for preservation) requires the approval of the Archivist of the United States. Scheduling is the means by which NARA and agencies identify federal records and determine time frames for disposition. Creating records schedules involves identifying and inventorying records, appraising their value, determining whether they are temporary or permanent, and determining how long records should be kept before they are destroyed or turned over to NARA for archiving. For example, one general records schedule permits civilian agencies to destroy case files for merit promotions (2 years after the personnel action is completed or after an audit by the Office of Personnel Management, whichever is sooner). No record may be destroyed or permanently transferred to NARA unless it has been scheduled, so the schedule is of critical importance. Without schedules, agencies would have no clear criteria for when to dispose of records and, to avoid disposing of them unlawfully, would have to maintain them indefinitely. NARA works with agencies to help schedule records, and it must approve all agency records schedules. It also develops and maintains general records schedules covering records common to several or all agencies. According to NARA, records covered by general records schedules make up about a third of all federal records. 
For the other two thirds, NARA and the agencies must agree upon agency-specific records schedules. Destruction of records before their scheduled disposition date or without NARA approval is unauthorized and unlawful. Specifically, unlawful destruction occurs when permanent records or records that have not been scheduled are destroyed, temporary records are destroyed before the end of their retention period, or records required to be held for other reasons, such as litigation or Freedom of Information Act requests, are destroyed. Agency heads are responsible for preventing unauthorized destruction of records and must make sure employees are informed about the requirement, implement policies and procedures to ensure that records are protected, and report any unauthorized removal or destruction to NARA. If NARA learns of a potential or actual instance of unlawful destruction, the Archivist is required to notify the agency head and assist in initiating action through the Attorney General for the recovery of records and other redress. If the head of the agency does not initiate an action within a reasonable period of time, the Archivist is to request the Attorney General to act and notify the Congress. As the nation’s archivist, NARA is the legal custodian of the records of federal agencies that are determined to have sufficient historical or other value to warrant their continued preservation by the U.S. government. NARA also accepts for deposit to its archives many of the records of the Congress, the Architect of the Capitol, and the Supreme Court. In addition, NARA accepts papers and other historical materials of the presidents of the United States, documents from private sources that are appropriate for preservation (including electronic records, motion picture films, still pictures, and sound recordings), and records from agencies whose existence has been terminated. NARA archives vast quantities of federal records in various formats. According to the agency, its 28 archives and presidential libraries across the United States hold almost 4 million cubic feet of permanent federal paper, photographic, audio, video, and film records and 600,000 artifacts. Its multimedia collections include nearly 300,000 reels of motion picture film; more than 15 million maps, charts, aerial photographs, architectural drawings, patents, and ship plans; more than 200,000 sound and video recordings; and nearly 6 million photographs and graphics. In addition, as of August 2010, NARA has archived about 82.4 terabytes of electronic information. To preserve electronic records, NARA has been working since 2001 to develop an electronic records archive system that is intended to preserve and provide access to very large volumes of all types and formats of electronic records, independent of their original hardware or software. NARA plans for the system to manage the electronic records from their ingestion through preservation and dissemination to customers. The ERA system is being developed in five phases, or increments. NARA has certified initial operating capability of the first two phases of ERA. NARA plans to complete development of the remaining increments and achieve full operating capability by 2012. We have previously reported on the risks NARA faces in its acquisition of ERA, on NARA’s oversight of federal records management, and on the challenges of electronic records management. From 2002 onward, we have issued a series of reports on ERA and its development. 
Most recently, we reported, among other things, that NARA’s plans for completing the final two increments were not sufficiently specific: the most recent expenditure plan did not detail what system capabilities would be delivered in the final two ERA increments or dates for completion. Further, NARA’s management of the requirements for ERA had weaknesses: NARA had established an initial set of high-level requirements to guide the system’s development, but about 43 percent of the requirements had not been allocated to the last two increments, and NARA officials stated that it was uncertain whether they would be implemented at all. Finally, NARA stated that it had reinterpreted some of the requirements in its original requirements document but had not updated it. The lack of a current set of requirements is a significant risk. If requirements are incomplete and out of date, the system could be completed without addressing all necessary requirements or with functionality that meets requirements that are no longer valid. Accordingly, we recommended that NARA ensure that ERA’s requirements are managed using a disciplined process that ensures that requirements are traceable throughout the project’s life cycle and are kept current. We have also made recommendations to NARA on its oversight of federal records management. Our recommendations were aimed at improving NARA’s insight into the state of federal records management as a basis for determining where its attention is most needed. In 1999, in reporting on the substantial challenge of managing and preserving electronic records in an era of rapidly changing technology, we noted that NARA did not have governmentwide data on the electronic records management capabilities and programs of all federal agencies. Accordingly, we recommended that NARA conduct a governmentwide survey of these programs and use the information as input to its efforts to re-engineer its business processes. However, instead of doing a governmentwide baseline assessment survey as we recommended, NARA planned to obtain information from a limited sample of agencies, stating that it would evaluate the need for such a survey later. In 2002, we reported that because NARA did not perform systematic inspections of agency records management, it did not have comprehensive information on implementation issues and areas where guidance needed strengthening. We noted that in 2000, NARA had suspended agency evaluations (inspections) because it considered that these reached only a few agencies, were often perceived negatively, and resulted in a list of records management problems that agencies then had to resolve on their own. We recommended that it develop a strategy for conducting systematic inspections of agency records management programs to (1) periodically assess agency progress in improving records management programs and (2) evaluate the efficacy of NARA’s governmentwide guidance. In response to our recommendations, NARA devised a strategy for a comprehensive approach to improving agency records management that included inspections and identification of risks and priorities. Subsequently, it also developed an implementation plan that included undertaking agency inspections based on a risk-based model, government studies, or media reports. 
In 2008, we reported that under its oversight strategy, NARA had performed or sponsored six records management studies in the previous 5 years, but it had not conducted any inspections since 2000 because it used inspections only to address cases of the highest risk, and no recent cases met its criteria. In addition, NARA’s reporting to the Congress and OMB did not consistently provide evaluations of responses by federal agencies to its recommendations, as required, or details on records management problems or recommended practices that were discovered as a result of inspections, studies, or targeted assistance projects. Accordingly, we recommended that NARA develop and implement an oversight approach that provides adequate assurance that agencies are following NARA guidance, including both regular assessments of agency records and records management programs and reporting on these assessments. NARA agreed with our recommendations and devised a strategy that included annual self-assessment surveys, inspections, and reporting, which it has now begun to implement. Most recently, we testified on the challenges of managing electronic records and commented on the low priority that records management has historically received within the federal government. Our past reports identified persistent weaknesses in federal records management, including a lack of policies and training. We also noted some of the challenges of managing electronic records: For example, electronic information is being created in volumes that pose a significant technical challenge to our ability to organize and make it accessible. Electronic records range in complexity from simple text files to highly complex formats with embedded computational formulas and dynamic content, and new formats continue to be created. Further, in a decentralized environment, it is difficult to ensure that records are properly identified and managed by end users on individual desktops (the “user challenge”). We concluded that technology alone cannot solve the problem without commitment from agencies, noting (among other things) that automation will not solve the problem of lack of priority given to records management, which is of long standing. The effectiveness of NARA’s oversight activity has been improved by recent initiatives. However, these initiatives have limitations, and NARA’s oversight alone cannot solve the persistent problems facing federal records management. NARA has begun to increase its efforts to assess governmentwide records management and its reporting of results. Although the Federal Records Act gives NARA responsibility for oversight activities (including inspections, surveys, and reporting), until recently, its performance of these activities was limited. It has now completed its first governmentwide records management self-assessment survey, resumed agency inspections after a long gap, and increased its reporting. These new efforts have provided NARA with a fuller picture of governmentwide records management, including an assessment by agency of the risk of unauthorized destruction of federal records; as a result, it is in a better position to determine where records management improvements are most needed, develop and update guidance, and hold agencies accountable by publishing assessments of their records management programs. 
NARA plans to use these oversight activities to develop baselines against which to assess future progress; however, it has not yet developed plans for adequately validating self-reported data or targeting inspections of agency records and records management programs to achieve governmentwide results. As NARA continues to build its oversight program, such activities will be important to provide assurance that reported changes from baseline scores reasonably reflect actual performance. NARA also provides oversight through its appraisal and scheduling work with agencies, in which it appraises agency records for their permanent value (among other things) and reviews and approves agency disposition schedules, in accordance with the Federal Records Act. Following an extended effort to get agencies to submit schedules for unscheduled systems containing electronic records, NARA has increased the number of schedules it has approved per year, but nevertheless has an increased backlog of schedules awaiting approval. NARA faces the risk that its success in getting agencies to schedule their systems may result in more schedules being submitted than it can handle in a timely manner. Unless NARA assesses this risk and develops appropriate mitigation plans, the backlog may increasingly hinder agencies’ records management. Although NARA activities alone cannot solve the persistent problems facing federal records management (agency heads are responsible for their agencies’ records and records management), building and improving on NARA’s oversight activities could help both NARA and agencies more effectively focus resources on areas needing improvement. Oversight addresses whether organizations are carrying out their responsibilities and serves to detect other shortcomings. Our reports emphasize the importance of effective oversight of government operations by individual agency management, by agencies having governmentwide oversight responsibilities, and by the Congress. Various functions and activities may be part of oversight, including monitoring, evaluating, and reporting on the performance of organizations and their management and holding them accountable for results. The Federal Records Act gave NARA responsibility for oversight of agency records management programs by, among other functions, making it responsible for conducting inspections or surveys of agencies’ records and records management programs and practices; conducting records management studies; and reporting the results of these activities to the Congress and OMB. Consequently, as our previous work pointed out, it is important for NARA to have a governmentwide picture of the state of federal records management programs to help it to hold agencies accountable, as well as to determine areas where its guidance needs strengthening. NARA has recently undertaken efforts to gather governmentwide information to help it assess the status of federal records management and risks of unauthorized disposition (including destruction) of records. In September 2009, NARA sent the first of a promised series of annual mandatory records management self-assessment surveys to 242 federal records officers from cabinet-level agencies, agency components, and independent agencies; the survey’s goal was to determine how effectively agencies were meeting statutory and regulatory requirements for records management. 
Agencies were asked 34 questions designed to obtain basic information about agencies’ records management programs in five areas: program management, records disposition, vital records, electronic records, and e-mail records. NARA used the data collected to categorize agencies according to the level of risk to records associated with the state of agencies’ records management programs. According to NARA, ineffective records management programs are the most significant indicators of risk of unauthorized disposition of records. NARA’s report on the self-assessment survey, released in April 2010, described strengths and weaknesses in agencies’ records management programs. It concluded that almost 80 percent of agencies were at moderate or high risk of improper destruction of records; that is, the risk that permanent records will be lost or destroyed before they can be transferred to NARA for archiving or that other records will be lost while they are still needed for government operations or legal obligations. In particular, of the 220 (91 percent) federal agencies and components that responded, 36 percent were at high risk in their records management programs and 43 percent were at moderate risk. Overall, only 21 percent of federal agencies and components responding were at low risk. For electronic records, 39 percent were at high risk, and for e-mail, 48 percent were at high risk. The Archivist referred to these results as “alarming” and “worrisome”; in a subsequent oversight hearing, the director of NARA’s Modern Records Program testified that the findings were “troubling” and “unacceptable.” NARA has also obtained governmentwide information on one facet of records management—electronic records scheduling—through its efforts to ensure that electronic systems holding records are scheduled. NARA has periodically requested agencies to provide summary reports documenting their progress. By September 30, 2009, NARA had received electronic records scheduling reports from 160 of the 240 federal agencies or components for which it had been tracking electronic records scheduling, a 67 percent response rate. In June 2010, it summarized the results of these reports. NARA determined that 25 percent of agencies were in the moderate- to high-risk category because they had scheduled less than 90 percent of their electronic records. Forty-two percent were rated low risk, and 33 percent did not respond to NARA’s request for information. These information-gathering efforts are important means for helping to assess federal records management and risks to records. The 2009 self-assessment in particular provided NARA with useful oversight information, including a broad picture of governmentwide records management that was not previously available. The self-assessment survey adds to NARA’s ability to assess the risk of unauthorized destruction of records by increasing its broad knowledge of the status of agency records management programs; previously, although NARA’s regular work with agencies on scheduling and disposition of records provided it insight into agencies’ activities at the end of the records life cycle, NARA officials agreed that its insight into records management at earlier stages—that is, creation, maintenance, and use—had been more limited. In addition, NARA’s work conducting the self-assessment survey raised issues for further study, such as the role of the departmental records officer and the appropriate level of records management staffing.
It also provided NARA with important operational experience to apply to improving future surveys. According to the survey report, the results of the first few self-assessments will provide a baseline for records management in the federal government and, along with findings from agency inspections and records management studies, will allow it to assess records management more thoroughly within individual agencies and throughout the federal government. NARA has committed to conducting the self-assessment survey annually, and it conducted a second survey in May and June 2010 (at the time of our review, NARA had not yet published the results of this second survey). In addition, NARA has increased its reporting on the results of its oversight activities. In the past, NARA had been reluctant to report negative news about individual agencies’ records management, which we attributed to an organizational preference for using persuasion and cooperation when working with agencies. We noted that this reduced its ability to hold agencies accountable. In contrast, this year NARA reported fully on the results (both negative and positive) of its self-assessment survey and its electronic records systems scheduling project. The current reports not only provide summary results and analysis, but also list each agency or component’s results individually. Also, besides sending the self-assessment report to the Congress and posting it on its Web site, NARA made efforts to publicize the results, including announcements through a press release, on Twitter, and on two blogs: those of the Archivist and of NARA’s National Records Management Program. Further, the evaluation sections in NARA’s performance and accountability reports for fiscal years 2008 and 2009 were more extensive than in 2007 and included sections on challenges and risks. These sections discuss specific agencies where NARA identified significant records management program risks and cases of alleged unauthorized disposition of federal records. By widely reporting results by agency, NARA has taken an important step toward improving the visibility of records management to senior agency managers, the Congress, and the public, potentially raising the priority that agencies assign to records management. Although instituting and reporting fully on annual self-assessment surveys is a positive step, the initial survey had limitations. In its report, NARA identified issues that it considered to affect the reliability and usefulness of the data in its first self-assessment survey. Specifically: Not all regulated agencies were covered. According to NARA, its list of agency records contacts, to whom it sent the surveys, was not always accurate: the distribution list was incomplete, and some people included on the list were not responsible for their agencies’ records management programs. In addition, NARA reported that some agencies did not return surveys because they did not have an assigned records management officer responsible for completing the task, they believed they were not required to respond, or for other reasons, including inadvertent oversight. Another issue involves the roles of departmental and component-level records officers. According to NARA officials, scoring was affected by issues related to NARA’s level of knowledge of the responsibilities of each department-level records officer.
According to NARA, agencies of comparable size and complexity might have one records officer answering on behalf of the organization, or several component-level records officers answering for each component. Some departments do not have a departmental records officer. Some department-level records officers responded for the entire organization, but, in at least one case, the department-level contact did not respond and deferred to the components. Some questions were unclear or inapplicable. Although NARA ran a focus group and pilot test to obtain feedback on the survey questions, it reported that responses and comments indicated that the wording of some questions was unclear. For instance, some respondents answered “no” when their comments indicated they should have answered “yes,” and vice versa. In addition, NARA found that some questions on the survey were not applicable to very small organizations (fewer than 100 FTEs), but a “no” answer reduced their score, so that these organizations were penalized inappropriately in the scoring. NARA has taken steps to reduce the effect of these issues on the second survey. According to NARA officials, they gathered additional information on records contacts, including the areas covered by departmental records officers, to ensure that the survey went to the appropriate contacts and that NARA understood those contacts’ areas of responsibility. In addition, the second survey included numerous definitions and revisions that, according to officials, were intended to clarify the survey. Another important limitation of the surveys as assessment tools is their reliance on unvalidated self-reported data. In the first survey, NARA did little validation of agencies’ self-reporting. According to NARA, its appraisal archivists reviewed the information provided by the agencies, and their comments were incorporated into the analysis. However, according to the survey report, NARA otherwise “took agencies at their word and did not attempt to verify submissions.” For the second self-assessment survey (conducted in May–June 2010), NARA asked each respondent to supply one item of documentation—the records management directive or directives issued by the department—as validation. According to officials, they expected to use these directives not only to validate certain responses, but also to analyze features such as the age of the directives and the extent to which they covered the requirements of the C.F.R. Officials also told us they are considering asking for additional documentation in future years, such as training curricula and internal evaluation reports. An official said that besides examining the directives, they validated answers for 5 of the 55 questions by comparing them with records schedule information that they track in-house. As NARA continues to perform self-assessment surveys, it will be important for it to be assured that improvement (or deterioration) in governmentwide and agency scores reasonably reflects actual performance. Accordingly, validation of the results will be important both for the broad assessment of federal records management and for the assessment of individual agencies and programs. However, NARA has not yet developed plans to use other means of validating responses, such as doing followup interviews with respondents, requesting additional supporting documents, or including questions in the survey on how response data were collected.
Without additional validation, confidence in the validity of the survey results may be reduced, and they may be less effective for their intended purposes. According to NARA’s strategic records management plan, the agency is to conduct records management studies to focus on cross-government issues, to identify and analyze best practices, and to develop governmentwide recommendations and guidance. For example, NARA planned to undertake such studies when it believed an agency or agencies in a specific line of business were using records management practices that could benefit the rest of that line of business or the federal government as a whole. Since we last reported in 2008, NARA has conducted three records management studies (see table 3). In accordance with its plans, all of these studies are focused on records management issues with wide application. In particular, flexible scheduling (the first study in table 3) is a relatively recent approach that allows so-called “big bucket” or large aggregation schedules (that is, a single schedule would cover all records relating to a work process, group of work processes, or a broad program area to which the same retention time would be applied). According to officials, NARA used the results of its study to update its original 2005 bulletin on this topic; the update was released in May 2010. In contrast, the second study in table 3 (on recordkeeping technologies) did not feed into guidance but instead provided helpful information to agencies: in particular, on other agencies’ experience with implementing records management software and, in one case, with e-mail archiving software being implemented at the Office of the Secretary of Defense. The third study (on Web 2.0 use) was released in September 2010; according to NARA, the study enabled the agency to identify recommendations for future actions (such as clarifying the definition of a federal record and integrating records management into agency social media policy). NARA recently shifted its efforts from performing studies to conducting inspections. From 2005 to 2008, NARA set objectives for and performed one or two records management studies in each fiscal year. For 2010 and 2011, NARA’s objectives did not include performing studies; instead, it set an objective to perform one inspection in each year. According to NARA officials, at the time the 2011 plan was developed, they had not determined whether to conduct a study in that fiscal year or do additional inspections; since then, they have decided to do additional inspections. NARA’s plan to perform records management inspections in fiscal years 2010 and 2011 reflects a resumption of NARA’s agency inspection program in response to our 2008 recommendation. NARA describes its new program of inspections as taking a “targeted” approach, focusing on particular aspects of records management at a given agency or agencies. (In contrast, its previous approach involved more comprehensive reviews of agency records programs.) To set up the inspection program, NARA took a number of steps: it revised its requirements, developed criteria for doing inspections, chose its first inspection targets, and has begun performing inspections. NARA added a new section to its regulations that defines the conditions under which it may undertake an inspection and how the inspection will be initiated and carried out with the agency.
Under the final rule, published on October 2, 2009, “inspection” is defined as a formal review and report by NARA of an agency’s recordkeeping processes that focuses on significant records management problems affecting records at risk that meet one or more of the following criteria: (1) they have a direct and high impact on legal rights or government accountability; (2) they are the subject of high-profile litigation, congressional attention, or widespread media coverage; (3) they have high research potential; or (4) they are permanent records with a large volume, regardless of format. For fiscal year 2010, NARA chose to perform two inspections, both at the Department of Defense: an inspection at the Office of the Secretary of Defense (OSD) looking at three aspects of records management and one at the National Geospatial-Intelligence Agency (NGA). According to a NARA official, the inspection at NGA, planned to begin in 2010, will not be completed until 2011. Table 4 describes these inspections. At the time of our engagement, NARA had not issued planned reports on the results of the OSD inspection, and, according to an agency official, planning for the inspection at NGA was still in an early stage. Resuming inspections is an important step, because inspections provide more detailed information on actual performance and particular records issues; however, the inspection program currently planned is limited. NARA’s plan commits to conducting one inspection per year for fiscal years 2010 and 2011, and officials told us they were considering one or two inspections per year. According to one NARA official, NARA is unwilling to commit to more inspections because it has other priorities for the appraisal archivist staff, which also performs appraisals and scheduling, conducts studies, and works on the self-assessment survey and other special projects. (According to a NARA records management official, of the 100 staff of the National Records Management Program, approximately 50 are appraisal archivists, who work with about 245 agencies or components.) NARA’s planning for its inspection program is at a high level and is still under development. According to its 2009 methodology for self-assessments and inspections, NARA will develop a list of agency inspection targets based partially on the self-assessment results, develop inspection plans, and conduct inspections. The plan identifies 20 conditions that the records management staff is to consider to help determine whether an agency may be an inspection candidate. (Examples include nonresponses to NARA’s surveys, out-of-date agency records management manuals, requests from an agency head, and unresolved unauthorized destructions.) These are conditions that management is to assess in making inspection decisions—a checklist of risk factors to consider. According to agency officials, they expect to flesh out the inspection methodology as they gain experience doing their first targeted inspections in fiscal years 2010 and 2011. However, the 2009 methodology does not define how NARA will systematically target and leverage a limited number of inspections to help achieve governmentwide results (one of its stated goals). For example, the agency has not yet described how it might distribute inspections among agencies or what topics it would like to cover over a period of years.
Similarly, although NARA plans to perform multi-agency inspections, it has not yet developed plans to do so by, for example, defining key practices and determining how to inspect these at multiple sites. NARA’s plan states that self-assessment and inspection activities will help NARA monitor how federal agencies manage their records, but it does not address how one or two inspections a year would provide effective monitoring or how it would best organize a limited number of inspections to accomplish this goal. NARA activities alone cannot solve the persistent problems facing federal records management, since agency heads are responsible for their agencies’ records and records management, as well as for allocating resources to these, and as we have pointed out in the past, records management has historically received low priority. Nonetheless, shedding light on the status of governmentwide records management can help focus both agencies and NARA on areas needing improvement and the need to devote resources to these areas. NARA’s renewed inspection program has the potential not only to help motivate agencies to improve records management, but also to contribute to the systematic collection of comprehensive information for assessing progress. However, NARA’s plans do not explain how it will systematically target and leverage a limited number of inspections to help achieve its goals. Until it has a more fully developed inspection methodology, NARA risks reducing the potential effectiveness of inspections for improving records management. NARA oversees retention and disposition of all federal records through records appraisal and approval of agency records schedules, processes that are crucial both to the management of records in agency custody and to the eventual preservation of permanent records in NARA’s custody. NARA’s authority to approve disposition schedules provides it with the ability to ensure that such schedules conform to its regulations. NARA has identified unscheduled records as an important indicator of the risk of unauthorized destruction of records. NARA has been making efforts to get agencies to schedule unscheduled records in computer systems. This project, which it refers to as the Electronic Records Project, was initiated in response to the requirements of the E-Government Act. In December 2005 NARA issued a bulletin requiring agencies by September 30, 2009, to submit records schedules to NARA for all electronic systems of records existing as of December 17, 2005, and it has periodically requested agencies to provide summary reports documenting their progress. After the 2009 deadline expired, NARA issued, in February 2010, an additional bulletin reminding agencies of their continuing responsibility to schedule all their electronic records series and systems and requiring them to report semiannually on the status of their electronic records scheduling activities. In fiscal year 2009, the number of schedules received jumped from 549 the previous year to 974, which NARA attributes to its efforts to hold agencies to the September 2009 deadline. Although this result is encouraging, it also led to an increase in NARA’s backlog of schedules to be processed and approved. The number of schedules submitted increased by more than 400, while the number closed out increased by only about 100, from 402 to 501; as a result, the approval backlog increased from 575 to 1048. 
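To illustrate the arithmetic behind these figures, and the rough clearance-time estimate discussed below, the following minimal sketch (in Python) uses only the numbers cited above; the variable names and the simplifying assumption that approvals continue at the fiscal year 2009 rate with no new submissions are illustrative, not NARA’s.

    # Illustrative arithmetic only: the figures are those cited above; the
    # variable names and the steady-state assumption are not NARA's.
    received_fy2008, received_fy2009 = 549, 974   # schedules submitted to NARA
    closed_fy2008, closed_fy2009 = 402, 501       # schedules closed out (approved)
    backlog_fy2008, backlog_fy2009 = 575, 1048    # schedules awaiting approval at year end

    # Submissions grew by more than 400 while closures grew by only about 100.
    submission_growth = received_fy2009 - received_fy2008   # 425
    closure_growth = closed_fy2009 - closed_fy2008           # 99

    # The year-end backlog figures are internally consistent: 575 + 974 - 501 = 1048.
    assert backlog_fy2008 + received_fy2009 - closed_fy2009 == backlog_fy2009

    # If NARA keeps closing schedules at the fiscal year 2009 rate and no new
    # schedules arrive, clearing the existing backlog would take roughly 2 years.
    years_to_clear = backlog_fy2009 / closed_fy2009   # about 2.1
    print(submission_growth, closure_growth, round(years_to_clear, 1))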
NARA’s processing capacity reached a high of 501 schedules closed out in 2009, so the current backlog represents about 2 years’ work for appraisal staff, assuming that they go on approving schedules at the current rate and that no other schedules are submitted in the meantime. NARA’s schedule approval process occurs in four steps (see table 5) involving professional analysis and judgment, as well as input from the public. NARA estimates that it generally takes approximately 6 months or less to process simple schedules for records that are clearly temporary and do not have legal rights implications, with almost 4 months of this time period taken by the public comment process. It may take up to a year for NARA to process large and complex schedules requiring closer review or eliciting critical public comments. According to NARA, the median time for it to approve a schedule has historically been about 300 calendar days. NARA is trying to shorten its approval process and increase its capacity to process schedules despite limited resources. It has about 100 people working on records management, about half of whom work on appraisal and scheduling. The director of the Lifecycle Management Division (within Modern Records Programs) said the records management program aimed to increase approvals of records schedules covering electronic records by about 10 percent a year and that staff are working on streamlining the scheduling process. For example, the program has reduced the time allowed for public comment by 15 days, and staff are using e-mail rather than letters to communicate and send responses to requests from Federal Register requesters. The official also described future projects, including a project to improve the scheduling and appraisal workflow and a large project to revise the General Records Schedules, which should have the effect of reducing the number of schedules that agencies will have to submit, thereby improving NARA’s ability to keep up with other submitted schedules. Although these efforts may help to streamline the process, the number of records and systems that remain to be scheduled is likely very large. The Director of Lifecycle Management said that NARA is currently unable to estimate “the universe of electronic records,” although officials are confident that it is a “big number.” For example, in 2006, NARA requested information from agencies on outstanding systems to be scheduled and received answers from 54 agencies, for a total of about 8,500 systems. Other indications also support the conclusion that many records remain unscheduled. For example, NARA’s electronic records summary report indicated that many electronic records were still unscheduled. As of September 30, 2009, based on 240 federal agencies and components: For 14 percent of agencies, schedules were submitted for 59 percent or fewer of their e-records (these were characterized as high risk). For 11 percent of agencies, schedules were submitted for 60 to 89 percent of their e-records (these were characterized as moderate risk). For 42 percent of agencies, schedules were submitted for 90 percent or more of their e-records (these were characterized as low risk). The remaining 33 percent of agencies did not submit reports. Further, NARA’s agency self-assessment indicated that 27 percent of agencies responding had scheduled fewer than half of their electronic systems. The survey did not ask for the numbers of systems remaining unscheduled; however, NARA’s February 2010 bulletin requires agencies to list unscheduled systems.
As agencies comply with this requirement, such lists should provide a basis for a better estimate. NARA thus faces the risk that if its efforts to get agencies to submit schedules for outstanding agency systems continue to be successful, it will be unprepared to deal with the workload. The jump in its backlog associated with the 2009 deadline for scheduling electronic systems suggests that this is a real concern. NARA has acknowledged that, with the volume and complexity of electronic records increasing each year, keeping pace with the requirements to schedule all existing electronic records is a continuing challenge for both NARA and agencies. However, it has not assessed the risk that it may be unable to keep up with schedules submitted, nor has it developed plans to mitigate that risk. Unless it does so, there is an increasing risk that the backlog will hinder agencies’ records management—for example, agencies may have to retain records that they would otherwise be authorized to dispose of, and they may be delayed in transferring permanent records to NARA. NARA faces challenges in preserving permanent records in its possession—both paper and electronic—largely because of the sheer volume of federal records, the finite resources available to deal with them, and the technological challenges posed by electronic records. NARA has a large and persistent backlog of records in paper and other media needing preservation actions. It has developed priorities for preserving physical records based on factors such as their demand and condition, but it does not foresee being able to accomplish those priorities. As a result, large numbers of physical records requiring preservation remain at risk. In addition, as we have previously reported, its development of the Electronic Records Archives is still ongoing, including the development of a preservation framework for electronic records. Until ERA and its electronic preservation capabilities are fully implemented, there is reduced assurance that NARA will be able to preserve all electronic records. According to NARA, preservation encompasses the activities that prolong the usable life of archival records. Preservation activities are designed to minimize the physical and chemical deterioration of records and to prevent the loss of informational content. For physical records, an important part of preservation is holdings maintenance, defined as those preservation actions that are designed to prolong the useful life of records and to reduce or defer the need for laboratory treatment by improving the physical storage environment. These actions include replacing acidic storage materials such as file folders with materials of known quality that meet NARA specifications, improving shelving practices, removing damaging fasteners, and reproducing unstable materials such as Thermofax copies onto stable replacement materials. In addition, preservation may involve removing fragile records from use by capturing the information in a new format. NARA may duplicate motion picture film, still photos, microfilm, and sound and video recordings; reformat audio and video recordings that are in formats that cannot be used on currently available playback equipment; and generate digital images of records.
NARA’s approach to preserving physical records and media (such as paper records, videotapes, microfilm, maps, charts, and artifacts) is to examine holdings to assess their preservation needs; provide storage conditions that retard deterioration; and treat, duplicate, or reformat records at high risk for loss or deterioration (for example, film and microfilm, audio recordings that require obsolete equipment, videos, brittle and damaged paper records, and motion pictures). Some of the factors influencing the rate at which NARA performs maintenance and preservation treatments, according to the agency, include large accessions of at-risk records, increased demand for the digitization of records, and high public interest. The number of physical records requiring some degree of preservation activity has led to a backlog in preservation actions. In 2009, NARA reported that 65 percent of its archival holdings were in need of some preservation action. NARA defined “in need of preservation action” to mean that there was an imminent threat to the record and that the information it contained could not be accessed because of its condition. According to NARA, it was conservative in this assessment, focusing on whether records could be safely served to researchers in their existing state and housing. For example, even though poor quality, chemically unstable boxes are not ideal for archival preservation, they would not be considered to require preservation action (that is, replacement); however, a box that did not adequately support the records would, since records that are not well supported are likely to be damaged. NARA has set a strategic goal of reducing the backlog to 50 percent by 2016. However, there is little assurance the goal will be met. The 65 percent figure has remained almost constant since NARA established a baseline. In fiscal year 2009, NARA reported treating nearly 116,000 cubic feet of at-risk archival records. Nonetheless, as additional records were accessioned, the backlog percentage remained essentially constant, and the actual amount of holdings requiring preservation action grew from about 2.4 million cubic feet in 2008 to about 2.6 million cubic feet in 2009. Further, in its 2008 preservation plan for nontextual holdings (maps, photos, audio, and video), NARA noted that 54,000 cubic feet of these records needed preservation actions and that even addressing only the highest priority items (about 25 percent) exceeded its staff resource capabilities. Over the past year, NARA increased the staff performing holdings maintenance (preservation activities such as rehousing at-risk materials) from about 8 to 18 dedicated staff (according to officials, some additional staff also devote part of their time to holdings maintenance). Nonetheless, there is little assurance that NARA will be able to meet its goal of reducing the backlog to 50 percent by 2016, and in any case, large numbers of permanent records will remain at risk for the foreseeable future. Preservation of electronic records presents significant challenges because these records are stored in specific formats and cannot be read without software and hardware—sometimes the specific types of hardware and software on which they were created. This hardware and software can vary not only by type but by generation of technology: the mainframe, the personal computer, and the Internet. Each generation of technology has brought in new systems and capabilities, and over time, hardware, software, storage media, and file formats become obsolete.
NARA is still developing the means to preserve permanent electronic records—the Electronic Records Archives. ERA achieved initial operational capability of the first two phases of the five-phase development, but the preservation component of the project is still being developed. The former CIO told us that in her view, the preservation module would be the most difficult to implement of the system’s functional areas. This module is to enable secure and reliable storage of files in formats in which they were received, as well as creating backup copies for off-site storage. Part of the preservation challenge is developing ways to ensure access to the most important formats, which NARA intends to do through various means, such as encouraging the use of sustainable formats and using viewers and transformation. According to the conceptual framework for ERA preservation, NARA will encourage creating entities to transfer records to NARA using sustainable formats—that is, formats that are relatively resistant to obsolescence and can reasonably be expected to be usable for some period of time. However, the framework recognizes that in many cases, this will not happen, and so ERA will need to ingest a broad range of formats. After files are ingested, the intent is to transform records as necessary into formats that are sustainable. Since transformation is the most expensive strategy for providing access to records, NARA plans to consider transforming electronic records only if the data file format is at risk of obsolescence, users require an enhanced level of access, or both. NARA expects to be able to preserve “all the bits” of electronic files that it accessions, but it will not be able to guarantee that all formats will be immediately or permanently accessible. Further, we recently reported that NARA planned to begin development of ERA’s preservation framework in 2010 and complete it in 2011, but that the plan did not contain specific dates for completion or identify the associated capabilities that are to be delivered. As a result, we expressed doubt that the completed ERA system would be delivered by 2012 with the originally envisioned capabilities. Most recently, ERA has been listed by OMB as a high-priority project that has the potential for faster, smarter implementation. Properly implementing our outstanding recommendations related to strengthening requirements management and earned value management could help the ERA project meet its performance goals within reasonable funding and time constraints. Until ERA and its preservation module are complete, it will remain uncertain whether NARA will be able to effectively preserve all permanent electronic records in such a way that the information is accessible. NARA’s policies and procedures for key aspects of governance, human capital, and collaboration are generally aligned with its strategic planning, but selected areas have gaps. With regard to governance policies and procedures, NARA has defined and delegated areas of authority and responsibility that are generally aligned with its strategic plan, but it is not managing risk at the enterprise level. In addition, it has developed a strategic human capital plan that is consistent with our human capital strategic framework, but its implementation of the plan has been delayed, so that the agency is not yet managing human capital strategically. 
To its credit, NARA is taking advantage of numerous collaboration opportunities, which are generally aligned with the goals and strategies in its strategic plan. If NARA addresses the identified gaps in governance and human capital, it will be better positioned to achieve its goals. We have previously described governance as the process of providing leadership, direction, and accountability in fulfilling an organization’s mission, meeting objectives, and establishing clear lines of responsibility for results. Further, our prior work has established that enterprisewide risk assessment and management is a key part of governance. Strategic planning and management can help agencies effectively manage resources and fulfill their missions, and, since the mid-1990s, we have reported on leading practices for effective strategic planning and management, including establishing long-term goals, identifying and developing strategies to address key management challenges, and aligning resources and activities to agency goals. NARA has a strategic plan and a process for aligning its organization and lines of responsibility to support its goals. The agency’s recently updated strategic plan governs its activities until 2016 and details six strategic goals (see table 6) and 46 specific strategies it will use to achieve these goals. Specific strategies support each of these goals. These strategies include, for example, “we will continue to make the business case at senior levels throughout the Federal Government that records and information are important Government assets and that records management is an important tool,” “we will ensure that all of our holdings are in appropriate space,” “we will identify permanently valuable electronic records wherever they are, capture them, and make them available in usable form as quickly as practical,” and “we will identify and implement the cultural changes that we need to better serve our customers in a changing environment.” NARA has also established policies and procedures that define its organization and determine lines of authority and areas of responsibility. Responsibilities at the agency are approved by the Archivist through a defined process and are codified, along with the change process, in NARA Directive 101: NARA Organization and Delegation of Authority. For tasks that cut across organizational structures, NARA has procedures for setting up committees, task forces, and working groups, which are governed by charters establishing their goals and membership. It has a directive governing creation of these charters. Generally, NARA’s organization and lines of responsibility were aligned with its strategic plan. Of 21 specific strategies that we examined, 17 were under clearly documented lines of authority and were assigned to appropriate offices by the agency’s policies, with 4 strategies lacking clearly documented lines of responsibility (see table 7). According to NARA officials, they did not consider that some of these strategies required specific assignments of responsibility, either because they were global responsibilities or because they were good business practices. However, clear statements of responsibility are important for implementing these strategies. We have previously reported that a dedicated implementation team assisted by supporting teams, such as functional or crosscutting teams, is a key practice in implementing cultural transformations. 
Assigning responsibility for these strategies to appropriate offices would help to provide assurance that they are appropriately carried out. Enterprisewide risks are those that would threaten an organization’s ability to carry out its mission, such as an act of terrorism, loss or compromise of critical information (such as classified or personally identifiable information), or a natural disaster. Risk management is the continuous process of assessing such risks, reducing the potential that an adverse event will occur, and putting steps in place to deal with any event that does occur. Without an effective program of risk assessment and internal control, management may have less assurance that it is using organizational resources effectively and efficiently, or that agency assets and operations are protected. As our previous work has shown, and as called for by the Standards for Internal Control in the Federal Government, agencies should continuously and systematically monitor their internal and external environments to anticipate future challenges and avoid potential crises. GAO has developed a framework for risk management (see figure 3) that identifies five major phases: (1) setting strategic goals and objectives, and determining constraints; (2) assessing the risks; (3) evaluating alternatives for addressing these risks; (4) selecting the appropriate alternatives; and (5) implementing the alternatives and monitoring the progress made and results achieved. Our work has shown that decisions for enterprisewide risk management should be made in the context of an organization’s strategic plan, and organizations should have risk planning documents that address risk-related issues that are central to the organization’s mission. According to NARA program officials, NARA currently performs risk management for the ERA project, its major system investment. The agency manages ERA’s risks using an agency-level risk review board, a program-level risk review board, and a technical risk review team. In addition, officials stated that the ERA program office produces monthly reports that include top identified risks and specify associated mitigation strategies. Risk status is communicated to senior NARA management and OMB on a monthly basis and to the Congress on a quarterly basis. The project uses an automated tool to track and manage risk. However, although NARA has identified important risks facing the agency, it currently has no dedicated active function to manage these risks at the enterprise level. Some risks of which NARA is aware include technological change causing record formats to become obsolete and unreadable, failure of the ERA project, and effects of climate change or natural disasters (such as on continuity of operations, preservation requirements, locations of facilities, and energy use). According to NARA officials, the organization had a risk review board, which existed for about 2 years, but it became inactive. This occurred because the board’s discussion of risk tended to focus either on project and program risks or on highly generic risks. NARA officials told us that the agency has also relied on a work group of senior executives, the Lifecycle Guidance Team, to address enterprisewide risks. However, as currently established, the Lifecycle Guidance Team does not explicitly focus on enterprise risk management. This team, chaired by the Deputy Archivist, is made up of members of NARA’s senior staff.
However, although the team is at an appropriate level of seniority to address enterprise risk management, this function is not part of its charter. According to the charter, the group focuses on ensuring that NARA’s records lifecycle initiatives are effectively coordinated, integrated, and implemented agencywide, and it provides leadership and oversight to initiatives to advance the agency’s mission and strategic goals and improve records, information, and knowledge management governmentwide. Among the initiatives it is reviewing or has reviewed are systems in operation or in development, for which project risks have been discussed. However, these risks are not enterprise risks. According to NARA officials, the development of a new process and system for internal control has recently been proposed. The process and system, to be based on a similar system at the Library of Congress, are intended to automate internal controls and would include assessment and categorization of risk at the functional level. According to the officials, such an implementation would benefit NARA’s risk management and internal control capabilities. This proposal has been reviewed by senior management, but it is still in the first stages of planning and does not yet include a clear picture of which divisions will be responsible for dealing with strategic risks. At the time of our report, agency officials also acknowledged that NARA had neither established a time frame for implementation nor set an estimated completion date. Unless NARA begins to manage its enterprise risks on a continuous basis, there is a greater likelihood that serious threats to NARA may not be addressed. The success of any organization depends on effectively leveraging people, processes, and tools to achieve defined outcomes and results. For people to be effectively leveraged, they must be treated as strategic assets. An agency’s strategic human capital plan establishes an agencywide vision that guides workforce planning and investment activities. As our previous work has shown, a strategic approach to human capital management enables an organization to be aware of and prepared for its current and future human capital needs, such as workforce size, knowledge, skills, and training. Sound human capital strategic planning provides the essential context for making sensible, fact-based choices about designing, implementing, and evaluating human capital approaches. It is critical to ensuring that agencies have the talent and skill mix they need to address their current and emerging human capital challenges. Our research shows that to be effective, a strategic approach should use data-driven methods to (1) assess the knowledge and skills needed; (2) inventory existing staff knowledge and skills; (3) forecast the knowledge and skills needed over time; (4) analyze the gaps in capabilities between the existing staff and future workforce needs, including consideration of evolving program and succession needs caused by turnover and retirement; and (5) formulate strategies for filling expected gaps, including training and additional hiring. (Figure 4 provides an overview of this process.) In August 2009, NARA published its first Strategic Human Capital Plan, which covers fiscal years 2009 through 2014.
Linked to the agency’s overall strategic plan, this strategic plan discusses strategies for achieving each of its five human capital goals: strategic alignment, leadership and knowledge management, results-oriented performance culture, talent management, and accountability. The strategic plan includes a set of improvements that would give NARA the capability to manage its human capital strategically, as called for in our human capital framework. Specifically, section 3, “Workforce Planning,” includes all five of the elements of our strategic human capital framework. The plan also includes related goals, such as being able to hire people faster by automating manual and paper-based processes; NARA’s Director of Human Resources cited this as one of the agency’s highest priorities. The agency has taken some initial steps to implement its plan. For example, it has completed a pilot of a competency development approach in which it modeled the competencies required for all positions in the Modern Records Program and in the Information Security Oversight Office; it is currently assessing the accuracy and effectiveness of the competency models developed under the pilot. Modeling competencies—determining the skills needed for specific positions—is a key tool for determining what skills NARA will need to meet organizational goals. Once competency modeling is completed, NARA will be in a position to forecast future workforce skills needs. It is also currently finalizing guidance for workforce planning, another part of the process of forecasting future workforce needs. Finally, it has completed a pilot for an online training needs assessment tool. The results are currently being analyzed, and a report is tentatively scheduled for September. However, NARA is falling behind in the implementation of its human capital management milestones. For fiscal year 2010, NARA set 72 milestones for implementing the strategic plan. As of the end of the third quarter, 23 milestones had been met, 14 had been missed, 3 future milestones had been pushed further back, and 8 had other weaknesses, such as lacking a specific date or status update. Of the remaining 24, some are not due yet, and some were periodic actions with no single due date (see figure 5). An example of a missed milestone specifically related to our strategic human capital framework is the development of an agencywide workforce plan that includes a hiring projections worksheet. In addition, NARA has not completed an inventory of existing workforce skills. Without a complete skills needs forecast and a current skills inventory, it cannot perform a gap analysis and consequently cannot plan future human capital initiatives, three of the steps of our strategic human capital framework. The agency’s human capital officials stated that milestones had been missed or pushed back because they had to address other priorities, including realigning staff to comply with the Office of Personnel Management’s (OPM) 2009 Hiring Reform and the May 2010 Presidential Hiring Reform Initiatives. To its credit, in responding to these initiatives, NARA is making progress on addressing its hiring process. Hiring is an important part of strategic human capital management, since it is a critical tool for addressing skills gaps. According to NARA, as of September 2009, it had an overall average time to fill a position of 163 to 213 days. In contrast, the model set up by OPM called for an 80-day hiring process.
Responding to OPM’s 2009 Hiring Reform, NARA’s Hiring Process Action Plan (submitted to OPM in December 2009) identified primary barriers to a timely and effective hiring process. According to the plan, an important barrier was the agency’s paper-based application process, including manual routing of forms. During NARA’s hiring process, its staff manually printed, reviewed, and annotated hundreds of applications at several stages in order to verify and screen applications. The agency also experienced a 45 to 90 day backlog of hiring actions, which NARA officials attributed largely to reliance on paper-based hiring systems. NARA determined that without an automated staffing system to screen applications, addressing the other barriers identified in its action plan would have only a marginal impact on its overall time to fill a position. To address this barrier, NARA piloted and implemented automated hiring software, called USA Staffing, provided by OPM. This Web-based system automates the recruitment, assessment, referral, and notification processes, reducing the degree of human intervention required in the hiring process. According to NARA, the agency implemented the USA Staffing tool in May 2010, and as of July 2010, it reported reducing the average time-to-hire to 126.5 days. Agency human capital officials told us they believed further reductions were likely as the staff becomes more familiar with the new tool. NARA continues to work on its hiring process in response to the May 2010 Presidential Hiring Reform Initiative, which set a deadline of November 1, 2010, for federal agencies to adopt certain streamlined hiring procedures, including eliminating knowledge, skills, and abilities (KSA) essays, allowing applicants to apply using resumes and cover letters, and involving the managers and supervisors responsible for hiring in the complete hiring process. Agency human capital officials believe that these improvements will result in a more efficient hiring process. However, the reforms under the Presidential Initiative are still ongoing. Further, because the adoption of the new staffing system is still recent, it is not possible to fully evaluate its impact. In addition, for the hiring process to be fully effective, it is important that NARA implement its strategic human capital plan and particularly that it complete its skills needs analysis and gap analysis. Such analyses are crucial for effectively determining hiring needs; they are also important for helping determine how to allocate personnel to mission areas in which NARA has identified resource-related backlogs, such as records management and preservation. Until NARA completes these analyses, there is no assurance that the agency will be able to manage its human capital strategically, ensuring that it has staff with the right competencies to perform its mission now and in the future. As a small agency with a broad mission, NARA has stressed the importance of collaborative efforts in achieving the organization’s goals. In his preface to the agency’s strategic plan, the Archivist refers to the importance of involvement with the archival and records management communities as well as other stakeholders, stating that partnerships at all levels of the organization will add depth and richness to NARA’s programs and initiatives. The strategic plan itself emphasizes collaboration and partnering. It spells out six strategic goals, for each of which the plan describes a number of specific strategies. 
For five of the six goals in the strategic plan, either one or two of the specific strategies are directly related to collaboration and partnership; table 8 shows the specific strategies associated with each strategic goal. NARA has established or begun to establish collaborative efforts that are generally aligned with all these goals and strategies. For example, for the third strategic goal, which focuses specifically on electronic records, the specific strategy related to collaboration is to partner with agencies, research institutions, and private industry to develop, implement, manage, and promote NARA’s electronic records program. According to its workplan, multiple efforts in this area are planned to be managed by NARA’s Center for Advanced Systems and Technologies (NCAST) within the Office of Information Services. NCAST serves as lead for collaborating in information technology research and development with governmentwide, interagency, professional, and academic organizations. Among other things, NCAST is to organize, sponsor, and participate in research in computer science, archival science, and related technologies capable of improving the lifecycle management of records. In support of this responsibility, NCAST has both planned and completed collaborative research agreements with partners that include federal organizations and consortia, industry, and academia, including the National Science Foundation. For example, NCAST has worked with the Records Management Services Working Group of the Object Management Group (OMG), an international, open membership, not-for-profit standards-setting consortium of the computer industry and other information technology organizations, on the creation of new software specifications for records management. The working group identified and documented key requirements for records management functionality in systems that manage electronic records. NCAST is also a member of the Networking and Information Technology Research and Development Subcommittee of the National Science and Technology Council’s Committee on Technology. This organization is a collaborative effort of more than a dozen federal research and development agencies that fund research in advanced information technologies such as computing, networking, and software. Examples of other collaboration efforts related to this strategy include two other groups. One of these is the NARA-chaired Advisory Committee on the Electronic Records Archives. According to its charter, this committee brings together experts from many different fields to make recommendations to the Archivist on development of ERA, and its membership includes experts from private organizations with an interest in records management, members of academia, researchers, and state officials with responsibility for electronic records. NARA also established the Federal Records Council to provide advice and support from other federal agencies to the Archivist on all aspects of records management, with special emphasis on the management of electronic records. Membership on the council, as stated in its charter, includes representatives from OMB and GSA, officials from cabinet-level departments, and representatives from communities such as science, intelligence, and the federal court system. Several other organizations outside of these groups have also sent members to Federal Records Council meetings.
Members are departmental records officers and officials from other divisions with records management responsibilities, such as information technology, information security and privacy, and Web content. According to its charter, the council contributes strategic advice and support to the Archivist in issuing records management guidance, and provides a mechanism for agencies to work together to identify strategies and best practices for electronic information and records issues. In one instance (the second strategic goal, involving the accessioning and processing of records), NARA did not set a specific strategy related to collaboration. However, in discussing this goal, the strategic plan refers to seeking out and developing partnerships to assist in improving work processes to deal with a backlog of holdings that had been accessioned, or legally transferred to NARA’s possession, but not yet processed. (Processing involves such steps as flagging records based on classification, providing enhanced descriptions of the content of the records, and making records available to the public.) In support of this goal, NARA has established, for example, a working agreement with a private company through which the company would provide metadata for digitized NARA information according to the agency’s standards. According to an official, having these metadata would provide the agency with more information about the content of the records, which assists in their processing. Table 9 summarizes NARA’s collaborative efforts related to its strategic goals and provides examples. NARA’s Open Government Plan describes further collaborative initiatives in addition to these. Issued in response to the Open Government Directive, the plan describes NARA’s efforts aimed at increasing transparency, participation, and collaboration in government. Among these is a collaboration with the Department of Justice on the development of a dashboard that would provide information on agencies’ performance in fulfilling Freedom of Information Act requests. Another is a collaboration with academic law and policy groups, such as the Legal Information Institute at Cornell University and the Center for Innovative Technology Policy at Princeton University. Furthermore, in accordance with the open government principles of transparency, participation, and collaboration, NARA has established new collaborative efforts with the public. For example, according to its Open Government Plan, NARA used Ideascale, a commercial collaboration platform, to gather perspectives from the public on the content of the plan. It has also established blogs and a wiki to further collaboration both with the public and other agencies. Its NARAtions blog and its Our Archives wiki are addressed to the public and research community that uses its archival holdings, both providing information and soliciting input. Its Records Express blog shares information and solicits comment from the federal records management community. NARA has taken steps to expand its oversight activities and improve their effectiveness. Although it cannot by itself ensure that agencies are managing records appropriately (agencies control and are responsible for their own records), NARA can use its oversight activities to help determine where records management improvements are most needed and improve its ability to influence agencies to give more priority to records management programs.
This will require that it continue to build and improve its oversight activities, including studies, surveys, inspections, and reporting. As NARA continues to refine its approach to oversight, it will be important for it to consider how to validate self-assessment data (for example, by doing followup interviews) and how to strategically plan inspections to maximize their value as oversight tools, for example, by defining key practices and inspecting them at multiple sites. Further, it will be important going forward for NARA to assess the risk that its capacity to process and approve schedules may not be sufficient to meet the demand. As an agency with a broad mission, NARA faces numerous challenges, for which its strengths in seeking collaborative opportunities should be helpful. Further, NARA’s organizational responsibilities are generally aligned with its strategic plan, and it has developed a human capital strategic plan that, if implemented effectively, would give NARA the capability to strategically manage its human capital, as called for in our strategic human capital framework. However, there are opportunities for improvement. For a few specific strategies, NARA has not yet established clear lines and assignments of responsibility. In addition, the lack of adequate enterprisewide risk management leaves the agency vulnerable to a variety of risks that may not be foreseen or mitigated. Further, until NARA has implemented the capability to manage its human capital strategically, the risk remains that it will not have the staff with the skills needed to meet present and future mission needs. To help NARA improve its management and oversight capabilities, we are recommending that the Archivist of the United States take the following six actions: To help ensure that its future assessments of the status of governmentwide and agency records management are accurate, develop additional means to validate the self-reported data in its surveys. To ensure that its inspections program helps provide a comprehensive view of federal records management and greater impetus for agency improvement, develop a plan, with milestones, that provides for systematically and strategically targeting inspections to maximize their value as oversight tools. To help ensure that it can manage the backlog in the scheduling process, assess the risk that it will be unable to keep up with schedules submitted and develop plans to mitigate that risk, if indicated. To ensure that its organization and governance reflect its strategic goals and strategies, ensure that all the specific strategies in its strategic plan have clear lines and assignments of responsibility. To ensure that NARA’s senior staff and decision makers can appropriately and quickly assess threats and vulnerabilities stemming from enterprise risks, develop and assign responsibility and resources for an enterprisewide risk management capability that allows it to monitor its internal and external environments continuously and systematically. To ensure that it has the appropriate skills and staff to meet present and future needs, give priority to completing its skills needs and gap analyses and developing a plan to fill those gaps. We received written comments on a draft of this report from the Archivist of the United States. In these comments (reproduced in appendix II), NARA concurred with the six recommendations in the report.
The Archivist stated that to address them, NARA plans to (1) develop and implement additional means to validate self-reported data from self-assessment surveys in fiscal year 2011, (2) develop a plan for systematically and strategically targeting inspections to maximize their value, (3) conduct a study of the risks associated with the backlog in its records scheduling process and develop mitigation plans, (4) review the current strategic plan to make sure it can tie strategies to specific actions and targets, (5) roll out an enterprisewide internal controls program that uses risk assessment as an integral part of managing and monitoring internal controls, and (6) consider using an existing contract to draw in additional resources to assist NARA with completing its competency modeling initiative. The agency also supplied technical comments, which we have incorporated as appropriate in the final report. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of the report to interested congressional committees, the Archivist of the United States, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to (1) assess the National Archives and Records Administration’s (NARA) effectiveness in overseeing the governmentwide management of records, including commenting on its capacity to identify risk of unlawful destruction of federal records; (2) describe its ability to preserve permanent records; and (3) assess its policies, procedures, and plans supporting key management and oversight capabilities: governance, human capital, and collaboration. For all of our objectives, we reviewed NARA documentation of its records management and preservation activities, interviewed agency officials, and reviewed prior reports by us and others. We reached out to the federal records management community to obtain information on several issues. We obtained views of selected federal government records managers by contacting records officials at several components of the Department of Defense (DOD) and through an online survey of members of the Federal Information and Records Managers Council (FIRM), an organization of federal records managers. We also convened a panel of experts to obtain information about records management challenges and best practices and NARA’s oversight of federal records management programs. We worked with the National Academy of Sciences to choose a diverse group of panel members, using an iterative discussion with representatives of the Academy’s Computer Science and Telecommunications Board to identify experts whose expertise was most applicable to our objectives. The final panel included several former NARA employees, representatives of federal agencies that deal directly with NARA, and records management experts from the private sector and academia.
The panel also included an expert in electronic records management, as well as one from the Smithsonian, whose mission is similar to NARA’s in terms of records preservation. To assess NARA’s effectiveness in overseeing governmentwide records management, we examined its use of the activities defined in 44 C.F.R. 29: surveys, studies, inspections, and reporting. We obtained input from the expert panel, from records managers at DOD, and from our survey of FIRM members. We also examined NARA’s process for approving records schedules and compared the numbers of schedules it has approved in recent years with estimates of the numbers of outstanding records series and systems. To comment on NARA’s capacity to identify risk of unlawful destruction of federal records, we reviewed applicable laws, reviewed the results of the agency self-assessment survey, and met with NARA records management staff to identify risk factors. To describe NARA’s ability to preserve permanent records, we met with NARA preservation staff and obtained input from the expert panel. We reviewed and assessed the reliability of NARA’s survey of its preservation needs and its backlog, and we analyzed its ability to process its backlog. To assess NARA’s ability to preserve electronic records, we reviewed external research and standards related to electronic records issues, interviewed staff involved in development of the Electronic Records Archives (ERA), and drew on our previous reports about the status of the ERA development process. To assess its policies, procedures, and plans supporting key management and oversight capabilities (governance, human capital, and collaboration), we did the following: We assessed NARA documents relating to governance, including strategic planning and policy documents, against requirements of the Government Performance and Results Act (GPRA) and our risk management framework. To assess whether NARA’s organization and performance measures were aligned with its strategic plans, we examined NARA’s directive that assigns responsibilities, as well as charters of temporary task forces, to determine whether lines of responsibility were clearly delineated for specific strategies in the strategic plan. We compared NARA’s risk management activities against GPRA requirements and our risk management framework. We evaluated NARA’s human capital management capabilities and its Human Capital Strategic Plan against our strategic human capital framework. We interviewed the Director, Human Resources Services Division, and the Director, Staff Development Services, and other officials. To assess NARA’s progress in implementing needed strategic human capital capabilities, we reviewed progress in implementing its Strategic Human Capital Plan against the plan’s milestones. We also analyzed NARA’s hiring process against Office of Personnel Management criteria, and examined the effects of reported recent improvements in the hiring process. We evaluated NARA’s collaboration capabilities by interviewing policy and planning staff, and analyzing agency policies and procedures related to collaboration. We obtained a list of NARA collaborative projects and examined whether collaborative activities specified in the strategic plan were being carried out. We conducted this performance audit from October 2009 to October 2010 in the Washington, D.C., metropolitan area in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, key contributors to this report were Barbara Collier (Assistant Director), Shaun Byrnes, Kami Corbett, Neil Doherty, Rebecca Eyler, Jason Kirwan, Lee McCracken, Glenn Spiegel, and Walter Vance.
The mission of the National Archives and Records Administration (NARA) is to safeguard and preserve government records, ensuring continuing access to the essential documentation of the rights of American citizens and the actions of their government. However, in today's environment of fast-evolving information technology, federal agencies are creating vast and growing volumes of electronic records while continuing to create physical records in large numbers. Accordingly, GAO was asked to assess NARA's effectiveness in overseeing the governmentwide management of records, including commenting on its capacity to identify risk of unlawful destruction of federal records; describe its ability to preserve permanent records; and assess its policies, procedures, and plans supporting key management and oversight capabilities (collaboration, governance, and human capital). To do so, GAO analyzed NARA documentation in these areas, interviewed agency officials, and reviewed prior work. The effectiveness of NARA's oversight has been improved by recent increases in its oversight activities: NARA has conducted its first governmentwide records management self-assessment survey, resumed agency inspections after a long gap, and expanded its reporting (including giving more complete information about specific agencies). These efforts have provided a fuller picture of governmentwide records management: in particular, NARA found that almost 80 percent of agencies were at moderate to high risk of unlawful destruction of records. Reporting of such results may also help influence agencies to give more priority to records management, which has historically been given low priority. However, these initiatives have limitations. For example, NARA's efforts to validate self-reported survey data are limited, as are its plans for inspections of agency records management; addressing these limitations could enhance the usefulness of these efforts as they continue to be developed. NARA also oversees agency records management through its review and approval of the schedules under which agencies may dispose of records. Following an extended effort to get agencies to schedule electronic records and systems, NARA increased the number of schedules approved per year. However, it has also increased the backlog of schedules awaiting its approval, increasing the risk that NARA's success in promoting scheduling could bring in more schedules than it can handle in a timely manner. NARA faces challenges in preserving permanent records largely because of their volume, the finite resources available, and the technological challenges posed by electronic records. NARA has a large and persistent backlog of records on paper and other media needing preservation actions. Although it treated nearly 116,000 cubic feet of at-risk archival records in fiscal year 2009, the percentage of backlog remained constant at about 65 percent, and holdings requiring preservation grew from about 2.4 million cubic feet in 2008 to about 2.6 million cubic feet in 2009. For electronic records, NARA has an electronic records archive system that is still under development and does not yet include planned preservation functions. Until the system and its preservation capabilities are fully implemented, there is reduced assurance that NARA can ensure the preservation of all electronic records. NARA's policies and procedures for collaboration, selected aspects of governance, and human capital are generally aligned with its strategic planning. 
For example, it is participating in numerous collaborative activities that support the goals and strategies in its strategic plan. However, more action is needed. For example, with regard to governance, although its organizational responsibilities are generally aligned with its strategic plan, NARA has not established an enterprise risk management capability, reducing its ability to anticipate future challenges and avoid potential crises. Finally, NARA has developed and begun to implement a strategic human capital plan, but this implementation has been delayed, which hinders the agency's ability to ensure that it has the workforce and skills it needs. GAO is making recommendations to help NARA build on its recent oversight activities and to fill gaps in its risk management and human capital management. In comments on a draft of the report, NARA concurred.
Franchises and business opportunity ventures represent large and growing segments of the retail and service sectors in the United States and are rapidly replacing more traditional forms of small business ownership in the U.S. economy. According to the International Franchise Association (IFA), about 75 industries—such as those involving fast food, service, maintenance, and lodging—operate within the franchise format to distribute goods and services to consumers. IFA estimates that there are 1,500 business-format franchises that operate more than 320,000 franchised units in the United States. IFA estimates that these franchises account for 50 percent of all retail sales and generate $1 trillion in sales annually in the United States. Data on the number and overall value of business opportunity ventures were not available, in part, because according to FTC staff, there is no national association or attorney group that represents business opportunities. In 1950, fewer than 100 companies used franchising in their marketing operations, but with the rapid expansion of franchising in the 1960s, federal and state governments began to see the need to protect prospective franchise purchasers. In 1971, FTC announced it would initiate a proceeding concerning the promulgation of a trade regulation rule on franchise sales and pre-sale disclosures. Public hearings on franchising commenced in 1972, and in 1978, FTC issued its final Franchise Rule, which took effect in October 1979. The Rule, which has the full force and effect of federal law, was promulgated in response to widespread evidence of unfair or deceptive acts or practices in connection with the sale of franchises and business opportunities. FTC provided the following distinctions, consistent with the Franchise Rule, between a franchise and a business opportunity: A franchise requires payment of at least $500 for the opportunity to sell trademarked goods and services with significant assistance or control of the franchisor. An example of a franchise is a fast food restaurant chain. To buy a franchise, the prospective purchaser would pay a required fee for the opportunity to sell the chain’s products. In turn, the chain would help the purchaser by (1) arranging for a store location, (2) providing training on how to prepare the products, and (3) providing advertising, among other things. The purchaser would agree to abide by the chain’s standards for cleanliness, quality, uniforms, and so on. A business opportunity requires payment of at least $500 for the opportunity to distribute goods and services of the seller with assistance in the form of locations or accounts. Business opportunities are less structured than franchises and impose fewer ties between the sellers and buyers. An example of a business opportunity is the purchase of vending machine routes, where the purchaser would pay a required fee for the opportunity to sell the company’s products (e.g., soft drinks, snack foods) through vending machines. The purchaser would buy the vending machines and products from the company, and the company would inform the purchaser of specific stores or locations in which to place them. The Franchise Rule is designed to enable prospective franchise and business opportunity owners to protect themselves before investing by providing them with the information needed to assess potential risks and benefits, make meaningful comparisons with other investments, and further investigate the business. 
This information is contained in detailed disclosure documents that must be provided to prospective purchasers at least 10 business days before they pay any money or legally commit to a purchase. The document includes financial and other information about the seller, the business, and the business relationship, including the name, address, and telephone number of other purchasers; a fully audited financial statement of the seller; the background and experience of the business’ key executives; the seller’s litigation history; the cost of starting and maintaining the business; and the responsibilities the buyer and seller will have to each other once the franchise or business opportunity is bought, including termination and renewal rights. Regarding the latter, the Franchise Rule requires the seller to disclose basic information about its policies and practices, including matters such as termination and renewal rights. However, the Franchise Rule does not prescribe the terms and conditions for carrying out those policies and practices. The Franchise Rule requires disclosures only to prospective purchasers. Franchise and business opportunity sellers do not register or file their disclosure documents with FTC, and FTC generally does not review or approve disclosures, advertising, or agreements. FTC’s Bureau of Consumer Protection enforces the Franchise Rule. According to FTC staff, during fiscal years 1997 through 1999, the Bureau spent an average of 13 workyears, or about 6 percent of its approximately 221 workyears, on Franchise Rule activities and enforcement. In addition to the Franchise Rule, FTC enforces section 5 of the FTC Act, which declares unlawful unfair or deceptive acts or practices in or affecting commerce. Section 5 also provides that FTC does not have authority to declare an act or practice unlawful (FTC’s “unfairness” jurisdiction) unless three specific criteria are met: (1) the act or practice causes or is likely to cause substantial injury to consumers, (2) the injury is not outweighed by countervailing benefits to consumers or to competition, and (3) the act or practice is not reasonably avoidable by consumers. According to FTC staff, in exercising its authority, FTC brings “deception” cases in many consumer protection fields, including the sale of franchises and business opportunities. In general, only FTC, not private parties, can enforce violations of the Franchise Rule or FTC Act. The FTC Act provides FTC with a broad range of remedies for violations, including injunctions, civil penalties, and refund of money to franchise and business opportunity purchasers. Also, in 1998, in conjunction with the National Franchise Council (NFC), FTC approved, on a trial basis, an Alternative Rule Enforcement Program to resolve technical or minor violations of the Franchise Rule that otherwise would be referred to the Department of Justice for civil penalty action. Franchisors FTC refers to the program are trained in Franchise Rule compliance and are monitored for a period of years. Moreover, potentially injured consumers are notified about the Franchise Rule violation and have the opportunity to resolve any claim, and possibly seek redress, against the franchisor through mediation. Violations involving fraud or unfair or deceptive business practices are not candidates for the program. As of April 2001, nine companies had been referred to the Alternative Rule Enforcement Program. States also have a role in regulating the sale of franchises and business opportunities. 
California passed the first franchise disclosure law in 1970. Today, 15 states have specific franchise disclosure laws and 24 states have specific business opportunity disclosure laws that are designed to protect prospective purchasers. Like the federal Franchise Rule, these state laws require franchise and business opportunity sellers to provide each prospective purchaser with a pre-sale disclosure document containing financial and other information. Unlike the Franchise Rule, some of these state laws require franchisors and business opportunity sellers to file their disclosure documents with a designated state agency for review for accuracy and/or completeness. In 1995, as part of its continuing review of trade regulation rules, FTC announced that it was beginning to explore the need to revise the Franchise Rule. In October 1999, FTC published proposed revisions to the Rule, which focus exclusively on the sale of franchises. According to FTC, the proposed revisions would reduce inconsistencies in federal and state disclosure requirements governing franchise sales, address changes in the marketing of franchises—such as the sale of franchises through the Internet—and provide expanded disclosures concerning franchise relationships. FTC intends to conduct a separate rulemaking proceeding for business opportunities once it has completed the Franchise Rule review process because FTC views business opportunities as distinct business arrangements that require separate disclosure requirements. For example, FTC staff noted that many of the current Franchise Rule’s pre-sale disclosures do not apply to the sale of most business opportunities, which typically involve fairly simple contracts or purchase agreements. Because of pending comment periods and subsequent FTC review and approval activities, FTC staff told us they could not provide specific information on when the revised Rule would be issued. FTC’s Franchise Rule only addresses how a franchise or business opportunity is sold to a prospective purchaser. It generally does not regulate the nature of the agreement a prospective franchise or business opportunity venture purchaser may sign or changes in the relationship after the initial contract has been signed. FTC staff told us that FTC generally lacks the authority to intervene in private franchise contracts and related relationship issues. Rather, these issues are generally considered matters of contract law that traditionally have been governed at the state level. Currently, 17 states have enacted franchise relationship laws of general applicability to govern the franchise relationship after the agreement has been signed. These laws vary in their scope, with Iowa’s relationship law recognized as the most comprehensive. State franchise relationship laws generally provide for a private right of action that permits franchisees to bring lawsuits for violations under their respective state’s particular law. States that do not have specific disclosure or relationship laws have other laws to protect consumers, such as general consumer protection or fraud statutes. These other laws give parties the right to file a lawsuit directly in state court. (App. II lists the states that have business opportunity disclosure, franchise disclosure, and/or franchise relationship laws.) Currently, federal laws governing franchise relationships are specifically limited to the automobile and petroleum industries.
Under the Automobile Dealers Day in Court Act of 1956, a franchise automobile dealer can bring an action in U.S. District Court against its automobile manufacturer to recover damages caused by the manufacturer’s failure to act in good faith in (1) performing or complying with any of the terms or provisions of the franchise agreement or (2) terminating, canceling, or not renewing the franchise. Under the Petroleum Marketing Practices Act of 1978, a franchisor engaged in the sale or distribution of motor fuel is prohibited from terminating a franchise during the term of the franchise agreement unless the termination or nonrenewal is based on grounds specified in the law. The act mandates a 90-day advance notice of the termination or nonrenewal, unless under the circumstances, it would be unreasonable to provide 90 days’ notice. The act also provides for franchisees to file a lawsuit against franchisors in U.S. District Court for failure to comply with the act’s requirements. The legislative histories of both acts indicated that they were needed to remedy the disparity of power between the franchisor and the franchisee. As mentioned earlier, Congress and others have debated whether a federal statute is needed to generally regulate franchising, particularly in regard to franchise relationship issues. Much of the debate has centered on the relative bargaining power franchisees have when dealing with their franchisors over various issues, such as the location of new franchise outlets or the termination of franchise relationships without good cause and advance, written notice. Various bills have been introduced in Congress that would have statutorily applied federal regulation to franchises in general. Among other things, these proposals would have expanded federal jurisdiction to include issues involving the relationship between franchisees and franchisors after the franchise agreement is signed. One bill, H.R. 3308, the Small Business Franchise Act of 1999, would have “established minimum standards of fair conduct in franchise sales and franchise business relationships.” According to the bill, the purpose of the act would be “to promote fair and equitable franchise agreements, to establish uniform standards of conduct in franchise relationships and to create uniform private Federal remedies for violations of Federal Law.” (App. III provides additional information on federal and state laws and regulations related to franchise relationship issues.) FTC has focused most of its franchise and business opportunity enforcement activities on business opportunity ventures because, according to FTC staff, problems such as fraud and other types of misrepresentation are much more prevalent with business opportunities than with franchises. In fact, complaints about business opportunity ventures, including those about fraudulent activity, have been much more common than those about franchises. FTC also focused most of its franchise and business opportunity investigations and court cases on business opportunities. From 1993 through 1999, FTC opened 332 investigations, most of which entailed business opportunity issues. From 1993 through 2000, FTC filed 142 business opportunity and 20 franchise cases in court and obtained some sort of relief in all of them. 
Although FTC has been successful with the cases it has pursued, we could not determine why FTC closed some of the business opportunity and franchise investigations it had opened because FTC did not require its staff to document why investigations are closed. From January 1993 through June 1999, FTC reported that it received 3,680 business opportunity and franchise complaints, of which 3,392 (92 percent) pertained to business opportunities and 288 (8 percent) pertained to franchises. According to FTC staff, although the complaint data in its database are the most comprehensive available, they do not necessarily provide a complete picture of all complaints that came to FTC from 1993 through June 1999. The FTC staff added that, for many reasons, complete data for earlier years (especially 1993 and 1994) do not exist. As a result, the FTC staff said that they would be reluctant to extrapolate from the complaint data that complaints have increased significantly since 1993. They added that more complete data for determining trends would be complaints filed in 1997 and beyond. Table 1 shows all of the business opportunity and franchise complaints FTC reported it received each year from 1993 through June 1999. According to FTC staff, the growth in the number of complaints documented during 1997 through June 1999 could be attributable to a number of things, including changes in the way FTC collects and compiles complaint data. For example, in 1998, FTC established a toll-free hotline and published a Web-based on-line complaint form, which allow consumers to report problems and allegations about such factors as abuses related to the Franchise Rule. In addition, FTC has received more complaints in recent years because it now has agreements with many groups—such as state Attorney General Offices and regional Better Business Bureaus—that collect and refer complaints for input into FTC’s Consumer Sentinel complaint database. FTC staff provided us with the results of FTC’s analysis of the 288 franchise complaints it received from January 1993 through June 1999. FTC’s analysis showed that 134 of the 288 franchise complaints did not contain sufficient information to determine the specific allegation that was being made. Of the remaining 154 complaints, FTC’s analysis showed that 13 alleged problems involving pre-sale disclosure issues covered by the Franchise Rule, such as failure to provide disclosure documents; 96 contained allegations pertaining exclusively to post-sale issues that are not covered by the Franchise Rule, such as threats to terminate a franchise relationship or failure to provide a promised franchise location; and 45 contained allegations involving both pre-sale disclosure issues covered by the Rule and post-sale issues not covered by the Rule. According to FTC’s Franchise Rule Coordinator, FTC has reviewed franchise and business opportunity complaints on a regular basis and has used more sophisticated methods as they have become available. From 1993 through 1997, for example, the Franchise Rule Coordinator said he manually prepared detailed monthly and annual reports of complaints and enforcement activities for distribution throughout FTC. In 1998, when FTC improved its data reporting and retrieval capabilities via its Consumer Response Center, the Franchise Rule Coordinator stopped preparing formal reports. 
Instead, he said he reviewed database files on a regular basis to identify potential investigations and trends, while other FTC staff also reviewed complaint data for investigative potential, especially in connection with law enforcement sweeps. The Franchise Rule Coordinator said that, beginning in January 2000, he requested monthly reports to aid him in reviewing franchise complaints. Consequently, since March 2000, FTC has generated monthly reports of all franchise complaints, which the Coordinator said he personally reviews for investigative potential. FTC has not analyzed each of the individual business opportunity complaints it has received, but FTC staff said that they believe that almost all of the business opportunity complaints represent pre-sale concerns about either fraud or misrepresentation—such as false or unsubstantiated earnings claims—that fall under FTC’s jurisdiction. The Franchise Rule Coordinator told us that FTC uses other means to evaluate business opportunity complaints. For example, he said that (1) staff from the Consumer Response Center review the business opportunity complaint data to look for patterns and practices of violations, (2) analysts in FTC’s Division of Planning and Information review complaint data for trends, and (3) federal and state enforcement officials discuss complaints during periodic conference calls with FTC staff. Since 1993, FTC has focused most of its franchise and business opportunity investigations and court cases on business opportunities. According to FTC staff, these enforcement efforts were directed more heavily at business opportunities than at franchises because FTC received more complaints on business opportunities and because fraud and other types of misrepresentation are much more likely to occur with business opportunities. FTC data showed that, from 1993 through 1999, FTC opened a total of 332 franchise and business opportunity investigations, of which 109 (33 percent) clearly involved business opportunities and 59 (18 percent) involved franchises. According to FTC’s Franchise Rule Coordinator, the remaining 164 (49 percent) investigations could not be clearly categorized from the information FTC had available because the investigating attorney did not note or was not able to determine whether the business was a franchise or a business opportunity. He also told us that although it is likely that more than 90 percent of these 164 investigations involved business opportunities, he could not provide exact numbers because FTC’s focus is on whether or not some type of violation occurred, not the type of business. Table 2 provides information on the number of franchise and business opportunity investigations FTC opened during 1993 through 1999. In regard to the fluctuations in the number of investigations FTC has opened from 1993 through 1999, FTC staff noted that the number of franchise investigations FTC opened decreased from 43 during 1993-94 to 16 during 1995-99. The FTC staff attributed the decrease to several factors. Between late 1994 and early 1995, FTC recognized that business opportunities represented a much larger problem than franchises; as a result, FTC began to focus its enforcement efforts on business opportunities. In addition, franchise cases are much more complex than business opportunity matters and consume a significant amount of law enforcement resources, and there are practical limits to the number of franchise investigations that staff can pursue at any one time because they are resource-intensive.
FTC staff told us that the number of business opportunity and franchise investigations opened does not directly correlate with the number of complaints because (1) investigations are opened as a result of sweeps and other internal case generation activities, such as reviews of the Internet and newspapers, that are not necessarily complaint-based and (2) not all complaints get investigated. Regarding the latter, FTC staff explained that many complaints do not result in an investigation because they do not meet FTC’s criteria for opening an investigation. For example, depending on the type of problem alleged, the complaint may involve issues outside FTC’s jurisdiction. Also, FTC examines such things as the level of consumer injury and the number of consumers affected to determine whether it is in the public interest to open an investigation. In this regard, FTC staff said that individual complaints may not show that a company has engaged in a pattern or practice of illegal conduct that would warrant opening an investigation. According to FTC’s analyses of the complaints it has received, the vast majority are isolated matters involving single complaints against companies. Based on these factors, most complaints FTC receives are not investigated. In addition, FTC staff told us that limited resources and other law enforcement priorities prevented FTC from pursuing every meritorious complaint it received involving franchises and business opportunities. (App. IV provides further information on the investigations process and the criteria FTC uses for deciding when to open investigations.) To better understand how FTC used its resources to carry out franchise and business opportunity investigations, we attempted to determine how long it took FTC staff to process and close investigations using the number of hours they billed for each of the 332 investigations opened from 1993 through 1999. However, information on hours billed was available for only 217 (65 percent) of the 332 investigations FTC opened throughout the period. The 217 investigations included 125 that were closed with no further legal action and 92 that resulted in cases being filed. For the 125 investigations that FTC closed with no further legal action, FTC staff billed from 1 to 3,367 hours, with an average time of 228 hours and a median time of 64 hours. For the 92 investigations for which FTC filed cases, FTC staff billed from 2 to 5,738 hours, with an average time of 887 hours and a median time of 628 hours. According to FTC staff, the overwhelming majority of the investigations for which few or no hours were billed involved business opportunities. The staff added that the reasons why few or no hours may have been charged include that (1) staff determined that the company was out of business, (2) a state or other law enforcement agency was already looking into the matter, (3) staff may not have billed for the time spent on the investigation, or (4) staff may have billed hours to projects that combined investigations (i.e., sweeps) rather than to individual investigations. FTC staff told us that FTC does not have specific written criteria or standards to measure whether it carried out its investigations in a timely manner.
According to FTC staff, the amount of time needed to complete an investigation depends on several factors, including the facts and complexity of the case, the degree of cooperation obtained from the target of the investigation, and the competing demands of the staff responsible for the investigation. The staff told us that FTC’s associate directors receive regular updates from staff on pending investigations and that the bureau director also receives this information in regular meetings with the associate directors. As with its complaint and investigation data, most of the cases FTC filed in court for violations of the Franchise Rule and/or section 5 of the FTC Act involved business opportunities. From 1993 through 2000, FTC filed 162 cases in court for violations of the Franchise Rule and/or section 5 of the FTC Act: 142 (88 percent) involved business opportunities and 20 (12 percent) involved franchises. Table 3 shows the distribution of business opportunity and franchise cases filed in court from 1993 through 2000 that involved the Franchise Rule and/or section 5 of the FTC Act. Not all of the investigations that FTC opened resulted in cases being filed in court. According to FTC staff, limited resources and other law enforcement priorities prevented FTC from pursuing every meritorious investigation involving franchises and business opportunities. The staff added that FTC generally pursues those court cases that it believes have the greatest likelihood of financial recovery for franchise and business opportunity purchasers or have the greatest deterrent effect for potential violators. Among the other criteria FTC uses to decide which cases to pursue are whether (1) the problem is an isolated event or part of a pattern or practice; (2) there is a viable, meaningful remedy; or (3) there are alternatives to federal intervention. (See app. IV for further information on FTC’s case selection criteria.) All litigated cases have resulted in such relief as court injunctions, civil penalties against franchisors, or monetary redress for investors. (App. V provides information on each case involving franchises and business opportunities that FTC filed in court from 1993 through 2000.) We reviewed a sample of files for business opportunity and franchise investigations FTC closed without taking further legal action to determine why FTC closed those investigations. We reviewed all 79 files for investigations FTC closed from 1997 through 1999 for which it took no further legal action. Specifically, we attempted to gather information on (1) the date the investigation was opened, (2) the reasons for closing the investigation, and (3) the date the investigation was closed. We reviewed all documentation in the file, including the Matter Initiation Notice, Matter Update Notice, and Matter Profile. Our results showed that, while supervisory approval had been obtained for the opening and subsequent closing of each of the investigations, only 2 of the 79 files contained documents showing the reasons why the investigations were closed. Thus, it was not clear why FTC did not take further legal action on the other 77 business opportunity and franchise investigations that it closed during the period. FTC staff told us that it is likely these investigations were closed either because of a lack of sufficient evidence of wrongdoing or because the subject was out of business. However, the FTC staff did not have any documentation to support their explanation.
According to the Comptroller General’s Standards for Internal Control in the Federal Government, all transactions and other “significant” events need to be clearly documented, and the documentation should be readily available for examination. During our review, we informed FTC staff that our report would likely contain a recommendation that FTC develop and implement procedures to require FTC staff to document the reasons why franchise and business opportunity investigations are closed. At that time, FTC staff told us that there was little, if any, historical value in reviewing past closed investigations of this type. The staff added that FTC staff have always been required to justify a recommendation to close an investigation in oral discussions with the assistant or associate directors who have responsibility for approving such requests. However, after further consideration, FTC staff determined that documenting the oral discussions was not unreasonable. Accordingly, in June 2001, the Associate Director for the Bureau of Consumer Protection’s Division of Marketing Practices issued a memorandum to all Marketing Practices staff to inform them of revised procedures related to franchise and business opportunity investigations that are closed without filing an action in court. More specifically, the revised procedures specify that every Matter Update Notice closing a franchise or business opportunity investigation must state the reason(s) why the investigation is being closed. FTC also modified its Matter Update Notice to include check boxes setting forth the most common reasons for closure. FTC uses various means, such as law enforcement summits and conference calls, to communicate and coordinate its franchise and business opportunity enforcement activities with the states. Regulatory officials from the nine states with franchise and business opportunity disclosure laws had mixed views about the effectiveness of FTC’s efforts. Generally, state business opportunity regulatory officials viewed FTC’s communication and coordination efforts as being more effective than did the state franchise regulatory officials we contacted. This may be due, in large part, to the fact that FTC’s communication and coordination efforts with state regulatory agencies during 1998 through 2000 were primarily focused on business opportunity issues. In describing its coordination with other law enforcement agencies, FTC has reported: “The Commission works closely with other federal agencies, states, and local authorities in a variety of coordinated law enforcement efforts and task forces, including individual cases involving fraud and deceptive advertising, efforts to boost industry compliance with rules and regulations, and consumer and law enforcement training programs.” FTC also reported that by sharing information and resources, joint efforts effectively target issues that have direct impact on consumers. According to FTC’s Franchise Rule Coordinator, FTC staff regularly communicate and coordinate business opportunity and franchise enforcement activities with state business opportunity and franchise regulatory officials through various means, including annual law enforcement summits, joint FTC-state enforcement actions, monthly telephone conference calls, and the Consumer Sentinel complaint database.
We surveyed the eight business opportunity and nine franchise regulatory officials in the nine states that have both business opportunity and franchise disclosure laws to obtain their views on the effectiveness of FTC’s efforts to communicate and coordinate enforcement activities in their states, and we received responses from all of them. From our survey, 13 of the 17 state regulatory officials reported that, overall, FTC’s efforts to communicate and coordinate enforcement activities during calendar years 1998 through 2000 were either “very effective” or “somewhat effective.” All eight business opportunity regulatory officials who responded reported that FTC’s overall communication and enforcement coordination efforts in 1998 through 2000 were effective. Specifically, five officials reported that FTC’s efforts were “very effective,” and the other three officials reported that FTC’s efforts were “somewhat effective.” One state business opportunity regulatory official commented that informal communication and joint enforcement actions have been highly useful in promoting effective communication and networking opportunities. The majority of the state business opportunity regulatory officials we contacted have participated in annual law enforcement summits, monthly conference calls, and joint FTC-state law enforcement actions—all of which facilitate communication and coordination. In comparison with the state business opportunity regulatory officials, state franchise regulatory officials viewed FTC’s communication and coordination efforts as being less effective. Specifically, five of the nine state franchise regulatory officials we contacted viewed FTC’s communication and coordination efforts as being “somewhat effective,” and the remaining four viewed FTC’s efforts as being “not effective” because of their limited interaction with FTC on franchise issues. One franchise regulatory official commented that since annual summits and monthly conference calls focus primarily on business opportunity issues, they are generally not effective in assisting officials who enforce state franchise laws. In general, the survey indicated that state franchise regulatory officials are interested in more interaction with FTC; among their suggestions were that FTC (1) provide better feedback on the inquiries made and complaints referred by states, (2) take more franchise enforcement actions, and (3) promote more interaction through an electronic mail list. According to FTC’s Franchise Rule Coordinator, FTC has recently begun to work with state franchise regulators to develop an electronic mail list. Appendix VI provides further information on (1) the various means FTC uses to communicate information and coordinate business opportunity and franchise enforcement activities with state regulatory officials and (2) state regulatory officials’ views of the effectiveness of specific FTC efforts to communicate and coordinate enforcement activities during calendar years 1998 through 2000. Our survey of state regulatory officials showed that support for having FTC perform reviews of disclosure documents was mixed. While a majority of the business opportunity officials who responded to our survey would like to see FTC take on this responsibility, a majority of the state franchise regulatory officials who responded did not see a need for FTC to review disclosure documents.
Specifically, we asked state business opportunity and franchise regulatory officials in the nine states that have both business opportunity and franchise disclosure laws whether FTC should review all or a random sample of disclosure documents for accuracy and/or completeness. Our survey results showed that, of the eight state business opportunity regulatory officials who responded to our survey, five responded that FTC should perform such reviews, two responded that disclosure document reviews should be left to state agencies, and the remaining official expressed no opinion. Of the nine state franchise regulatory officials who responded to our survey, two responded that FTC should perform such reviews, five responded that disclosure document reviews should be left to state agencies, and the remaining two officials expressed no opinion. According to FTC staff, FTC has neither the mandate nor the resources to review randomly selected or all disclosure documents. FTC staff further stated that because selected states already review disclosure documents, requiring FTC to perform such reviews would be costly and consume resources that could be better spent on other law enforcement activities. An official representing the North American Securities Administrators Association (NASAA) commented that state governments are generally better prepared to perform disclosure document reviews than is the federal government (i.e., FTC). In 2000, NASAA implemented a project to coordinate and streamline the franchise disclosure registration and review process. Eleven of the 12 states that require registration of disclosure documents and perform disclosure document reviews are part of the coordinated review project. The project is designed so that franchisors can register their disclosure documents in some or all registration states at one time; it is not mandatory; rather, the franchisor must opt for it. The project is based on the premise that most franchisors do not mind responding to state franchise examiners’ comments regarding disclosure documents, but they want assurances that a disclosure document approved in one state will be approved in another. Disclosure documents approved through the review process are deemed to be in compliance with franchise disclosure laws in the states conducting the coordinated reviews. Therefore, except for California (the only review state not participating in the process), NASAA would deem the approved disclosure documents suitable for submission to franchisees nationwide. This would include all states that do not have a franchise disclosure law.
The extent and nature of franchise relationship problems are unknown because neither FTC, franchise trade associations, nor state regulatory agencies have readily available, statistically reliable data—that is, the data available are not systematically gathered or generalizable—that would indicate the full scope of these problems. Based on the data it has collected, FTC recognizes that some franchisees experience franchise relationship problems or are otherwise dissatisfied with their franchise purchase. FTC staff maintain, however, that the data FTC has compiled, while not comprehensive, suggest that franchise relationship problems are isolated incidents and are not prevalent across all franchises. 
Various franchise trade association officials pointed to indicators or anecdotal information to support their views regarding franchise relationship problems, but none had any statistically reliable data on the extent and nature of these problems. Further, selected state regulatory officials did not have readily available, statistically reliable data on the extent and nature of franchise relationship problems. It may be possible to collect empirical data on the extent and nature of franchise relationship problems through a study of franchisors and franchisees—but there could be limitations to obtaining such data, as well as cost and time considerations. Nonetheless, such data might provide valuable insights as to whether a federal statute is needed to generally regulate franchise relationships. The data FTC has obtained to date, including franchisees’ complaints and comments it received during its process for revising the Franchise Rule, indicate that franchise relationship problems occur. However, according to FTC staff, these data tend to suggest that they are isolated incidents that are not prevalent across all franchises. For example, FTC complaint data showed that, from January 1993 through June 1999, FTC received 141 franchise complaints that contained allegations involving one or more franchise post-sale issues. Moreover, FTC data showed that few franchisors received more than one complaint in that the 141 complaints involved 102 separate franchisors, and that only 23 of the 102 franchisors received more than one complaint. FTC’s current assessment that franchise relationship complaints are likely isolated incidents seems to contradict an earlier statement made by FTC in its 1999 Notice of Proposed Rulemaking. In the notice, FTC stated that there were a “significant” number of complaints from franchisees pertaining to franchise relationship issues. FTC staff told us, however, that FTC’s characterization of complaints as “significant” pertained strictly to comments and concerns FTC received during the rulemaking process and are not comparable to the franchisee complaints contained in FTC’s complaint database. The staff noted that, based on the information it had at that time, FTC believed that the franchisees’ comments and concerns were “significant.” The staff added, however, that FTC’s subsequent analysis of the rulemaking record tends to confirm that franchise relationship concerns are isolated events involving a few franchisors. The FTC staff explained that since the Franchise Rule review process began in 1995, FTC has received comments or statements for the record from a total of 96 individual franchisees or trademark-specific franchisee associations. FTC staff noted that nearly half of the 96 submitted comments were identical form letters that discussed their general support for broader franchise relationship controls, but shed little, if any, light on their specific experiences. FTC staff also told us that more than half of the 96 comments raised issues involving only three franchisors. Moreover, the FTC staff told us that there was little consistency among the remaining individual comments, which covered a wide range of franchise relationship issues, such as concerns about franchise renewals, lack of performance, and lack of disclosure to existing franchisees. 
FTC staff said that, based on the information compiled during the process for revising the Franchise Rule, it was clear that some existing franchisees experience various franchise relationship problems or are otherwise dissatisfied with their franchise purchase. However, while FTC staff told us that FTC data suggest that franchise relationship problems are not widespread, they did not know the extent to which franchisees used other avenues—such as mediation, arbitration, or litigation—to address their concerns. As a result, FTC staff stated that FTC’s data are not sufficient to assess the overall extent of franchise relationship problems. FTC staff also stated that the isolated instances of franchise relationship problems do not justify FTC conducting a more widespread investigation of relationship issues or developing a new rule that addresses the terms and conditions of franchise contracts. The FTC staff told us that absent evidence of widespread franchise relationship abuses, the prudent approach is to continue to investigate instances of such abuses, where they occur, under FTC’s current unfairness authority (i.e., section 5 of the FTC Act). FTC staff noted, however, that FTC’s unfairness authority generally does not apply to franchise relationship issues. In fact, to date, FTC has conducted only two franchise investigations that were based solely on FTC’s unfairness jurisdiction. Both investigations were ultimately closed because FTC determined there was insufficient evidence to satisfy the section 5 unfairness criteria. FTC staff view pre-sale disclosure as the best available vehicle, within FTC’s statutory authority, to address franchise relationship issues. As such, FTC’s 1999 Notice of Proposed Rulemaking proposes to enhance the Franchise Rule’s disclosure requirements to provide prospective franchisees with additional information regarding the relationship before they commit to buying a franchise. FTC staff told us that this is consistent with FTC’s long-held view that free and informed choice is the best regulator of the market. According to FTC staff, proposed revisions to the Franchise Rule would, among other things, increase (1) franchisors’ disclosures about prior litigation with franchisees; (2) the information available to prospective franchisees concerning source of supply restrictions and the ability to use alternative goods; (3) the disclosures about how sites are selected and the nature of any training programs; and (4) information available about renewals, terminations, and transfers. The proposed revisions to the Rule would not address any issue that arises after franchise agreements have been signed. That is, the changes would relate to pre-sale disclosure, but would provide no additional post-sale protections. Finally, FTC staff told us that FTC’s analysis of complaints and other evidence it has collected is not sufficient to enable them to assess the need for new federal franchise relationship legislation. Rather, FTC staff said that the various franchise trade associations that represent franchisors and franchisees may be in a better position than FTC to explain the competing views on the need for legislation, as well as the consequences flowing from each, and would have the best statistics and policy analyses related to any proposed legislation. 
Officials from the four franchise trade associations we contacted—the American Franchisee Association (AFA), the American Association of Franchisees and Dealers (AAFD), the International Franchise Association (IFA), and the National Franchise Council (NFC)—told us that they were not aware of any statistically reliable data that quantify the extent and nature of franchise relationship problems. Absent such data, the officials provided indicators or anecdotal evidence that supported their particular positions about franchise relationship problems. For example, the president of AFA—a group that supports a federal statute to generally regulate franchises—said that at the organization’s annual Franchisee Leadership Summit in April 2001, the 25 franchisee leaders of independent associations who attended reached consensus that the top concerns were (1) encroachment (the franchisor placing additional franchise locations in close proximity to an existing franchisee’s outlet); (2) sourcing of supplies (where franchisees are required to buy all products used in their businesses from the franchisor or someone it designates, often at above-market prices); (3) equity/transfer/renewal issues (where franchisees cannot sell the business they own or, upon transfer or resale, franchisees have to offer the then-current contract with materially different terms); and (4) system compliance, including franchisors’ ability to arbitrarily make material changes to the franchise system. AFA did not, however, have any data on the extent to which these problems occur. In contrast, the senior vice president for government relations and chief counsel of IFA—a group that opposes a federal statute to generally regulate franchises—told us that all “reliable” indicators, such as FTC enforcement data and complaints brought alleging violations of the IFA Code of Ethics, show that there are relatively few franchise relationship problems. The official added that if the more than 1,000 franchises represented by IFA had serious problems, these problems would have surfaced by now. The IFA official told us that while litigation between franchisors and franchisees is relatively infrequent, on balance, termination appears to be the issue most likely to result in litigation. The official added that other types of issues that arise during the course of the franchise relationship—such as encroachment, transfer, or the general conduct of the parties—are much more likely to be resolved using other dispute resolution processes, such as internal dispute resolution, mediation, or arbitration. IFA did not, however, have any statistically reliable data on the extent to which these types of problems occur. Some of the franchise trade association officials we contacted told us that one way to assess the extent and nature of franchise relationship problems would be to conduct an extensive review of franchise litigation, such as cases reported in court records, franchisor disclosure documents, or in the Commerce Clearinghouse Business Franchise Guide. However, such a review would be costly and time-consuming. In addition, because each case is unique, is based on different facts, issues, and circumstances, and involves the application of different state laws, the results of such a review would not be generalizable. Moreover, we were informed that such a review would not provide a sound basis from which to draw conclusions regarding the extent of franchise relationship problems because not all franchise relationship disputes are litigated. 
Some disputes are resolved through arbitration, mediation, or other dispute resolution processes. Our work, including discussions with officials from the American Arbitration Association and the National Franchise Mediation Program, revealed no statistically reliable data on the extent to which arbitration and mediation are used to resolve franchise relationship disputes. Absent statistically reliable data on the extent and nature of franchise relationship problems, the four franchise trade associations we contacted provided divergent views on franchise relationship problems and the need for federal franchise relationship legislation. On one hand, in general, AFA and AAFD officials maintain that an imbalance of power exists between franchisors and franchisees, and they contend that franchise contracts are oppressive. They also maintain that current federal and state pre-sale disclosure laws and state franchise relationship laws are ineffective in addressing franchise relationship issues. AFA is a proponent of comprehensive federal franchise relationship legislation, whereas AAFD would prefer legislation that encourages negotiated franchise relationships. On the other hand, IFA and NFC officials maintain that franchise relationship issues are matters of contract law that should be addressed at the state level, and they contend that franchisees can obtain relief from problems under well-established common-law doctrines. They also maintain that pre-sale disclosure is the best way to protect prospective franchisees. IFA and NFC are opponents of federal legislation that would regulate franchise relationships. (App. VII contains additional information on franchise trade associations’ views on the need for federal franchise relationship legislation.) Franchise regulatory officials in seven of our nine selected states told us their states did not maintain data on franchise relationship problems. Officials in the other two states told us that, while their state had some data on post-sale complaints, the data were either not representative of all such complaints or were not readily available. More specifically, one of the two officials told us that since the state’s franchise disclosure law generally does not regulate relationship issues, the complaints received are not representative of all post-sale complaints. The other official told us that the number of post-sale complaints is not readily available because such complaints are not differentiated from pre-sale complaints. The same officials had mixed views on the need for a federal statute that would regulate franchise relationships. Of the nine officials, three reported that federal legislation is needed, two reported that legislation is not needed, three did not specifically comment on the need for legislation, and one noted that it is a “philosophical” question that depends on the relative bargaining position and strength of the parties involved. Of the three officials who responded that federal legislation is needed, two noted the need to deter franchisor abuses or to provide additional franchisee protections in several areas, while the third official noted the need to level the playing field between franchisees and franchisors. Of the two officials who responded that federal legislation is not needed, one noted that franchise relationships are contractual issues under which franchisees currently have a private right of action (to file a lawsuit directly in state court), while the other official did not provide reasons. 
Our work revealed that empirical data on the extent and nature of franchise relationship problems could be gathered through a study of franchisors and franchisees. While there could be barriers or limitations to obtaining such data, as well as cost and time considerations, such a study could provide valuable insights on the need for a federal statute that covers franchise relationships. In addition to gathering empirical data on the extent and nature of franchise relationship problems, a study could be used to obtain data on franchisor and franchisee experiences with existing remedies for resolving disputes, such as judicial remedies or other dispute resolution processes. When designing a study of this nature, one would have to consider that the results may not be generalizable to the universe of current franchisors and franchisees because of the difficulty in identifying and locating them, especially those in states that do not require franchisors to file their disclosure documents with a state agency. According to FTC staff and trade association officials, there is no comprehensive information on the number and location of franchisors and franchisees. Furthermore, in doing such a study, FTC staff suggested that it may be important to consider the views and experiences of former franchisees—a group that, according to FTC staff, may be difficult to locate. We also explored which federal agency or agencies have the expertise and would be willing to conduct or oversee a future study on franchise relationship issues. FTC staff told us that FTC lacks the expertise and resources to perform this type of research, and suggested that we contact the Department of Commerce and SBA. An official with the Department of Commerce’s International Trade Administration (ITA) told us that, in the 1980s, ITA had prepared an annual report on franchising in the economy. However, the official said that ITA no longer does research on domestic franchise issues and is no longer positioned to conduct this type of research. The official added that a study of domestic franchise relationship issues generally would not be within ITA’s core mission, and further noted that ITA does not have the in-house expertise, structure, or resources to conduct or oversee such a study. In contrast, SBA’s Acting Chief Counsel for Advocacy said that, if properly funded, SBA’s Office of Economic Research within the Office of Advocacy would be able to contract out and oversee a study of franchise relationship issues. According to SBA, the Office of Advocacy’s mission is to study the role of small business in the American economy and to work for policies and programs that will create an environment to foster small business growth and development. SBA’s Acting Chief Counsel for Advocacy and the Acting Director of the Office of Economic Research said that SBA has the capability and expertise to develop a Request for Proposal, solicit and evaluate proposals, award and oversee a contract, and review and publish results. The officials added that the Office of Advocacy has contracted for other studies on franchising during the 1990s. During our review, we found that FTC did not require its staff to document the reasons for closing franchise and business opportunity investigations that resulted in no further legal action. 
Our review of all 79 files for investigations FTC closed from 1997 through 1999 for which it took no further legal action showed that, while supervisory approval had been obtained for closing each investigation, only 2 of the 79 files documented the reasons why the investigations were closed. FTC’s failure to document the reasons for closing investigations represented an internal control weakness as defined by the Comptroller General’s Standards for Internal Control in the Federal Government. Given the number of hours FTC staff billed, on average, for investigations that FTC later closed and took no further action, closing an investigation is a significant event, and as such, federal internal control standards require that the reasons for such decisions be documented and readily available for examination. Based on our work and subsequent discussions with FTC staff, FTC revised its procedures to require staff to document the reason(s) for closing franchise and business opportunity investigations that result in no further legal action. Over the past several years, Congress and others have debated the need for a federal statute to regulate franchises and address problems that can arise after the sale of a franchise. Our work revealed no readily available, statistically reliable data on the overall extent and nature of these problems. The absence of such data makes it difficult to determine the nature of any problems and the extent to which they occur, or whether a federal statute is warranted to resolve such problems. Although Congress can consider franchise relationship legislation without this information, a study on the extent and nature of franchise relationship problems—as well as an examination of franchisor and franchisee experiences with existing remedies for resolving disputes, such as judicial remedies or other dispute resolution processes—could provide lawmakers with a better framework or basis for considering whether there is a need for a federal statute that would generally regulate franchise relationships. Such a study could be led by SBA’s Office of Advocacy, FTC, or another federal entity, with work performed by an independent research organization. However, potential data limitations, as well as cost and time considerations, are factors that should be considered when weighing the pros and cons of conducting such a study. If Congress believes that it needs empirical data before considering franchise relationship legislation, it could commission and fund a study that would (1) design and implement an approach for collecting empirical data on the extent and nature of franchise relationship problems and (2) examine franchisor and franchisee experiences with existing remedies for resolving disputes. We requested comments on a draft of this report from the FTC Chairman and the SBA Acting Administrator. In a letter dated July 16, 2001, which is reprinted in appendix VIII, the FTC Chairman said that our report correctly recognized the nature, focus, and jurisdiction of FTC's enforcement activities relating to the Franchise Rule. He also noted that based on comments we provided during the course of our review, FTC has revised its procedures to document the reasons for closing franchise and business opportunity investigations that result in no further legal action. The FTC Chairman was silent on FTC's potential involvement in the study mentioned in the Matter for Congressional Consideration. 
In a letter dated July 16, 2001, which is reprinted in appendix IX, the SBA Acting Administrator said that SBA has a longstanding record of assisting franchisees through financial assistance, technical assistance, and business counseling. He stated that SBA's Office of Advocacy has conducted studies on franchising activity and noted that, as discussed in our draft, the Office of Advocacy would be able to conduct such a study if additional funds were appropriated for this purpose. However, he also pointed out that the franchise data necessary to support such a study do not presently exist—the data are either dated or limited in scope—and would need to be created before a study could be conducted. We recognize that there could be barriers or limitations to obtaining data on the extent and nature of franchise relationship problems, as well as cost and time considerations. These are factors that should be considered when weighing the pros and cons of conducting such a study. We also recognize that federal agency involvement in this study will likely require that additional funds be appropriated. However, such a study could provide a better framework for considering whether there is a need for federal franchise relationship legislation, especially since the absence of such data makes it difficult to determine the extent and nature of franchise relationship problems. In addition to the above comments, FTC provided technical comments, which we incorporated in this report, where appropriate. We also contacted officials with the various trade associations to verify the information they provided and incorporated their comments, where appropriate. We are providing copies of this report to the Chairman and Ranking Minority Member, Senate Committee on Commerce, Science, and Transportation; Chairman and Ranking Minority Member, Senate Committee on Small Business; Chairman and Ranking Minority Member, House Committee on Energy and Commerce; and the Chairman and Ranking Minority Member, House Committee on Small Business. We are also sending copies of this report to the Chairman of the Federal Trade Commission and the Administrator of the Small Business Administration. We will also make copies available to other interested parties upon request. Please contact me or John Mortin at (202) 512-8777 if you or your staff have any questions. Other key contributors to this report were Nelsie Alcoser, Christopher Conrad, Eric Erdman, Susan Michal-Smith, and Gregory Wilmoth.
Our objectives were to describe (1) FTC’s efforts to enforce its Franchise Rule, including FTC’s analysis of complaints and actions taken regarding franchises and business opportunity ventures; (2) FTC’s efforts to communicate and coordinate its franchise and business opportunity enforcement activities with selected state regulatory officials; and (3) the availability of data on the extent and nature of franchise relationship problems. We also obtained information on the views of FTC staff, franchise trade association officials, and selected state regulatory agency officials regarding the need for federal legislation on franchise relationships. To address these objectives, we performed our work primarily at FTC headquarters in Washington, D.C., and with franchise trade association and regulatory officials in Washington, D.C., Chicago, IL, and Baltimore, MD. 
We also contacted franchise and business opportunity regulatory officials in the nine states that have both franchise disclosure and business opportunity disclosure laws (California, Illinois, Indiana, Maryland, Michigan, Minnesota, South Dakota, Virginia, and Washington). We discussed franchise relationship issues with officials from various associations that represent or deal with franchisors and/or franchisees—the American Arbitration Association, the American Association of Franchisees and Dealers (AAFD), the American Bar Association’s Forum on Franchising, the American Franchisee Association (AFA), FRANDATA Corporation (a supplier of information to and about franchises), the International Franchise Association (IFA), the International Society of Franchising, the North American Securities Administrators Association (NASAA), the National Franchise Council (NFC), and the National Franchise Mediation Program. We also discussed franchise relationship issues with state legislative officials and attorneys representing franchisors and franchisees in Iowa, since Iowa has been recognized by franchise trade officials as having the most comprehensive franchise relationship law of all the states. To address the first objective concerning FTC’s efforts to enforce its Franchise Rule, including FTC’s analysis of complaints and the actions it took regarding franchises and business opportunities, we met with staff from FTC’s Division of Marketing Practices in the Bureau of Consumer Protection and its Office of the General Counsel. Specifically, we gathered and analyzed information and documentation on FTC’s regulatory practices, enforcement, and oversight of franchises and business opportunity ventures. We also obtained and reviewed applicable laws, regulations, and FTC documents pertaining to the history of FTC’s efforts to promulgate, revise, and enforce compliance with its Franchise Rule. Further, we reviewed FTC’s Operating Manual to determine FTC’s policies and procedures for initiating and carrying out Franchise Rule investigations. As agreed with your staffs, we focused on the business opportunity and franchise complaints FTC received and investigations and court cases FTC initiated from 1993 through the most recent date available and differentiated, where possible, between (1) franchises and business opportunities and (2) pre-sale disclosure and post-sale relationship issues. In regard to complaints, we analyzed the business opportunity and franchise complaints FTC received from January 1993 through June 1999, to determine the number of business opportunity and franchise complaints FTC received, as well as whether the individual franchise complaints involved a pre-sale disclosure or a post-sale relationship issue. Our analyses of the complaint data relied on FTC’s separation of the franchise complaints from the business opportunity complaints. We did not independently verify the accuracy of FTC’s categorization of the complaints or the completeness of the complaint data FTC provided. However, we did verify that the complaint data FTC provided during our review were consistent with data published in a June 2001 FTC report entitled Franchise and Business Opportunity Program Review 1993-2000: A Review of Complaint Data, Law Enforcement and Consumer Education. According to FTC staff, this report was prepared as part of FTC’s efforts to conduct a separate rulemaking proceeding for business opportunities once it has completed the Franchise Rule review process. 
Regarding FTC’s investigation and case activities, we reviewed the criteria FTC uses to determine when to act on complaints it receives, and in general, the reasons why FTC does or does not open an investigation based on complaints. We also determined the number, type, and outcomes of the business opportunity and franchise investigations FTC initiated each year from 1993 through 1999; the criteria FTC uses to decide which investigations to open and which court cases to file; and the reasons why FTC did or did not take action on closed investigations. We also obtained information on the number, type, and outcomes of the business opportunity and franchise cases that FTC filed in court each year during 1993 through 2000. However, we did not independently verify FTC’s process for deciding which cases to investigate and which to pursue in the courts and, therefore, do not know whether FTC took action on the most appropriate and promising cases. Finally, we sought to determine the extent to which FTC documented the reasons for closing the investigations, by examining the 79 investigation files for those business opportunity and franchise investigations closed from 1997 through 1999 for which FTC took no further legal action. Specifically, we used a structured data collection instrument to gather information from each of the 79 investigation files on (1) the date the investigation was opened, (2) the source of the investigation (i.e., sweep, consumer complaint, etc.), (3) the potential problem or violation being investigated, (4) the reason(s) for closing the investigation, and (5) the date the investigation was closed. As part of our review, we reviewed all documentation in the file, including the Matter Initiation Notice, Matter Update Notice, and Matter Profile. We did not compare the complaint data provided by FTC with the complaint data reported in our 1993 report primarily because, according to FTC staff, they had not analyzed the individual franchise complaints cited in the 1993 report to remove inquiries from actual complaints, and the 1993 report did not differentiate between business opportunity and franchise complaints. Furthermore, we did not compare the data collected from FTC on FTC Franchise Rule investigations with the results of our 1993 report because the 1993 report did not differentiate between franchise and business opportunity investigations. In addition, FTC no longer carries out investigations the way it did in 1993. For example, FTC used to distinguish between initial phase and full phase investigations, but it no longer makes that distinction. To address the second objective concerning FTC’s efforts to communicate and coordinate its franchise and business opportunity enforcement activities with selected state regulatory officials, we interviewed FTC staff to identify FTC efforts to assist states in enforcing franchise and business opportunity laws. Then, using a structured data collection instrument, we contacted business opportunity and franchise regulatory officials in the nine states that have enacted both franchise disclosure and business opportunity laws. Specifically, we contacted cognizant officials from the following agencies within each of the states: California. Office of the Attorney General, Consumer Law Section; and the Business, Transportation and Housing Agency, Department of Corporations; Illinois. Office of the Attorney General, Franchise Bureau; and the Office of the Secretary of State, Securities Department; Indiana. 
Office of the Attorney General; and the Office of the Secretary of State, Securities Division; Maryland. Office of the Attorney General, Securities Division; Michigan. Office of the Attorney General, Consumer Protection Division; Minnesota. Department of Commerce, Enforcement Division; South Dakota. Department of Commerce and Regulation, Securities Division; Virginia. State Corporation Commission, Division of Securities and Retail Franchising; and Washington. Department of Financial Institutions, Securities Division. The views of state regulatory officials from these agencies are not generalizable to other states. As part of our audit work addressing FTC’s coordination efforts, we also explored the issue of whether FTC should perform reviews of franchise and business opportunity disclosure documents—a function FTC does not currently perform. To address this issue, we contacted business opportunity and franchise regulatory officials from the nine states listed above, as well as from NASAA. Further, we discussed the feasibility of FTC performing such reviews with staff in FTC’s Division of Marketing Practices within the Bureau of Consumer Protection and in its Office of the General Counsel.
To address the third objective concerning the availability of data on the extent and nature of franchise relationship problems, we interviewed staff from FTC’s Division of Marketing Practices within the Bureau of Consumer Protection and its Office of the General Counsel. We also interviewed officials from four franchise trade associations (AAFD, AFA, IFA, and NFC), whose membership, in general, consists of the following. AAFD primarily represents the rights and interests of franchisees. AAFD has about 6,000 members, including franchisees who own and operate more than 14,000 franchised outlets. AFA primarily represents the rights and interests of small business franchisees. AFA represents about 14,000 small business owners of more than 30,000 franchised outlets. IFA primarily represents the rights and interests of franchisors and franchisees. IFA represents about 800 franchisor members, 2,000 individual franchisee members, and 30 franchisee associations and councils representing another 30,000 franchised outlets. NFC primarily represents the rights and interests of large franchisors (i.e., companies with franchise systems of more than 200 units that have been operating for at least 5 years in compliance with applicable franchise laws, rules, and regulations). NFC represents 16 companies that operate over 40 national franchise systems. Further, we contacted officials from franchise regulatory agencies in the nine selected states, as well as officials from various franchise associations, including the American Arbitration Association, the American Bar Association’s Forum on Franchising, FRANDATA Corporation, the International Society of Franchising, NASAA’s Franchise and Business Opportunity Project Group, and the National Franchise Mediation Program. We also contacted cognizant FTC staff and officials from franchise trade associations and selected states to gather their views on the need for federal franchise legislation. Moreover, we interviewed FTC staff, an official from the Department of Commerce’s International Trade Administration, and officials from the Small Business Administration’s Office of Advocacy to determine whether their agencies have the expertise and would be willing to conduct or oversee a future study on franchise relationship issues. 
Finally, we researched FTC’s role in addressing post-sale relationship issues, including the scope and applicability of section 5 of the FTC Act, and interviewed FTC staff about their role regarding these issues. We also reviewed the legislative histories of federal franchise laws covering the automobile and petroleum industries and reviewed the 17 state franchise relationship laws of general applicability that were identified in the Commerce Clearinghouse Business Franchise Guide. We did not, however, compare the laws or analyze their appropriateness. Further, we reviewed the transcript from a congressional hearing on franchise relationship issues, and we reviewed the Small Business Franchise Act of 1999 (H.R. 3308), as introduced in the 106th Congress, which, if passed, would have established federal jurisdiction over franchise relationship issues. In addition, we interviewed the state senator from Iowa who was involved in passing Iowa’s franchise relationship law and franchise attorneys who lobbied for and against it. We conducted our work between August 2000 and June 2001 in accordance with generally accepted government auditing standards. We discussed the results of our work with responsible FTC staff and SBA officials and have incorporated their comments, where appropriate. We also contacted officials at AAFD, AFA, IFA, and NFC to verify information they provided and incorporated their comments, where appropriate.
[Appendix figure legends: shaded states have business opportunity, franchise disclosure, and franchise relationship laws; 12 states require registration of disclosure documents and have staff that review documents.]
Franchising is a form of business relationship based on a contract. Except for the automobile and petroleum industries, federal laws do not address the franchisor-franchisee relationship. During the 1990s, Congress considered several proposals for federal legislation on franchise relationships, but none became law. FTC traditionally does not regulate or set the terms of private contracts in franchising or in any other economic sector. Absent specific federal franchise statutes or regulation, franchise relationships are generally considered matters of contract law that traditionally have been regulated at the state level.
The legislative history of the federal law covering automobile dealer franchises states:
“Hearings conducted by Congress contained numerous instances of automobile manufacturers coercing and intimidating their franchised dealers. A primary source of the manufacturers’ power over their dealers stems from the unilateral nature of the franchise agreements. Automobile dealers have been subjected to economic duress and intimidation and have been unable to obtain redress in the courts. The bill assures the dealer an opportunity to secure a judicial determination in the courts regardless of the contract terms as to whether the automobile manufacturer has failed to act in good faith in performing or complying with any of the provisions of his franchise or in terminating, canceling or not renewing his franchise.”
Similarly, the legislative history of the federal law covering petroleum marketing franchises states:
“In recent years the friction between franchisors and franchisees in marketing of motor fuels has become so great that it had threatened adverse impacts upon the Nation’s motor fuel distribution and marketing system. Numerous states have initiated various legislative actions to address these petroleum product franchising problems. These actions have unfortunately resulted in an uneven patchwork of rules governing franchise relationships which differ from State to State. 
Needed is a single, uniform set of rules governing the grounds for termination and non-renewal of motor fuel marketing franchises and the notice which franchisors must provide franchisees prior to termination of a franchise or non-renewal of a franchise relationship.”
Since 1992, several separate proposals for additional franchise relationship legislation have been introduced in Congress, none of which became law. For example, the Small Business Franchise Act of 1999 (H.R. 3308) proposed, among other things, a comprehensive scheme for regulating the franchise relationship and included provisions on contract terminations and transfers; encroachment; the purchase of goods or services from designated sources of supply; and franchisees’ rights to associate with other franchisees. The bill also provided franchisees with the right to file a lawsuit against franchisors for violations of the act. As previously mentioned, FTC’s Franchise Rule only addresses how a franchise is sold to a prospective purchaser. It generally does not regulate the nature of the agreement a prospective franchise purchaser may sign or changes in the relationship after the initial contract has been signed. FTC staff told us that FTC generally lacks the authority to intervene in private franchise contracts and related relationship issues. FTC generally does not have specific statutory authority to intervene in or regulate private contractual matters, including franchise contracts. According to FTC, the only relevant authority it has that could possibly relate to franchise relationships is section 5 of the FTC Act, which declares unlawful unfair or deceptive acts or practices in or affecting commerce. Section 5 also provides that for FTC to declare an unfair act or practice unlawful (known as FTC’s “unfairness” jurisdiction), three specific criteria must be met: (1) the act or practice causes or is likely to cause substantial injury to consumers, (2) the injury is not outweighed by countervailing benefits to consumers or to competition, and (3) the act or practice is not reasonably avoidable by consumers. According to FTC, given these criteria, its unfairness jurisdiction generally does not give FTC authority to reach the substantive provisions of franchise contracts or otherwise intervene in franchise relationship issues. FTC staff provided further information on FTC’s unfairness jurisdiction criteria as discussed below.
Substantial injury. According to FTC staff, in order for FTC to exercise its unfairness jurisdiction over the terms and conditions of franchise contracts, there must be evidence of substantial injury. Complaints alleging oppressive contract terms and conditions generally assert that they cause or threaten to cause significant monetary injury to the complainant. FTC staff added, however, that they seldom see more than a few atypical complaints of this nature about any particular franchise system. Thus, according to FTC staff, in many cases, the “substantial” injury element of the unfairness criteria cannot be met.
Countervailing benefits. According to FTC staff, a more difficult issue is countervailing benefits. Franchise systems, like all businesses, are influenced by market forces. Consumer tastes change, and competition may arise unexpectedly. Accordingly, franchisors may desire to create contracts that maximize their ability to respond quickly to market forces. 
For that reason, a franchisor, for example, may wish to reserve the right to offer franchises on a nonexclusive basis or to reserve the right to sell goods and services through alternative channels of distribution. This enables the franchisor to move quickly to meet the competition if a new territory opens or a new distribution method arises. Other terms and conditions are designed to ensure system uniformity, which consumers often expect from a franchise system. Therefore, in many instances, a franchisor’s choice of contract terms and conditions is based upon some economic rationale that is designed to benefit consumers and/or the system’s existing franchisees. According to FTC staff, the benefits flowing from these contractual terms may, in some cases, outweigh the allegations of “oppression” by complainant franchisees.
Unavoidability. According to FTC staff, when considering the substantive terms and conditions of franchise contracts, unavoidability is the most difficult standard to satisfy. Franchises are discretionary purchases. That is, no aspiring entrepreneur is forced to purchase a franchise in order to be in business. Moreover, franchising is only one method of entering into a business. Franchising also covers a wide variety of economic sectors, and for the most part, there is competition in each sector. Therefore, the market offers many choices for anyone wishing to operate a business. According to FTC staff, under these circumstances, existing franchisees would be hard-pressed to establish that contractual provisions they voluntarily read, agreed to, and signed were somehow unavoidable. The FTC staff added that proving this is an even more daunting task, because prospective franchisees are required to receive a disclosure document at least 10 business days before they sign the franchise agreement or pay any fee. Presumably, every prospective franchisee has the opportunity to (1) review the disclosure document before signing the contract; (2) seek legal, accounting, or marketing counsel; and (3) speak to both former and current system franchisees. In short, according to FTC staff, it is not FTC’s role to second-guess a prospective franchisee’s wisdom in signing a particular franchise agreement, as long as the prospective franchisee is forewarned about the legal consequences of his or her actions.
According to FTC staff, isolated instances of miscellaneous relationship issues cannot justify a more widespread investigation of relationship issues, let alone substantive rulemaking that addresses franchise contracts. The staff added that before FTC could consider developing a rule that addresses the substantive terms of private franchise contracts, it would need not only evidence of substantial injury, but also sufficient information that would enable FTC to weigh the alleged injury against any countervailing benefits to the public at large or to competition. In addition, FTC staff noted that FTC would need evidence showing that franchisees cannot reasonably avoid the alleged injury. The staff further stated that while franchisees and their advocates suggest that economic harm to individual franchisees may result from some franchisor practices, they have not shown to date that such injury is substantial and not outweighed by countervailing benefits. 
Further, FTC staff told us that in at least some instances, prospective franchisees could avoid injury by comparison shopping for a franchise system that offers more favorable terms and conditions and by considering alternatives to franchising as a means of business ownership. Absent evidence on widespread franchise relationship abuses, FTC believes the prudent approach is to continue to investigate instances of such abuses, where they occur, under FTC’s current unfairness authority. According to FTC staff, application of FTC’s unfairness jurisdiction in a franchise matter is most likely to occur in a situation in which a franchisor attempts to unilaterally modify a contract or breach a contract with franchisees. They noted that in most instances, such conduct is unavoidable. Nonetheless, for FTC to find unfairness, there still must be substantial injury that is not outweighed by countervailing benefits. To date, FTC has conducted only two franchise investigations that were based solely on FTC’s unfairness jurisdiction, both involving an allegation of a franchisor’s breach of contract. Both investigations were ultimately closed because FTC determined there was insufficient evidence to satisfy the section 5 unfairness criteria. As previously mentioned, franchise relationships are generally considered matters of contract law that traditionally have been governed at the state level. We identified 17 states that have enacted general franchise relationship laws that specifically regulate certain aspects of the relationship after the initial contract has been signed. While these laws vary in their scope, all of them address the termination of a franchise agreement, and all but one (Virginia) address contract renewal. Other areas covered to varying degrees include the transfer of a franchise, encroachment, the purchase of goods or services from designated sources of supply, franchisees’ right to associate with other franchisees, and forum selection. Regardless of whether or not a state has a law that specifically covers the franchise relationship, franchisees always have the right to file a civil lawsuit against a franchisor for any contractual disputes. Many states have a “little FTC Act” (modeled after the FTC Act) or some type of general consumer protection or fraud statute that franchisees can use to address contractual disputes. These statutes are referred to in different states, for example, as consumer protection acts, consumer sales acts, deceptive trade practices acts, and consumer fraud acts. The states’ franchise relationship laws and other consumer protection or fraud statutes generally allow franchisees to file lawsuits in state court against franchisors for violations of these state laws. To gain a better understanding of franchise relationship issues at the state level, we reviewed Iowa’s franchise relationship law and interviewed Iowa officials involved in enacting the law. Iowa’s law is recognized by franchise trade officials as being the most comprehensive of all the states. 
Iowa’s franchise relationship law includes provisions that prohibit franchisors from terminating a franchise without good cause and at least 30 days prior written notice; refusing to renew a franchise unless the franchisor has provided 6 months written notice of nonrenewal and either good cause exists or certain circumstances exist, such as when the franchisor completely withdraws from the market served by the franchisee; rejecting a proposed transfer of a franchise unless the proposed transferee fails to meet the franchisor’s reasonable current qualifications for new franchisees and such rejection is not arbitrary or capricious; and requiring that franchisees purchase goods or supplies exclusively from the franchisor or designated sources when goods and supplies of comparable quality are available from other sources. According to officials we met with in Iowa, the most contentious part of Iowa’s franchise relationship law relates to encroachment. In general, the law provides franchisees a cause of action to recover monetary damages if a franchisor (1) develops, or grants a franchisee the right to develop, a new franchise outlet in unreasonable proximity to the existing franchisee’s outlet and (2) the new outlet has an adverse effect on the gross sales of the existing franchisee’s outlet. An Iowa state senator who played a key role in enacting Iowa’s franchise relationship law told us he was unaware of any data on the extent of franchise relationship problems in Iowa. Rather, he noted that Iowa’s law was initially passed following an Iowa legislature study of franchise regulation, which included testimony and other statements made by proponents and opponents of franchise legislation. The senator added that the primary reason Iowa got involved in regulating franchise relationship issues was a provision in franchise agreements requiring franchisees operating in Iowa to settle disputes and file lawsuits outside of Iowa. Under Iowa’s law, a provision in a franchise agreement requiring franchisees who are located in Iowa to go to other states to settle disputes and file lawsuits is unenforceable.
The investigative process under FTC’s Franchise Rule involves four major phases: (1) receiving complaints and inquiries about franchisor actions, (2) performing preliminary screens of complaints, (3) conducting investigations, and (4) taking legal actions against franchisors or closing the investigations without taking any legal actions against the franchisors. FTC may begin investigations based on information from external sources, such as consumer complaints, or from internal actions, such as FTC-initiated inquiries. Investigations may result in such actions as FTC filing, through the Department of Justice (DOJ), a consent decree or a complaint in court that may lead to an eventual judicial action against a franchisor or closing the investigation without taking any further action. FTC typically considers a number of factors to determine whether it will open an investigation. According to FTC staff, many investigations stem from business opportunity sweeps, reviews of newspaper advertisements, Internet research, or other internal FTC case generation activities. On the basis of these factors, as well as application of its criteria for screening complaints, most complaints FTC receives are not investigated. According to FTC staff, the factors FTC considers are as follows:
The type of problem alleged. 
In reviewing a business opportunity or franchise complaint, FTC typically determines first whether the complaint alleges a violation of a law enforced by FTC. Many complaints do not constitute violations of any laws enforced by FTC. For example, a complaint may allege that (1) the franchisor has breached its franchise agreement, (2) the franchisee is dissatisfied with the quality of goods offered for sale, or (3) the franchisee is dissatisfied with the investment and wants to seek a refund. Generally, these problems do not constitute federal law violations, and enforcement by FTC is not warranted.
The level of consumer injury and the number of consumers affected. Because FTC’s resources are limited, it seeks to focus on those complaints that will “accomplish the greatest good for the greatest number of consumers.” Accordingly, as a matter of policy, FTC generally does not pursue individual consumer complaints or intervene in disputes between individual franchisees and franchisors. Rather, FTC focuses on those companies that exhibit a pattern or practice of violations nationwide.
The likelihood of preventing future unlawful conduct. FTC may also consider the likelihood that any enforcement action will prevent future unlawful conduct. For example, where would-be defendants are out of business, enforcement of the law would be futile.
The likelihood of securing redress or other relief. FTC typically considers whether a law enforcement action will result in securing redress or other relief. In this regard, FTC considers the viability of law enforcement action, the financial status of the business opportunity seller or franchisor, and any potential injury to existing franchisees.
Additional law enforcement considerations. FTC may consider several additional factors, such as whether (1) the problem can be addressed at the state level, (2) individuals can remedy the problem on their own under existing state laws, and (3) there are serious law violations that can result in substantial consumer injury.
FTC typically considers a number of factors to determine which cases it will pursue through the courts. Some of these criteria are the same factors FTC uses in deciding to open an investigation. For example, among the factors FTC first determines are whether (1) there is an allegation of a violation of law enforced by FTC, (2) the alleged violation is within the applicable statute of limitations, and (3) there is a pattern or practice of such problems. If these factors can be established, FTC can then apply more specific case selection criteria, which include the following:
The viability of law enforcement action. FTC considers such factors as whether (1) the alleged violations are close to the statute of limitations; (2) witnesses can be located, and if so, how cooperative they will be; and (3) evidence is available and sufficient to demonstrate that a law violation occurred.
The viability of a meaningful remedy. FTC considers such factors as (1) whether the company has any assets that could be used to compensate those harmed or pay civil penalties and (2) what the deterrent effect on the company would be.
Alternatives to federal intervention. FTC considers such factors as whether (1) the franchisee(s) can sue under state law and (2) the matter is appropriate for referral to state authorities or to the NFC’s Alternative Rule Enforcement Program. 
[Table: FTC Franchise Rule enforcement cases, showing the monetary result obtained in each matter (consumer education payments, disgorgement, bans, and bonds, with many amounts suspended and a footnote indicating where the company was out of business) and descriptions of the alleged violations. The alleged violations most commonly involved misrepresentations about potential earnings, references, support services, demand for products, refund policies, and exclusive territories; failures to furnish earnings claims documents or otherwise comply with the Franchise Rule; violations of section 5; and, in one matter, criminal contempt for violating a prior court order.]

During 1998-2000, eight Franchise Rule matters were referred to NFC’s Alternative Rule Enforcement Program.
Action Games Technologies, Inc.; Allstates Leasing, Inc.; American Manufacturing Industries, Inc.; Burger Quik, Inc.; Coin Management, Inc.; Corporate Travel Services, Inc.; DBJ I, Inc.; DLW Distributors; Entertainment Enterprises, Inc.; GBC Enterprises, Inc.; E-Z Vend; Kick Start; Multi Vend; Research America; Snack Vending USA; Sun & Fun Vacation Club; Vend-A-Nutt; Honor America, Inc.; Indoor Amusement Games, Inc.; Jameson & Adams, Inc.; Magnum Vending Corp.; North American Pharmaceutical, Inc.; TV Ventures; Northwest Marketing, Inc.; Cascade Vending and/or Quick Vend; Novelty Plush, Inc.; Debbie’s Amusements; Prizes Unlimited; Olympic Entertainment, Inc.; Olympic Games International; Omni Investors Group, Inc.; Omni Marketing Group, Inc.; Outreach America, Inc.; Juice De Lite; Raks-4-Kids; Pizza King, Inc.; Family Entertainment; Pizza Royale, Inc.; Project America, Inc.; R&J Vending, Inc.; S&M Manufacturing Corporation; S&M Industries, Inc.; Treat Vendor, Inc.; U-Vend, Inc.; Boca Amusements; United Capital, Inc. FTC communicates information and coordinates enforcement activities with state business opportunity and franchise regulatory officials through various means, including annual law enforcement summits, joint FTC-state enforcement actions, monthly telephone conference calls, and the Consumer Sentinel complaint database. FTC staff commented that by sharing information and resources, joint efforts effectively target issues that have direct impact on consumers. To gather information on the effectiveness of FTC’s efforts to communicate information and coordinate enforcement activities with state regulatory officials from calendar year 1998 through 2000, we contacted the eight business opportunity and nine franchise regulatory officials in the nine states that have both business opportunity and franchise disclosure laws to obtain their views on the effectiveness of FTC’s efforts to communicate and coordinate enforcement activities in their states, and we received responses from all of them. The survey results showed that state business opportunity regulatory officials tended to view FTC’s communication and coordination efforts as being more effective than did the state franchise regulatory officials. FTC communicates information and coordinates enforcement activities with state business opportunity and franchise regulatory officials through various means. The sections that follow provide information on the means of communication FTC has used in recent years. Since 1995, FTC and NASAA have jointly sponsored annual franchise and business opportunity law enforcement summits. According to FTC staff, the summits provide a vehicle for FTC and state business opportunity and franchise regulatory officials to communicate and coordinate law enforcement priorities for the coming year. Summit participants have included representatives from state agencies responsible for business opportunity and franchise issues, including Offices of State Securities Commissioners, Attorneys General, and other law enforcement agencies. These summits cover such issues as improving FTC-state working relationships, trends in the business opportunity and franchising industries, and planning joint FTC-state enforcement actions. FTC periodically conducts joint investigations and sweeps with state and federal law enforcement officials. 
From 1995 through 2000, FTC conducted five joint sweeps that included participants from the Department of Justice, as well as selected state agencies responsible for business opportunity and franchise enforcement issues. These five sweeps resulted in 45 FTC cases filed, 44 DOJ cases filed, and 163 state enforcement actions. All five sweeps involved business opportunities. According to FTC staff, the types of problems found with franchises— such as the lack of proper disclosure—do not generally lend themselves to sweeps. Table 7 provides further information on the five FTC-state coordinated sweeps conducted from 1995 through 2000. Since 1995, FTC has held monthly telephone conference calls with various state business opportunity and franchise regulatory officials to exchange information and discuss ongoing and prospective enforcement actions. FTC staff said that 25 to 30 state agencies usually participate in these conference calls. According to FTC staff, the conference calls focus on improper patterns or practices that the participants have uncovered in performing their enforcement functions. The FTC staff added that since most of the complaints and problems that are brought to the attention of the participants involve business opportunities, the conference calls generally do not involve discussions of franchise enforcement issues. Consumer Sentinel is an on-line central repository for consumer complaints relating to consumer and Internet fraud and identity theft, maintained by FTC’s Division of Planning and Information. According to FTC staff, Consumer Sentinel is also a vehicle for sharing information with state law enforcement agencies concerning business opportunity and franchise complaints, investigations, and court cases. More than 250 federal, state, local, and international law enforcement agencies have direct online access to Consumer Sentinel data; however, FTC cannot easily determine the extent to which state agencies actually use this resource. FTC staff commented that Consumer Sentinel capabilities enhance their ability to promote communication and joint enforcement actions with agencies. For example, Consumer Sentinel users can be alerted if other users have information on a company or type of scheme by submitting an on-line “alert” form. Consumer Sentinel also allows users to receive periodic updates, based on their specific search criteria, and also obtain contact information on any Consumer Sentinel law enforcement member. In addition to its own sponsored events, FTC participates in NASAA’s Franchise and Business Opportunity Project Group throughout the year. The project group focuses on improving franchise disclosure requirements and improving communication among states and FTC concerning franchise and business opportunity enforcement actions. The project group consists of FTC’s Franchise Rule Coordinator and state regulatory officials who serve on a rotating basis. The project group provides an electronic mail service for NASAA members to exchange information on complaints and investigation matters. The chair of the project group stated that by working together with FTC, member states have the opportunity to participate in more (and more creative) actions than the states would normally have the resources to undertake. The chair added that FTC’s involvement in the project group has been an important tool for discussing franchise issues since FTC’s monthly telephone calls primarily focus on business opportunity issues. 
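The alert and periodic-update features of Consumer Sentinel described above amount to matching newly filed complaints against search criteria saved by member agencies. The short Python sketch below illustrates that kind of matching in the abstract; the class names and fields are assumptions for illustration and do not describe Consumer Sentinel’s actual design or data model.

    # Illustrative sketch of matching new complaint records against saved
    # search criteria. Names are assumptions, not Consumer Sentinel's design.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class ComplaintRecord:
        company: str
        scheme_type: str   # e.g., "business opportunity" or "franchise"
        state: str

    @dataclass
    class SavedSearch:
        agency: str                                   # subscribing member agency
        companies: List[str] = field(default_factory=list)
        scheme_types: List[str] = field(default_factory=list)

        def matches(self, rec: ComplaintRecord) -> bool:
            by_company = not self.companies or rec.company in self.companies
            by_scheme = not self.scheme_types or rec.scheme_type in self.scheme_types
            return by_company and by_scheme

    def alerts(new_records: List[ComplaintRecord],
               searches: List[SavedSearch]) -> List[Tuple[str, ComplaintRecord]]:
        """Pair each subscribing agency with the new records matching its criteria."""
        return [(s.agency, r) for s in searches for r in new_records if s.matches(r)]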
Our survey of business opportunity and franchise regulatory officials in those states that have both franchise and business opportunity disclosure laws showed that state business opportunity regulatory officials tended to view FTC’s communication and coordination efforts as being more effective than did the state franchise regulatory officials. State business opportunity officials generally believed that FTC’s communication and coordination efforts were effective. The state officials found the joint FTC-state enforcement actions (e.g., sweeps and investigations) and informal communication (e.g., electronic mail, telephone calls, and faxes) to be the most effective. Table 8 provides further information on the eight state business opportunity officials’ views of the effectiveness of FTC’s various communication and coordination efforts from 1998 through 2000. State franchise regulatory officials generally believed that FTC’s communication and coordination efforts were less effective than their business opportunity counterparts. The difference in opinion may be due, at least in part, to the fact that many of the state franchise officials had not participated in many of the events or used FTC’s database. The state officials found the annual law enforcement summits to be the most effective communication and coordination activity used by FTC. Table 9 provides further information on the nine state franchise officials’ views of the effectiveness of FTC’s communication and coordination efforts from 1998 through 2000. The franchise trade associations we contacted provided divergent views on the need for federal legislation on franchise relationships. Proponents of federal legislation maintain, among other things, that legislation is needed to address the franchisees’ relative lack of bargaining power in the franchise relationship and contend that current federal and state pre-sale disclosure laws and state franchise relationship laws are ineffective in addressing franchise relationship issues. Opponents, however, maintain that franchise relationships are matters of contract law that should be addressed at the state level and contend that pre-sale disclosure is the best way to protect prospective franchisees. The following sections provide more specific information on the views of the American Franchise Association (AFA)—a leading proponent of federal franchise relationship legislation—and the International Franchise Association (IFA)—a leading opponent of such legislation. According to AFA officials, the gross disparity in financial strength and legal power between franchisors and franchisees has led to increasingly onerous contracts and problems in franchise relationships. The officials explained that it is their view that franchise contracts are increasingly heavy-handed and oppressive to the degree that they would not be seen as commercially reasonable in any other context. The officials believe that these contracts are, in fact, creating a barrier to small business entrepreneurs entering retail businesses. AFA officials told us that the biggest problem with franchise contracts is that franchisors reserve to themselves absolute decision-making power over a wide variety of matters during the entire term of the contract. The officials explained that a prospective franchisee may do his or her due diligence, investigate the system, talk to franchisees, and be comfortable in signing the current franchise agreement. 
The officials noted, however, that most franchise agreements allow the franchisor to materially and unilaterally make changes to the franchise relationship, which can significantly alter the economic conditions for franchisees. They stated that these wholesale changes are made during the term of the franchise agreement through the prevalent use of operations manuals that franchisors reserve the right to amend at any time. The officials added that even more extensive changes are made when the agreement is up for renewal or when the franchise business is being sold. According to AFA officials, common examples of contract provisions that give rise to such changes are the franchisor’s
- reserving the right to increase advertising or royalty fees or impose assessments;
- ability to change the operating policy manual, which can encompass fundamental financial and capital requirements and with which the franchise agreement obligates the franchisee to comply;
- ability to place additional locations in close proximity to an existing franchisee (encroachment);
- ability to distribute products and services through alternative modes of distribution (e.g., direct-shipping of products through catalogues, the Internet, and alternate retailers) and/or another brand name;
- reserving the right to be the sole supplier of goods and services used or sold from the franchisee’s business, often charging above-market prices to its captive franchisees; and
- option to purchase the business when the franchise agreement has expired or is terminated, with the provision that the sale price will not be fair market value but the depreciated value of assets or other such formulas that wholly deny the franchisee the ability to enjoy the fruits of his/her labor.

AFA officials told us that, while these types of unilateral actions may increase a franchisor’s overall revenues, they can significantly impact a franchisee’s profitability and the value of the business. The officials added that some of the unilateral changes to franchise relationships involve issues that no franchisee could have anticipated upon the initial signing of the contract. In other words, they said that a franchisee may be bound by changes to the relationship that, had the franchisee known of them in advance, would have kept him or her from signing the agreement in the first place. AFA officials also told us that some franchise agreements do not allow for contract renewal at all, and if they do, provide that it will be “according to the then current and materially different terms and conditions.” They added that there is nothing in these provisions that says these terms and conditions will be “commercially reasonable” or any other provision for basic fairness. Further, the officials noted that the “patchwork quilt” of federal and state pre-sale disclosure laws and state franchise relationship laws does not effectively address problems in the franchise relationship. According to AFA officials, since FTC staff maintain that FTC generally lacks the authority to intervene in private franchise contracts and related relationship issues, AFA members feel they have no alternative but to seek a legislative solution to their problems. AFA believes that federal franchise relationship legislation is needed to address what they consider to be the franchisors’ pervasive misuse of power and to alleviate the inconsistent treatment of franchisees within the states. As such, AFA was a primary proponent of the Small Business Franchise Act of 1999 (H.R.
3308), as introduced in the 106th Congress, which (1) proposed minimum standards of conduct in franchise business relationships and (2) addressed other aspects of the franchise relationship, including contract renewals, terminations, and transfers; the location of new franchises in relation to existing franchises; the purchase of goods or services from sources other than the franchisor; and franchisees’ rights to associate with other franchisees. The bill also provided franchisees with the right to file private civil lawsuits for violations of the act. AFA officials maintain that even if most or many franchisors do not abuse their position and power, effective federal standards are still needed to discourage franchise abuses. According to IFA officials, franchising works because entrepreneurs benefit from the flexibility to structure franchise relationships in the manner that works best for their product, service, or industry. The officials noted that franchise agreements must reserve to the franchisor effective rights to impose discipline on the network in order to (1) ensure a uniform look and quality for the product or service offered by the franchise, (2) maintain system standards for the benefit and value of both the franchisor and the great majority of its franchisees who voluntarily comply with such standards, and (3) protect the consumer from unsafe or otherwise substandard outlets. IFA officials also told us that franchisor-imposed changes to the franchise relationship are in the nature of fine-tuning—such as adding a new menu item, initiating a new safety procedure, upgrading software, and the like— and do not affect the terms and conditions of the franchise agreement. In short, IFA officials said that while franchisors reserve decision-making power over a wide variety of matters during the course of the franchise relationship, that control is what creates value in the form of a uniform brand, market penetration, and customer loyalty—reasons why franchisees invest in the first place. The officials added that the franchisor’s control over network operations is addressed in the disclosure document that is provided to prospective franchisees before they enter into the franchise relationship. IFA officials told us that current pre-sale disclosure requirements strike the right balance between legitimate consumer protection and overregulation. The officials noted that pre-sale disclosure laws are the most effective means by which to ensure productive and successful franchise relationships. In particular, they believe that disclosures of (1) current and past litigation involving the franchise system and (2) the names and addresses of both current franchisees, as well as those franchisees who have left the system within the past fiscal year, should provide any franchise investor with the resources necessary to ascertain the prevalence of relationship issues in a particular franchise system. According to IFA officials, three primary concerns have guided members of the association in their decision to oppose federal and state franchise relationship legislation. Many duties and obligations contained in franchise relationship legislative proposals are undefined or ambiguous, which would create confusion and uncertainty in franchise relationships and touch off an unprecedented increase in litigation. This would result in increased operating costs for franchise companies, the majority of which are small businesses that are not in a position to absorb these additional costs. 
Franchising is a source of economic opportunity and empowerment for women, minorities, and future generations of small-business owners. Franchise relationship legislation would discourage franchise growth and, as a result, have a disproportionate impact on these groups. It is virtually impossible to craft a “one size fits all” solution to the wide variety of franchise business practices involving companies operating in about 75 different industries. There is no common “relationship” legislation that can practically and predictably apply to these many different industries, operating in many different geographical markets, and at many different levels of system maturity and market penetration. Regarding the latter, IFA officials explained that because franchising is not an industry—but rather a method of distributing goods and services that is utilized by about 75 different industries—”one size fits all” legislation such as the Small Business Franchise Act of 1999 (H.R. 3308) and similar franchise relationship proposals are impractical and unworkable. The officials noted that such legislation contemplates that all franchised concepts and all franchise relationships can be regulated with a uniform law. The officials added that this view of franchising is flawed because it fails to recognize the fundamental difference between business format franchising—a concept that is employed by many heterogeneous businesses operating in a wide variety of dissimilar industries—and other forms of product distribution that are utilized by a very few homogeneous businesses operating in a single industry (such as automobile dealers or petroleum marketers). For these reasons, among others, IFA officials believe that it is inappropriate to make comparisons between proposals to regulate business format franchising and laws that govern manufacturing and distribution relationships such as the Automobile Dealers Day in Court Act or the Petroleum Marketing Practices Act. The officials added that there are virtually no barriers to entry to creating a franchised business, and with very few exceptions, business format franchises do not manufacture products for redistribution by their franchisees. As a result, the franchise relationship is very different from manufacturer-dealer or distributor relationships. IFA officials told us that federal legislative proposals, such as the Small Business Franchise Act of 1999, cede too much power to the government and the courts to alter the intent of the parties that have entered into a contract. The officials added that allowing interference in the contract process would severely impair the interpretation of those agreements. The officials also told us that the “minimum standards of fair conduct” contained in legislative proposals would materially alter provisions of existing state law and reverse numerous decisions establishing common law rights and obligations. IFA officials believe that to the extent there are differences between parties in franchising, those differences should be resolved through expanded forms of self-regulation, such as the IFA Ombudsman program, the National Franchise Mediation Program, the IFA’s Franchise Basics and Franchise Sales Compliance educational programs, and the IFA Code of Ethics and enforcement mechanism.
Franchises are business arrangements that require payment for the opportunity to sell trademarked goods and services. Business opportunity ventures do not involve a trademark, but require payment for the opportunity to distribute goods or services with assistance in the form of locations or accounts. The Federal Trade Commission's (FTC) Trade Regulation Rule on Franchising and Business Opportunity Ventures (Franchise Rule) requires franchise and business opportunity sellers to disclose financial and other information to prospective purchasers before they pay any money or sign an agreement. In addition, FTC enforces section 5 of the FTC Act, which addresses unfair or deceptive acts or practices. Over the past several years, Congress has debated the need for a federal statute to generally regulate franchises, including issues that arise between franchisors and franchisees after the franchise agreement is signed. Much of the debate centers on the relative bargaining power franchisees have when dealing with their franchisors over various issues, such as the location of new franchised outlets or the termination of franchise relationships without good cause and advance, written notice. This report reviews FTC's enforcement of its Franchise Rule and discusses various franchise relationship issues. GAO found that FTC has focused most of its Franchise Rule enforcement resources on business opportunity ventures because, according to FTC staff, problems in this area have been more pervasive than problems with franchises. The extent and nature of franchise relationship problems are unknown because of a lack of readily available, statistically reliable data--that is, the data available are not systematically gathered or generalizable. Absent such data, opinions varied as to the need for a federal statute to regulate franchise relationships. If Congress believes it needs empirical data before considering franchise relationship legislation, it could commission a study that would (1) design and implement an approach for collecting empirical data on the extent and nature of franchise relationship problems and (2) examine franchisor and franchisee experiences with existing remedies for resolving disputes.
To assist in integrating state and federal responses to domestic emergencies, the Homeland Security Council developed 15 national planning scenarios in 2004 whose purpose was to form the basis for identifying the capabilities needed to respond to a wide range of emergencies. The scenarios focus on the consequences that federal, state, and local first responders may have to address, and they are intended to illustrate the scope and magnitude of large-scale, catastrophic emergencies for which the nation needs to be prepared. These include a wide range of terrorist attacks involving nuclear, biological, and chemical agents, as well as catastrophic natural disasters, such as an earthquake or hurricane. The Department of Homeland Security (DHS), which was established in 2002 to, among other purposes, reduce America’s vulnerability to terrorism, is the lead federal agency responsible for preventing, preparing for, and responding to a wide range of major domestic disasters and other emergencies. Then-President George W. Bush designated DHS and its Secretary as the lead federal representative responsible for domestic incident management and coordination of all-hazards preparedness. In 2008, DHS issued its National Response Framework, which provides guidance for federal, state, and local agencies to use in planning for emergencies and establishes standardized doctrine, terminology, processes, and an integrated system for federal response activities. Overall coordination of federal incident-management activities, other than those conducted for homeland defense, is generally the responsibility of DHS. Within DHS and as the executive agent for the National Preparedness System, FEMA is responsible for coordinating and integrating the preparedness of federal, state, local, tribal, and nongovernmental entities. Response to disasters or other catastrophic events in the United States is guided by the National Response Framework and is based on a tiered response to an incident; that is, incidents must be managed at the lowest jurisdictional levels and supported by additional response capabilities as needed (see fig. 1). Local and county governments respond to emergencies daily using their own resources and rely on mutual aid agreements and other types of assistance agreements with neighboring governments when they need additional resources. For example, county and local authorities are likely to have the resources needed to adequately respond to a small-scale incident, such as a local flood, and therefore will not request additional resources. For larger-scale incidents, when resources are overwhelmed, local and county governments will request assistance from the state. States have capabilities, such as the National Guard, that can help communities respond and recover. If additional resources are required, the state may request assistance from other states through interstate mutual aid agreements, such as the Emergency Management Assistance Compact. If an incident surpasses community and state capabilities, the governor can seek federal assistance. The federal government has a wide array of capabilities and resources that can be made available to assist state and local agencies to respond to incidents. In accordance with the National Response Framework and applicable laws, including the Robert T.
Stafford Disaster Relief and Emergency Assistance Act (Stafford Act), various federal departments or agencies may play primary, coordinating, or supporting roles, based on their authorities and resources and the nature of the threat or incident. In some instances, national defense assets may be needed to assist FEMA or another agency in the national response to an incident. Defense resources are committed following approval by the Secretary of Defense or at the direction of the President. One of DOD’s missions is civil support, which includes domestic disaster relief operations for incidents such as fires, hurricanes, floods, earthquakes, National Special Security Events (for example, the opening of the United Nations General Assembly, or the Democratic and Republican National Conventions), counterdrug operations, and consequence management for CBRNE events. As noted earlier, DOD is not the primary federal agency for such missions (unless so designated by the President) and thus it provides defense support of civil authorities only when (1) state, local, and other federal resources are overwhelmed or unique military capabilities are required; (2) assistance is requested by the primary federal agency; and (3) either NORTHCOM or PACOM, the two combatant commands with responsibility for civil support missions, is directed to do so by the President or the Secretary of Defense. When deciding to commit defense resources, among other factors, defense officials consider military readiness, appropriateness of the circumstances, and whether the response is in accordance with the law. For example, the Posse Comitatus Act allows military forces to provide civil support, but these forces generally cannot become directly involved in law enforcement. When they are called upon to support civil authorities, NORTHCOM and PACOM generally operate through established joint task forces that are subordinate to the command. In most cases, support will be localized, limited, and specific. When the scope of the disaster is reduced to the point where the primary federal agency can again assume full control and management without military assistance, NORTHCOM and PACOM will exit. DOD established the Office of the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs to oversee homeland defense and civil support activities for DOD, under the authority of the Under Secretary of Defense for Policy, and, as appropriate, in coordination with the Chairman of the Joint Chiefs of Staff. This office develops policies, conducts analysis, provides advice, and makes recommendations on homeland defense, defense support of civil authorities, emergency preparedness, and domestic crisis-management matters within the department. The Assistant Secretary assists the Secretary of Defense in providing policy directions to NORTHCOM and other applicable combatant commands to guide the development and execution of homeland defense plans and activities. This direction is provided through the Chairman of the Joint Chiefs of Staff. This office is also responsible for coordinating with DHS. While most of the National Guard’s roles and responsibilities in the disaster-response area are not federal ones, the Chief of the National Guard Bureau is a principal advisor to the Secretary of Defense on matters involving nonfederalized National Guard forces.
In this role, the National Guard Bureau provides NORTHCOM, PACOM, and other DOD organizations with information on National Guard capabilities available in the states for disaster response so that DOD can better anticipate what, if any, additional capabilities it may be asked to provide. The process whereby DOD provides capabilities to assist civil authorities has changed over the past 5 years. In 2004, a series of four hurricanes struck Florida, and DOD received a large number of civil requests-for-assistance that all had to be approved by the Secretary of Defense. DOD and others concluded that the process was time-consuming and complicated. To streamline the process, the Joint Staff developed operational guidance for DOD commands—referred to as an Execute Order—modeled after the Execute Order for Operation Noble Eagle, the North American Aerospace Defense Command’s activities to defend American skies begun in response to the September 11, 2001, terrorist attacks. A standing Defense Support of Civil Authorities Execute Order has been revised several times, but an important purpose has been to pre-identify forces that NORTHCOM and PACOM can request from the Secretary of Defense in the event of a disaster. The Execute Order places DOD capabilities into four categories. Category 1 comprises capabilities assigned to the combatant command (that is, the Defense Coordinating Officer and staff, service component command staff, command and control personnel, and communication capabilities). Category 2 comprises pre-identified capabilities, such as helicopters for rapid area assessments, C-130 aircraft that can refuel helicopters, and capabilities for search and rescue, that NORTHCOM and PACOM can place on 24-hour prepare-to-deploy status after notifying the Joint Chiefs of Staff and the Secretary of Defense. Category 3 comprises capabilities for DOD use (for example, combat camera or public affairs). Category 4 comprises large-scale response forces (rarely used except for large-scale disasters such as Hurricane Katrina). Finally, local installation and unit commanders have the authority to respond to localized events as requested by local civilian authorities. These responses, conducted under immediate response authority, do not normally exceed 72 hours and require notification of the relevant service commands as well as the Secretary of Defense. Additionally, local installations may establish mutual aid agreements for things such as fire and ambulance support with the communities surrounding their installations. NORTHCOM and PACOM are not involved in either of these responses. However, depending on the nature of the local incident, including the possibility of media involvement, NORTHCOM and PACOM may receive a spot report regarding the local incident as part of the process of informing DOD senior leadership. NORTHCOM is the unified military command responsible for planning, organizing, and executing DOD’s homeland defense and federal military support to civil authorities’ missions within the continental United States, Alaska, and U.S. territorial waters. PACOM has these responsibilities for the Hawaiian Islands and U.S. territories in the Pacific. Both combatant commands receive support from a variety of commands and organizations in their direct chain of command and throughout DOD. Table 1 shows examples of these commands.
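The four Execute Order categories described above function, in effect, as a routing scheme that determines how a validated capability request is sourced and what notification or approval it requires. The sketch below restates that routing in Python; the capability lists and approval strings are simplified assumptions drawn from the description above, not DOD doctrine or the text of the Execute Order.

    # Simplified, illustrative routing of a requested capability to the four
    # Execute Order categories described above. Capability lists and approval
    # paths are assumptions for illustration, not DOD guidance.
    CATEGORY_1 = {"defense coordinating officer and element", "command and control", "communications"}
    CATEGORY_2 = {"rapid-area-assessment helicopters", "c-130 helicopter refueling", "search and rescue"}
    CATEGORY_3 = {"combat camera", "public affairs"}
    CATEGORY_4 = {"large-scale response force"}

    def route_request(capability: str) -> str:
        cap = capability.lower()
        if cap in CATEGORY_1:
            return "Category 1: already assigned to the combatant command"
        if cap in CATEGORY_2:
            return ("Category 2: may be placed on 24-hour prepare-to-deploy status "
                    "after notifying the Joint Chiefs of Staff and the Secretary of Defense")
        if cap in CATEGORY_3:
            return "Category 3: capabilities for DOD use"
        if cap in CATEGORY_4:
            return "Category 4: large-scale response forces, reserved for catastrophic events"
        return "Not pre-identified: requires case-by-case approval by the Secretary of Defense"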
As part of the lessons learned from Hurricane Katrina, NORTHCOM has placed a Defense Coordinating Officer with associated support staff, known as the Defense Coordinating Element, in each of FEMA’s 10 regional offices, placing greater emphasis on the Defense Coordinating Officers’ mission. Figure 2 shows the 10 FEMA regions. Prior to October 1, 2006, the Defense Coordinating Officers had full-time jobs commanding training units for the First and Fifth Continental U.S. Armies. The Defense Coordinating Officers, along with their 40-person training staff, served part-time as Defense Coordinating Officers and only did so when requested by FEMA or another federal agency. Upon establishment of Fifth U.S. Army as the Army component to NORTHCOM, 10 full-time regional Defense Coordinating Officers were established and located in the FEMA regional offices. Defense Coordinating Officers are senior-level military officers (typically Army colonels) with joint experience and training on the National Response Framework, defense support of civil authorities, and DHS’s National Incident Management System. They are responsible for assisting the primary federal agency when requested by FEMA; they provide liaison support and requirements validation; and they serve as single points of contact for state, local, and other federal authorities that need DOD support. Defense Coordinating Officers work closely with federal, state, and local officials to determine what unique DOD capabilities can be used to assist in mitigating the effects of a natural or man-made disaster. Since FEMA region IX is split between NORTHCOM and PACOM, NORTHCOM has a Defense Coordinating Officer assigned to the FEMA regional office in California and PACOM has established two Defense Coordinating Officers within its area of operations. Currently, there is a Navy civilian Defense Coordinating Officer for Guam and the Northern Mariana Islands and a part-time Army Reserve Defense Coordinating Officer for Hawaii and American Samoa. Additionally, the military services have Emergency Preparedness Liaison Officers. These are senior Reserve officers (typically colonels or Navy captains) from the Army, Navy, Air Force, and Marine Corps who represent the federal military in each of the 10 FEMA regional offices and in the states and territories. While they have some service-specific responsibilities, Emergency Preparedness Liaison Officers’ civil support responsibilities include assisting the Defense Coordinating Officers with service subject-matter expertise and coordinating the provision of military personnel, equipment, and supplies to support the emergency relief and cleanup efforts of civil authorities. DOD planning documents for its civil support mission require that DOD maintain continuous situational awareness of its civil support operating environment by identifying shortfalls in capabilities, planning, exercising, and coordinating DOD efforts with its interagency partners. Further, in its Vision 2020 statement, NORTHCOM identifies a strategic goal of providing timely and effective civil support by anticipating requests for support and providing military capabilities at the right place and the right time. Accordingly, at the direction of the Deputy Secretary of Defense and in response to a request from the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs, NORTHCOM agreed to lead a department-wide, capabilities-based assessment for DOD’s homeland defense and civil support missions.
The strategic goals of the effort were to enable improvement in DOD homeland defense and civil support policy, evaluate existing DOD capabilities and identify DOD capability gaps, improve DOD’s integration with interagency mission partners, and recommend further action to promote future capability development for the homeland defense and civil support missions. The Deputy Secretary of Defense identified the capabilities-based assessment as one of DOD’s top 25 transformational priorities to be completed or advanced to a major milestone by December 2008. DOD conducted the assessment between September 2007 and October 2008 in accordance with DOD processes. DOD agencies, the combatant commands, the military services, the National Guard Bureau, DHS, and other key federal interagency partners participated in the assessment. The assessment did not include participants from state and local governments. The recently completed capabilities-based assessment identified 31 capability gaps for DOD’s homeland defense and civil support missions. The 31 capability gaps were derived from an initial identification of 2,192 capabilities, tasks, and statements of required activity that define and describe the homeland defense and civil support missions. According to our analysis, the assessment identified 14 capability gaps related to the civil support mission, 4 of which are CBRNE or law enforcement related, and 17 gaps related to the homeland defense mission or mission assurance function. The 10 civil support gaps related to natural disasters were: Common Operational Picture, Operational Intelligence Analysis and Dissemination, Information Management and Sharing, DOD Interagency Planning, DOD Interagency Operations, DOD Transportation Support, Mass Care Support, Assured Access to Electromagnetic Spectrum, Logistical Health Medical Support, and Isolation and Quarantine Support. The capabilities-based assessment was limited in that (1) the nature of its assumptions may have hidden other capability gaps and (2) DOD has not received precise information from civil authorities on the capabilities it will be asked to provide. First, one of the strategic assumptions guiding the capabilities-based assessment is that DOD will provide a total force (combined active and reserve component) response to support civil authorities for domestic emergencies and other activities as directed. However, as we have reported in prior work and raised as a matter for congressional consideration, DOD has no legal authority to order Reserve personnel to involuntary active duty service for the purpose of providing civil support in the response to a natural disaster, which may limit DOD’s ability to provide the capabilities requested by civil authorities in a timely manner. For example, according to U.S. Transportation Command officials, this lack of authority has made it difficult to access the personnel it needs to perform its civil support operations, especially since about 88 percent of DOD’s capabilities for aeromedical evacuation operations are assigned to the reserve component. U.S. Transportation Command officials said they have been able to rely on volunteers from the service Reserves to meet their civil support requirements thus far, but they noted that, in the event of multiple disaster requirements that overwhelm state capabilities, U.S. Transportation Command might not be able to provide the capabilities requested due to the lack of authority to order service Reservists to active duty service to respond to disasters. 
DOD officials we interviewed told us that the department has advocated a change to this legislative status, but that the states have opposed the change due to issues involving state sovereignty. Second, while the assessment provided a general discussion of the civil support capability shortfalls it identified, it concluded that a precise scope of many of these shortfalls could not be determined because several strategic policy questions remained unanswered. There is a lack of interagency understanding and agreement on the extent of capabilities requested by civil authorities that DOD is expected to provide, and on how quickly DOD is expected to provide them. For example, Emergency Support Function #8: Public Health and Medical Services Annex to the National Response Framework, requests that DOD provide support for evacuating seriously ill or injured patients, but it does not provide specifics on the amount of capabilities that DOD should provide, or the timeliness of DOD’s response for providing these capabilities. We previously reported that NORTHCOM has difficulty identifying requirements for capabilities it may need in part because NORTHCOM does not have more detailed information from DHS and the states on the specific requirements needed from the military in the event of a disaster. For DOD’s civil support mission, the requirements are established by the needs of the federal, state, and local agencies and organizations that DOD would be supporting in an actual event. In January 2008, the Commission on the National Guard and Reserves noted that DHS had not defined the requirements that DOD must meet to adequately perform its civil support mission. Several DOD officials we spoke with said that one of the biggest challenges in providing defense support of civil authorities is that civil authorities have not yet defined the capability requirements that DOD might be requested to provide in the event of a disaster. FEMA is responsible for establishing a comprehensive system to assess the nation’s prevention capabilities and overall preparedness. However, our prior work has shown that FEMA faces methodological and coordination challenges in completing the system and issuing required reports on national preparedness. DOD and DHS have undertaken some recent initiatives to address gaps in strategic planning that should assist DOD in identifying its capability requirements for the civil support mission. For example, during the course of our work, DOD and DHS were implementing the Integrated Planning System, which includes a process for fostering integration of federal, state, local, and tribal plans that allows for state, local, and tribal capability assessments to feed into federal plans. In conjunction with officials from federal, state, and local government as well as the private sector, DOD and DHS recently issued catastrophic plans for responding to and recovering from a category 4 hurricane in Hawaii. These plans were developed in accordance with the Integrated Planning System. DOD and FEMA officials in Hawaii with whom we spoke said that this was an important milestone because it represented the first time that DOD’s capability requirements had been identified and formally agreed to by interagency stakeholders. As another example, DHS has also established a Task Force for Emergency Readiness pilot initiative that seeks to integrate federal and state planning efforts for catastrophic events. 
Five states are currently participating in the initiative, and officials from the Office of the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs told us that the initiative should assist the states in identifying their capability requirements for catastrophic events, which in turn should assist DOD in determining the capabilities it may be asked to provide. As a third example, the National Guard Bureau recently completed an assessment of National Guard capabilities for domestic missions by conducting a series of regional war games. A major goal of the effort was to identify National Guard capability gaps and provide recommendations on how to address these gaps. DOD’s capabilities-based assessment highlighted a lack of alignment across DOD’s policies, strategy, and doctrine for its civil support mission, making it difficult to determine DOD’s capability requirements. We determined that this is due, in part, to outdated key policy directives. In many cases, DOD’s policy guidance does not reflect widely accepted terminology or the organizational structure that DOD has developed for providing assistance to civil authorities. For example, DOD Directive 3025.1, “Military Support to Civil Authorities,” which defines disaster response and outlines the responsibilities of the Joint Chiefs of Staff, Unified Commands, and other DOD components and military services that respond to a civil emergency, was issued in January 1993—almost 10 years prior to the establishment of NORTHCOM. DOD’s implementing guidance for this directive, 3025.1-M, “Manual for Civil Emergencies,” was issued in 1994 and DOD Directive 3025.15, “Military Assistance to Civil Authorities,” which establishes DOD policy for evaluating requests for disaster assistance, was issued in February 1997. This guidance further states that the Department of the Army is the DOD executive agent for military support to civil authorities, and is responsible for developing planning guidance, plans, and procedures on behalf of the Secretary of Defense. Since NORTHCOM’s creation, the 2008 Unified Command Plan and the Forces for Unified Command Memorandum state that both NORTHCOM and PACOM, through the Chairman of the Joint Chiefs of Staff, are responsible for providing support to civil authorities within their areas of responsibility. Moreover, a 2009 DOD directive, DOD Directive 5111.13, established the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs as the principal advisor to the Secretary of Defense for DOD’s civil support mission. The DOD policy directives are not aligned with DOD and national-level guidance in that they use outdated terminology. For example, the 1993 and 1997 DOD directives use the terms “military support” and “military assistance” to describe the types of support DOD provides to civil authorities, but DOD currently uses the term “defense support of civil authorities.” The latter term has been widely accepted by the defense community and is part of current strategy, doctrine, and plans, including the Strategy for Homeland Defense and Civil Support, as well as interagency documents, such as the National Response Framework. DOD is considering a new draft directive for defense support of civil authorities that will supersede the old policy directives and provide overarching policy guidance for its civil support mission. However, the draft directive has been under review for about 4 years and has yet to be finalized. 
According to officials from the Office of the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs, the draft directive has taken longer to finalize than expected because of the evolving nature of DOD’s civil support mission. These officials noted that defense support of civil authorities has been difficult to define because DOD’s civil support mission has shifted from a military service-centric to a more unified, joint effort, as exemplified by the establishment of NORTHCOM. The military services’ implementing guidance for DOD’s civil support mission, DOD 3025.1-M, is based on the DOD directives that were issued in 1993 and 1997, but DOD joint doctrine and planning documents reference the draft DOD directive. While DOD recognizes that there are circumstances in which new doctrine would influence policy, the normal progression is for policy to drive doctrine and thereby influence training and the conduct of operations. Thus, we note that incomplete DOD policy guidance for its civil support mission may lead to confusion and misunderstanding among the military services and other DOD components regarding the proper employment of defense capabilities in support of civil authorities. One of the chief examples of the confusion caused by DOD’s outdated policies and their lack of alignment with other published documents is the disparate perceptions of the components as to the importance of the civil support mission. According to the DOD homeland defense and civil support capabilities-based assessment, DOD strategy and joint doctrine recognize the department’s civil support mission, but DOD policy prohibits the DOD components from procuring or maintaining any supplies, materiel, or equipment exclusively for their civil support mission, unless otherwise directed by the Secretary of Defense. The capabilities-based assessment noted that some DOD components have interpreted this policy statement to signify that DOD does not program or budget for civil support capabilities. We found this view was prevalent among DOD officials we interviewed, even though DOD policy does not preclude DOD agencies from programming and budgeting for civil support capabilities—rather, it requires that they obtain direction from the Secretary of Defense to do so. Further, strategy and joint guidance also do not provide clarity about funding and priority of the civil support mission. The DOD Strategy for Homeland Defense and Civil Support states that DOD will maintain capabilities to assist civil authorities in responding to catastrophic incidents. However, while the strategy implies that DOD will program and budget for capabilities for responding to catastrophic incidents, it does not directly state this for the civil support mission. Additionally, Joint Publication 3-28, Civil Support, recognizes civil support as a DOD mission but states that civil support capabilities are derived from DOD warfighting capabilities that could be applied to domestic assistance and law enforcement support. The capabilities-based assessment concluded that lack of alignment across a range of policy, strategy, and doctrinal actions have made it difficult to develop and implement coherent recommendations regarding capabilities for DOD’s civil support mission. According to NORTHCOM and U.S. 
Transportation Command officials, these inconsistencies in policy, strategy, and doctrine and in DOD officials’ interpretation of them may limit DOD’s ability to pre-position forces and equipment for life-saving missions, such as aeromedical evacuations prior to a hurricane making landfall along the coastal United States. These officials cited the importance of pre-positioning forces, because aeromedical and patient evacuation operations are to be concluded no later than 18 hours before a major hurricane’s landfall. They said that it is difficult for DOD to spend money to alert the personnel who are needed to perform these missions. According to U.S. Transportation Command officials, DOD and FEMA have agreed on a prescripted mission assignment that would provide DOD with an estimated $986,388 in “surge” funding for these operations. However, U.S. Transportation Command officials said that additional funds are still needed to alert personnel and pre-position forces, and thereby ensure that they can perform the life-saving mission successfully. We also found that DOD has not fully exercised available funding authorities to support its civil support operations. Congress has established a Defense Emergency Response Fund to reimburse DOD for providing disaster or emergency assistance to other federal agencies and to state and local governments in anticipation of reimbursable requests. However, a June 2008 report from the DOD Inspector General found that DOD had not used any funds from this account for domestic disaster or emergency relief assistance since it was established in November 1989. An official from DOD’s Office of the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs acknowledged that the Defense Emergency Response Fund could be a source of funding but did not know why the fund has not been used for civil support operations. DOD guidance and the National Response Framework state that the Defense Coordinating Officer, when requested by civil authorities and approved by DOD, serves as the single point of contact for DOD at the FEMA regions, and is responsible for coordinating with federal and state authorities on the use of military capabilities for defense support of civil authorities. DOD Directive 3025.1 (1993), and the implementing guidance for this directive, 3025.1-M, “Manual for Civil Emergencies” (1994), define the roles and responsibilities of the Defense Coordinating Officers. According to this guidance, Defense Coordinating Officer responsibilities require knowledge of military capabilities and of how to access military assets to support validated requirements. As of 2006, DOD permanently assigned 10 full-time Defense Coordinating Officers, along with a full-time supporting staff known as the Defense Coordinating Element, to each FEMA region, and colocated all of them with the FEMA regional headquarters. FEMA officials we interviewed said that these actions have greatly improved coordination among DOD, FEMA, and other civil authorities; previously, they said, their understanding of DOD capabilities was limited because they had only infrequent contact with the Defense Coordinating Officers. 
These FEMA officials said that the Defense Coordinating Officers and Defense Coordinating Elements, especially the Defense Coordinating Element’s planners, have improved civilian authorities’ awareness of DOD’s capabilities by providing disaster planning expertise to civil authorities and by routinely participating in disaster exercises, planning conferences, and workshops throughout the FEMA regions. For example, they said, Defense Coordinating Officers have especially improved FEMA’s awareness of DOD’s logistical capabilities by informing FEMA about DOD installations and bases, located throughout the FEMA regions, that could be used as staging areas to pre-position commodities and supplies. Defense Coordinating Officers and Defense Coordinating Elements told us that having a full-time presence in the FEMA regions has allowed them to build effective relationships and establish trust with civil authorities. According to NORTHCOM officials, the Defense Coordinating Officers are a key means of gaining insight into civil authorities’ capabilities, thus assisting NORTHCOM in better anticipating civil support requirements. The military services’ Emergency Preparedness Liaison Officers assist the Defense Coordinating Officers in executing their civil support responsibilities. DOD Directive 3025.16, “Military Emergency Preparedness Liaison Officer (EPLO) Program” (2000), establishes DOD policy for the management of the Emergency Preparedness Liaison Officer program and creates additional points of contact within the military services for federal and state coordination of resources for emergency response. This policy directive states that the military services are responsible for ensuring that Emergency Preparedness Liaison Officers are trained and equipped to meet the requirements of DOD’s civil support mission. Additionally, DOD’s 3025.1-M, “Manual for Civil Emergencies,” establishes doctrinal procedures necessary for implementation of the Emergency Preparedness Liaison Officer program to provide civil support under DOD Directive 3025.1. It provides for the establishment of Emergency Preparedness Liaison Officer teams at the FEMA regions and states, and it defines the roles and responsibilities of the Emergency Preparedness Liaison Officers. Defense Coordinating Officers told us that the Emergency Preparedness Liaison Officers play a critical role in assisting them in day-to-day operations; in exercises that are designed to simulate a real-life disaster; and in disasters. For example, the Emergency Preparedness Liaison Officers routinely provide situational awareness at both the state and FEMA regional levels by participating in meetings, planning workshops, and conferences; by establishing relationships with federal and state disaster-management officials, including the National Guard; and by reviewing state and federal agency disaster plans. Several of the Defense Coordinating Officers told us that the Emergency Preparedness Liaison Officers are their key source of information on state capabilities. During exercises and actual disasters, the Emergency Preparedness Liaison Officers will deploy to the State Joint Force Headquarters, state emergency operation centers, Joint Field Offices, or FEMA’s Regional Response Coordination Centers and assist the Defense Coordinating Officer in validating requests-for-assistance. 
They provide the Defense Coordinating Officer with expertise on the capabilities that are available from their respective military services, and they serve as liaisons between the Defense Coordinating Officer and their military services, the federal agencies responsible for the Emergency Support Function activities, state emergency management officials, and National Guard officials. Almost all of the Defense Coordinating Officers indicated to us that the Emergency Preparedness Liaison Officers were important to a great or moderate extent for gaining knowledge of gaps in state disaster capabilities. DOD has not updated or clearly defined the roles and responsibilities of the Defense Coordinating Officers and Emergency Preparedness Liaison Officers that it has assigned to the FEMA regions, due to gaps in policy and guidance for its civil support mission. As we have previously stated, DOD has not updated its key policies and guidance for the civil support mission, namely DOD Directive 3025.1 (1993), or the implementing guidance for this directive, 3025.1-M, “Manual for Civil Emergencies.” This guidance continues to define the roles and responsibilities of the Defense Coordinating Officers and Emergency Preparedness Liaison Officers, even though significant changes have occurred in DOD’s command responsibilities and organizational structure for executing its civil support mission. Most notably, NORTHCOM and PACOM now have the responsibility for executing the civil support mission within their areas of responsibility, something not accounted for in the earlier guidance. Furthermore, DOD Directive 3025.16, DOD’s guidance for the Emergency Preparedness Liaison Officer program, has not been updated since 2000— about 2 years prior to the establishment of NORTHCOM. Since DOD has permanently assigned the Defense Coordinating Officers to the FEMA regions, their roles and responsibilities for the civil support mission have expanded, yet the existing guidance does not reflect their additional responsibilities. For example, DOD guidance defines the roles and responsibilities of the Defense Coordinating Officers only after they have been activated—even though Defense Coordinating Officers perform many activities prior to being activated, in an effort to assist NORTHCOM in anticipating civil support requirements. These activities may include establishing liaison among military, state, and other federal agencies; coordinating with service officials regarding the potential use of military service installations and bases for civil support operations; participating in federal, regional, state, and local disaster exercises, planning workshops, and conferences; and providing disaster planning expertise to civil authorities. In addition, according to a Defense Coordinating Officer we interviewed, the Defense Coordinating Officers will routinely provide assistance to civil authorities prior to being officially activated when it appears that a disaster declaration may be imminent. Further, DOD lacks guidance on how the Defense Coordinating Officers are to work with the Emergency Preparedness Liaison Officers for the civil support mission. DOD’s Joint Staff Defense Support of Civil Authorities Standing Execute Order identifies the Emergency Preparedness Liaison Officers as military service assets that may be activated by the military service Secretaries in response to a disaster. 
It also states that the Defense Coordinating Officer has tactical control of the Emergency Preparedness Liaison Officers requested by NORTHCOM. According to a NORTHCOM official, this operational framework is improvised as needed, and has not been included in any other DOD guidance. The command relationship between Defense Coordinating Officers and Emergency Preparedness Liaison Officers is therefore not clearly understood throughout the DOD organizations responsible for planning and executing civil support missions. These gaps in guidance that we have identified may limit the ability of the Defense Coordinating Officers and Emergency Preparedness Liaison Officers to fully and effectively coordinate and provide DOD capabilities to civil authorities. For example, according to several Defense Coordinating Officers we interviewed, service officials, and a DOD Inspector General September 2008 report, in some instances the military services have not been willing to activate their Emergency Preparedness Liaison Officers to participate in training and exercises with the Defense Coordinating Officers. Further, some military service officials told us that their Emergency Preparedness Liaison Officers are required to meet training and exercise requirements established by their military services, and these requirements can sometimes conflict with the training and exercise requirements identified by the Defense Coordinating Officers. DOD officials also told us that there has been friction and confusion between the military services and the Defense Coordinating Officers regarding the proper employment of the Emergency Preparedness Liaison Officers. For example, military service officials told us that Defense Coordinating Officers have attempted to exert command and control over their military service Emergency Preparedness Liaison Officers before they were officially activated. Although Defense Coordinating Officers and NORTHCOM officials said that the Defense Coordinating Officer and Emergency Preparedness Liaison Officer relationship has been generally cooperative, they noted that Emergency Preparedness Liaison Officers on occasion have not provided assistance when requested by the Defense Coordinating Officers. DOD officials told us that the command and control relationship between the Defense Coordinating Officers—who are nearly all Army personnel—and the Army’s Emergency Preparedness Liaison Officers is clearer, resulting in less friction. This is because the Army has delegated operational control over the Army Emergency Preparedness Liaison Officers to the Defense Coordinating Officers on a day-to-day basis. However, the other military services have not done so; prior to activation for an event or exercise, the Defense Coordinating Officers have only coordinating relationships with the Emergency Preparedness Liaison Officers from the other services. Figure 3 shows an organizational chart of the Defense Coordinating Officer and Emergency Preparedness Liaison Officer team. The command and control and coordination challenges we have described exist because the Emergency Preparedness Liaison Officers are under the operational command and control of their respective military services, while the Defense Coordinating Officers remain under the operational command and control of the combatant commands—NORTHCOM and PACOM. 
A 2008 report by the DOD Inspector General highlighted inefficiencies regarding coordination in DOD disaster training and exercises due, in part, to a lack of Emergency Preparedness Liaison Officer participation, and recommended that NORTHCOM determine whether the DOD 3025 series of directives provides adequate authority to Defense Coordinating Officers to ensure that DOD maintains an adequately trained and exercised Emergency Preparedness Liaison Officer program. In recognition of their critical role in planning, coordinating, and executing DOD’s civil support mission, NORTHCOM has attempted to establish standard requirements for the Emergency Preparedness Liaison Officers in the following seven general areas: organization and structure; roles and responsibilities; qualification, selection, and administration; equipping and resourcing; training and professional development; operations and command and control; and reporting. However, the military services have opposed this NORTHCOM initiative on the grounds that their Emergency Preparedness Liaison Officers have additional duties to their respective services aside from assisting the Defense Coordinating Officers. NORTHCOM officials maintain their view that, because of the lack of consistency in the military services’ training and equipment requirements for their Emergency Preparedness Liaison Officers, it cannot be determined whether these personnel are adequately trained and equipped to perform the civil support mission. Without updated and clear guidance on the roles and responsibilities of the Defense Coordinating Officers and the Emergency Preparedness Liaison Officers, friction and confusion between DOD commands and the services is likely to continue and potentially hamper the effectiveness of DOD’s civil support mission planning and preparedness.

The size and composition of the Defense Coordinating Officer program are not based on a staffing needs assessment and therefore do not necessarily reflect the unique characteristics or disaster needs of the individual FEMA regions. Disasters such as hurricanes, wildfires, and flooding occur in some regions more often than others. For instance, during fiscal years 2007 through 2009 there were only five disaster declarations throughout FEMA Region III, while there were 97 disaster declarations in Region VI. These events in Region VI represented nearly 25 percent of all disaster declarations nationwide for those 3 years. Figures 4 and 5 illustrate the combined relative risk of earthquakes and hurricanes across the United States. As figures 4 and 5 show, different FEMA regions are prone to different disasters, with some regions facing greater risk of catastrophic disasters than others; therefore, they may require different levels of personnel and types of expertise from DOD both in preparing for and responding to natural disasters. For example, one of the Defense Coordinating Officers told us that he could use more specialists, particularly in logistics and aviation.

Although DOD recognizes that its civil support mission requires a joint effort from all the military services, its Defense Coordinating Officer program continues to be staffed only by Army personnel, except for PACOM’s Navy Defense Coordinating Officer in Guam. Several DOD officials told us that the Defense Coordinating Officer program should be more reflective of the multiservice environment in which it operates.
However, as we have noted above, there is a lack of DOD guidance that delineates the roles and responsibilities of the Defense Coordinating Officers prior to their activation, including how they are to coordinate with the military services’ Emergency Preparedness Liaison Officers on emergency preparedness activities. A September 2008 DOD Inspector General report found that NORTHCOM has not obtained an equal and adequate level of effort from all the military services to jointly establish the Defense Coordinating Officer program, and recommended that the Chairman of the Joint Chiefs of Staff develop an implementation plan to migrate the staffing of Defense Coordinating Officer positions from the Army to all the military services and other DOD components, as appropriate. The Chairman of the Joint Chiefs of Staff concurred with the recommendation, and the Joint Chiefs plan to implement actions to address it by fiscal year 2010. A NORTHCOM official acknowledged to us that a jointly staffed Defense Coordinating Officer program would be a good idea, and said that NORTHCOM has discussed the proposal with the military services. The DOD Homeland Defense and Civil Support Joint Operating Concept states that civil support operations are inherently joint endeavors, and that changes in DOD concepts, policies, authorities, and organizations may be required to ensure an effective and integrated DOD response. Although DOD has improved its support of civil authorities through improvements in the Defense Coordinating Officer program, its outdated, inconsistent, and unclear guidance on roles, responsibilities, and command and control relationships and its lack of a staffing needs assessment increase the risk that DOD may not be appropriately staffed to meet the varying needs of the FEMA regions, thus potentially limiting its ability to provide an optimally coordinated response to civil authorities with appropriate multiservice capabilities.

The National Response Framework broadly calls for DOD and other federal agencies to respond to requests-for-assistance from state and local civilian authorities, and DOD follows an internal process to respond to these requests-for-assistance when both state and other federal civilian resources have been exhausted or are unavailable. How DOD handles these requests-for-assistance depends on various factors, such as whether the request is a Stafford Act or non-Stafford Act request; how much time has elapsed since the incident occurred; and the identity of the originator of the request.

DOD’s Joint Publication 3-28, Civil Support, lays out the department’s internal process for reviewing and sourcing—that is, providing military resources for—requests-for-assistance from other federal agencies. The process by which requests-for-assistance are handled is complex. The primary federal agency—usually FEMA, working in conjunction with the Defense Coordinating Officer and Defense Coordinating Element—will initiate the request-for-assistance. To validate the request, according to Joint Publication 3-28, the Defense Coordinating Officer should ensure that it is readily understandable and clearly describes the requirement or capability that is needed. If the Defense Coordinating Officer finds that the request-for-assistance calls for a specific asset rather than a capability, the response process will be lengthened as the officer and staff coordinate with the requesting agency to revise the request language.
Further, the Defense Coordinating Officer/Element must evaluate all requests based on the six criteria established in DOD’s Joint Publication 3-28, which are applied at all levels of DOD review. These criteria are as follows:
Cost: Who pays, and what is the effect on the DOD budget?
Appropriateness: Is the requested mission in the interest of DOD to conduct? Who normally performs this mission, and who may be better suited to fill the request?
Readiness: How does the request affect DOD’s primary warfighting mission?
Risk: Does it place DOD’s forces in harm’s way?
Legality: Is the request in compliance with laws and Presidential directives?
Lethality: Is the potential use of force by or against DOD forces expected?

The internal DOD request-for-assistance review and sourcing process is presented below in figure 6. This process takes place after local, state, and federal capabilities are exhausted or otherwise unavailable, as shown in the National Response Framework in figure 1. After the Defense Coordinating Officer validates the request-for-assistance, it is simultaneously forwarded, along with the Defense Coordinating Officer’s recommendation for action, to NORTHCOM’s Operations Center. The Joint Directors of Military Support at the Joint Staff are copied on the request so they can initiate parallel coordination and planning efforts. At this point, NORTHCOM coordinates with the appropriate supporting service commands, force providers, the National Guard Bureau, or any other federal or DOD stakeholder, depending on the nature of the incident and the requested capability. Once NORTHCOM reviews and approves the request, it goes to the Joint Directors of Military Support for approval before being sent up to the Chairman of the Joint Chiefs of Staff and the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs for policy review. Finally, the request-for-assistance is forwarded to the Secretary of Defense for his approval. Upon approval by the Secretary of Defense, the Joint Directors of Military Support will issue an Execute Order to designate a command structure and to task the appropriate commands, services, and DOD agencies to provide support.

NORTHCOM and DOD have developed two methods to expedite the request-for-assistance review and sourcing process. First, NORTHCOM has worked with FEMA and DOD officials to develop prescripted mission assignments, which describe sets of capabilities that civil authorities might need from DOD. The prescripted mission assignments are developed to provide a common understanding of a capability, and they also serve as templates for drafting mission assignments. Most of the Defense Coordinating Officers told us that they use the prescripted mission assignments to a great extent to execute their civil support mission. For example, several of the Defense Coordinating Officers found the prescripted mission assignments useful for outlining cost information or language as they prepared to write mission assignments. However, one Defense Coordinating Officer said their usefulness for expediting requests-for-assistance is limited, because the requests still have to go through the regular process. Second, the Joint Chiefs of Staff have developed the Defense Support of Civil Authorities Standing Execute Order, which pre-identifies forces that a supported combatant commander may use based upon historical requests for DOD assistance.
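To make the request-for-assistance review concrete, the sketch below models a hypothetical request record and a simple checklist evaluation of the six Joint Publication 3-28 criteria of the kind a Defense Coordinating Element might apply before forwarding a validated request to NORTHCOM; the field names, reviewer findings, and recommendation logic are illustrative assumptions, not an actual DOD system or data standard.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: models the six Joint Publication 3-28 review
# criteria as a checklist. Field names and logic are assumptions, not an
# actual DOD system.

CRITERIA = [
    ("cost", "Who pays, and what is the effect on the DOD budget?"),
    ("appropriateness", "Is the requested mission in DOD's interest to conduct?"),
    ("readiness", "How does the request affect DOD's primary warfighting mission?"),
    ("risk", "Does it place DOD's forces in harm's way?"),
    ("legality", "Is the request in compliance with laws and Presidential directives?"),
    ("lethality", "Is the potential use of force by or against DOD forces expected?"),
]

@dataclass
class RequestForAssistance:
    requesting_agency: str          # e.g., the FEMA region initiating the request
    capability_needed: str          # a capability, not a specific asset
    answers: dict = field(default_factory=dict)  # criterion -> reviewer's finding

def evaluate(request: RequestForAssistance) -> dict:
    """Apply each criterion and flag any that the reviewer has not addressed."""
    findings = {}
    for name, question in CRITERIA:
        findings[name] = request.answers.get(name, f"UNRESOLVED: {question}")
    unresolved = [n for n, v in findings.items() if str(v).startswith("UNRESOLVED")]
    findings["recommendation"] = (
        "Forward to NORTHCOM with recommendation" if not unresolved
        else f"Hold for coordination on: {', '.join(unresolved)}"
    )
    return findings

# Example: a request addressed on all six criteria is forwarded.
rfa = RequestForAssistance(
    requesting_agency="FEMA Region VI",
    capability_needed="Rotary-wing search and rescue",
    answers={
        "cost": "Reimbursable under a FEMA mission assignment",
        "appropriateness": "No civilian provider available in time",
        "readiness": "No impact on warfighting commitments",
        "risk": "Acceptable; crews trained for the environment",
        "legality": "Consistent with the Stafford Act request",
        "lethality": "No use of force expected",
    },
)
print(evaluate(rfa)["recommendation"])
```

The point of the sketch is only that each criterion must be explicitly addressed before a request moves up the chain; in practice the review involves legal, policy, and operational judgment at every level of DOD.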
Many of the Defense Coordinating Officers said the Standing Execute Order is an important piece of guidance, because it identifies the DOD capabilities that are most readily available to assist civil authorities during an event. However, if the requested item is not listed in the Execute Order, the request must be channeled through the standard internal DOD request-for-assistance process, so it will take longer. According to DOD officials, in some emergency cases, DOD allows for the immediate activation of certain assets on vocal orders from the Secretary of Defense, with the regular process completed later.

While DOD has developed a process to respond to requests-for-assistance and has published a partial description of that internal process as part of an annex to the National Response Framework, the lead civilian authorities may not be fully aware of its details or length. For example, service and Defense Coordinating Element officials told us that their biggest challenge is responding to incidents in which civil authorities, because of unrealistic expectations about DOD response times, request assistance too late for DOD to respond. An official at NORTHCOM concurred, saying that the only situations in which NORTHCOM cannot respond are those for which the request comes too late. Further, FEMA’s Liaison Officer to NORTHCOM acknowledged that FEMA officials do not recognize how lengthy the DOD review and sourcing process is. According to several service and Defense Coordinating Element officials, civil authorities have the perception that DOD can respond immediately to a request; they do not realize that it takes time to identify, activate, and deploy military units in response to a request-for-assistance. This perception can be especially dangerous when aeromedical evacuation of patients is needed in advance of a hurricane’s landfall. These patients have special medical needs, and a crew of specially trained nurses and physicians must be assembled to care for them. A U.S. Transportation Command official told us that requests for aeromedical evacuation assistance must be made early, as it takes at least 72 hours to activate the personnel with the skills needed to execute this mission. Further, this official stated that these crews can safely operate no later than 18 hours before a hurricane makes landfall. DOD’s capabilities-based assessment for homeland defense and civil support identified the response timeliness of DOD transportation support—including aeromedical evacuation—as a capability shortfall. The assessment noted that although civil authorities have identified a need for DOD transportation support within 24 hours of a catastrophic incident, DOD has limited capability to respond sooner than 72 hours after the incident. A NORTHCOM official suggested that educating state decision makers (i.e., governors and state emergency management officials) about DOD’s response times and processes may help expedite their disaster declaration process so that NORTHCOM can respond before it is too late to do so. Without shared, comprehensive guidance outlining DOD’s internal review and sourcing process, state and federal decision makers may overestimate the speed of DOD’s response and therefore not request assistance in a timely manner. DOD could help to mitigate this issue by incorporating its internal processes for responding to requests-for-assistance in the partner guide that we recommended in a recent report.
Doing so would provide DOD’s interagency partners with information on the complexity of its internal review and sourcing process for civilian requests-for-assistance.

While DOD has developed a Web-based system to track incoming requests-for-assistance from civilian authorities, this system is not comprehensive and is not accessible to all of DOD’s interagency partners. During Hurricane Katrina, DOD was unable to efficiently manage or track a large number of requests-for-assistance. Following Hurricane Katrina, U.S. Army North developed the DOD Defense Support of Civil Authorities Automated Support System (the tracking system) to monitor the approval, sourcing, cost, and progress of requests-for-assistance from FEMA. NORTHCOM approved the tracking system in March 2007. According to a 2008 DOD Inspector General report, the tracking system should enable DOD users to monitor the approval, sourcing, and progress of civilian requests-for-assistance. Some Defense Coordinating Officers, Defense Coordinating Elements, and service officials agree about the need for a tracking system, and others recognize benefits provided by the current system. However, we have identified gaps in the tracking system’s ability to maintain a common operational picture and provide real-time situational awareness. Furthermore, the current system is not an official DOD program to track civilian requests-for-assistance. Its use is voluntary; there are no requirements mandating that requests-for-assistance and associated information be entered into the system. DOD officials indicated to us that the system is available to all DOD components and interagency partners who request and are granted access. While PACOM and NORTHCOM have agreed to use the system and require their components to use it, the DOD force providers—Joint Forces Command and its components (such as Air Combat Command and Marine Forces Command)—are not utilizing the sourcing section of the system. Instead, Joint Forces Command and its components use classified systems, such as Global Force Management and the Joint Capability Requirements Manager, to resource their civil support requirements. Those systems are not compatible with the unclassified tracking system. When asked about the DOD Defense Support of Civil Authorities Automated Support System, officials at Air Combat Command told us that they were unaware of its existence. Additionally, service and Defense Coordinating Element officials noted that information is not always entered into the system accurately, thus limiting the system’s utility. Further, a Defense Coordinating Officer told us that the architects of the current system did not ask civil support stakeholders what they thought should be included in a request-for-assistance tracking system.

During the course of our audit work, we found that other DOD information technology systems have the potential to enhance situational awareness and provide a common operating picture for both DOD and the civilian authorities it is assisting. For example, Air Force North has developed the unclassified Defense Support for Civil Authorities Collaboration Suite for its Emergency Preparedness Liaison Officers.
While this Air Force system can perform all of the same functions as the current unclassified tracking system, it ties in additional features to provide a single information collaboration system, such as a section noting available capabilities at each Base Support Installation; all state emergency management points of contact; anticipated requests-for-assistance based upon lessons learned and historical requests; a Google Earth section that maps weather and the locations of Air Force bases; and a section showing “shared situational awareness,” including threat assessments and continuous updates of current operations. Similarly, PACOM’s Joint Task Force-Homeland Defense has leveraged the All Hazards Decision Support System, an unclassified system developed by the Pacific Disaster Center. This system uses geospatial mapping and modeling capabilities to identify locations and critical areas of vulnerability for potential disasters. In addition, the Pacific Disaster Center’s system provides a common operating picture by allowing interoperability among agencies, and it is accessible to all stakeholders in the disaster-management community. Further, despite recommendations in the April 2009 DOD Information Sharing Implementation Plan regarding the establishment of authentication and access standards across unclassified systems to allow DOD and its external mission partners to achieve an appropriate level of access to information concerning civil support operations, the DOD Defense Support of Civil Authorities Automated Support System does not provide a common operating picture for DOD and the lead civilian agencies. That is because the system is an internal NORTHCOM system and not a DOD-wide program, and attempts to link the system with those in other agencies, such as FEMA, have been unsuccessful in terms of interoperability. Therefore, FEMA and the other lead federal agencies, such as the U.S. Secret Service, do not necessarily have visibility into the system. According to FEMA officials, that lack of visibility constitutes a major shortfall in FEMA’s ability to see the status of its requests. Finally, although there should be situational awareness among DOD and its interagency partners, DOD has acknowledged in its homeland defense and civil support capabilities-based assessment that such situational awareness is lacking. DOD’s Defense Support of Civil Authorities Automated Support System is not comprehensive; it includes only those requests-for-assistance issued to DOD by FEMA and the National Interagency Fire Center. The system does not include all requests issued by the other federal agencies that have lead roles in specific cases. For example, the DOD Defense Support of Civil Authorities Automated Support System did not include requests-for-assistance from the U.S. Secret Service—the lead agency for pre-planned National Special Security Events—for the annual United Nations General Assembly, the 2008 Presidential Nominating Conventions, or the 2009 G-20 Summit. In September 2009, DOD was tasked to provide air support, bomb detection, search and rescue, and medical assistance to support the Secret Service for the G-20 Summit held in Pittsburgh, Pennsylvania. However, the current tracking system contained no record of this request. 
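To illustrate the kind of unclassified information a comprehensive, DOD-wide tracking system shared with interagency partners would need to capture, the sketch below defines a minimal, hypothetical request-for-assistance record and a simple status view a partner agency could query; the field names, status values, and example entries are illustrative assumptions, not the data model of the DOD Defense Support of Civil Authorities Automated Support System or any other existing program.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import List, Optional

# Hypothetical sketch of an unclassified request-for-assistance tracking
# record; names and statuses are illustrative assumptions only.

class Status(Enum):
    SUBMITTED = "submitted"
    VALIDATED = "validated by Defense Coordinating Officer"
    SOURCING = "sourcing in progress"
    APPROVED = "approved by Secretary of Defense"
    EXECUTING = "execute order issued"
    CLOSED = "closed"

@dataclass
class TrackedRequest:
    request_id: str
    requesting_agency: str            # e.g., FEMA, U.S. Secret Service
    event: str                        # disaster or National Special Security Event
    capability_needed: str
    status: Status
    sourcing_command: Optional[str]   # the force provider, once identified
    last_updated: datetime

def common_operating_picture(requests: List[TrackedRequest], agency: str) -> List[str]:
    """Return a simple status view an interagency partner could query."""
    return [
        f"{r.request_id}: {r.capability_needed} - {r.status.value} "
        f"(sourced by {r.sourcing_command or 'TBD'})"
        for r in requests
        if r.requesting_agency == agency
    ]

# Example: FEMA checks the status of its open requests.
requests = [
    TrackedRequest("RFA-0001", "FEMA", "Hurricane response", "Aeromedical evacuation",
                   Status.SOURCING, None, datetime(2009, 9, 1, 14, 30)),
    TrackedRequest("RFA-0002", "U.S. Secret Service", "G-20 Summit",
                   "Explosive detection teams", Status.EXECUTING,
                   "Joint Forces Command", datetime(2009, 9, 20, 9, 0)),
]
for line in common_operating_picture(requests, "FEMA"):
    print(line)
```

Even a shared schema as simple as this would give a requesting agency visibility into whether its request had been validated, sourced, and approved, which is the kind of real-time status information FEMA officials told us they currently lack.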
Without a comprehensive, unclassified system that tracks requests-for-assistance from, and is shared with, all of DOD’s interagency partners, gaps will remain in gaining real-time situational awareness and in maintaining a common operational picture of DOD’s assistance for all participants involved in disaster-response missions.

DOD, through both NORTHCOM and PACOM, has taken concrete steps to develop and enhance its defense support of civil authorities mission in such ways as conducting an assessment of the DOD capabilities needed to assist civil authorities and designating full-time personnel to coordinate with federal, state, territorial, tribal, and local civil authorities. These efforts improve DOD’s overall ability to assist federal, state, and local authorities in the shared responsibility of responding to natural disasters in the United States. But this improvement has been limited by outdated and inconsistent DOD policies, guidance, and doctrine pertaining to the civil support mission. Unless and until these issues are addressed, challenges will remain in the ability of DOD commands and personnel, specifically the Defense Coordinating Officers and their staffs, to provide the support requested by civil authorities during disasters. Without clear roles, responsibilities, effective command and control structures, shared guidance, and an assessment of DOD staffing needs in the FEMA regions, DOD will be missing an opportunity to further enhance its ability to support civil authorities with the kind of coordinated and integrated civilian and military response to disasters that is intended by the National Response Framework. While DOD can address policy and guidance issues, there are obstacles over which it has no control, such as a statutory restriction on DOD’s authority to order Reserve personnel to involuntary active duty service for catastrophic disaster relief, which we raised as a matter for congressional consideration in 1993 and again in 2006. We continue to believe that this statutory restriction impedes DOD’s ability to respond to and assist civilians during catastrophic natural disasters. To some degree, DOD will always face challenges and risks in this mission area because it has to be prepared for a wide variety of incidents that can range from a regional flood to a catastrophic tsunami or hurricane, while maintaining focus on its warfighting mission. However, DOD can make further improvements to mitigate these challenges and facilitate and strengthen its relationships with federal, state, territorial, tribal, and local civil authorities.

To improve DOD’s ability to conduct its civil support missions, we recommend that the Secretary of Defense take the following five actions: Direct the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs to update DOD policy and guidance for civil support (i.e., DOD directive and instruction 3025 series) to reflect current doctrine, terminology, funding policy, practices, and DOD’s organizational framework for providing civil support, to include clarifying NORTHCOM and PACOM roles and responsibilities for civil support missions; and establish time frames for completion.
Direct the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs, in coordination with the Chairman of the Joint Chiefs of Staff, to: clarify roles and responsibilities, including command and control relationships for the Defense Coordinating Officers, Defense Coordinating Elements, and Emergency Preparedness Liaison Officers; identify the extent to which NORTHCOM and PACOM should set training and equipping requirements for the Defense Coordinating Officers, Defense Coordinating Elements, and Emergency Preparedness Liaison Officers; and conduct a review of staffing requirements for the Defense Coordinating Officers, Defense Coordinating Elements, and Emergency Preparedness Liaison Officers in both the NORTHCOM and PACOM areas of responsibility that includes but is not limited to an assessment of staff size, subject-matter expertise, and military service composition by FEMA region. Direct the Joint Staff in coordination with the Assistant Secretary of Defense for Networks and Information Integration / Chief Information Officer to identify and establish an official, DOD-wide, unclassified tracking system for all incoming requests-for-assistance from federal agencies regarding civil support missions. This system should at a minimum include: requirements and guidance to ensure that the system is comprehensive and captures request-for-assistance data that can be used to anticipate civil support requirements; access for FEMA and other lead federal agencies, to provide them with real-time situational awareness; and time frames for the system’s development and implementation. In comments on a draft of this report, DOD agreed with our recommendations and discussed some of the steps it is taking and planning to take to address these recommendations. DOD also provided technical comments, which we have incorporated into the report where appropriate. DHS and FEMA did not provide comments on this report. In response to our recommendation that DOD clarify roles and responsibilities, including command and control relationships, and identify the extent to which NORTHCOM and PACOM should set training and equipping requirements for the Defense Coordinating Officers, Defense Coordinating Elements, and Emergency Preparedness Liaison Officers, DOD said that new guidance is in coordination to describe roles and responsibilities for DOD entities for homeland defense and civil support. Further, DOD said that NORTHCOM is reviewing the staffing, training, and equipment requirements for the Defense Coordinating Elements in each FEMA region. However, it was unclear from DOD’s comments whether and how the Emergency Preparedness Liaison Officers’ roles, responsibilities, training and equipment requirements will be addressed in the new issuance or in the NORTHCOM review. We continue to believe the inclusion of the Emergency Preparedness Liaison Officers in these efforts is important to enhance DOD’s ability to support civil authorities with the kind of coordinated and integrated civilian and military response to disasters that is intended by the National Response Framework. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Defense, the Secretary of Homeland Security, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-5431 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

To address the extent to which the Department of Defense (DOD) (1) has identified and addressed its capability gaps for its civil support mission, (2) has clearly defined roles, responsibilities, and relationships and identified appropriate levels and types of personnel to assign to the FEMA regions, and (3) shares and tracks information concerning its civil support requirements response process with civil authorities, we reviewed and analyzed available DOD, U.S. Northern Command (NORTHCOM), and U.S. Pacific Command (PACOM) civil support guidance and 4 of the 20 civil support operational plans, as well as DOD’s March 2009 Homeland Defense and Civil Support Capabilities-Based Assessment. To address all of our objectives, we compared the DOD civil support guidance and policies currently in place to the relevant DOD doctrine, which, when compared with anecdotal evidence provided by DOD and civilian officials, allowed us to identify the various policy and guidance issues raised in the report and their associated operational effects.

To examine the extent to which DOD has identified and addressed its civil support capability gaps, we reviewed DOD’s March 2009 Homeland Defense and Civil Support Capabilities-Based Assessment and held discussions with NORTHCOM and other DOD officials about how the assessment was conducted, how NORTHCOM identified relevant capabilities, and how NORTHCOM and DOD plan to use the assessment in the future. We met with knowledgeable officials across a range of DOD offices and commands, as illustrated in table 2. At these meetings, we held discussions about the work and analysis that DOD has conducted in order to understand what forms of support civilian authorities may ask the department to provide during a catastrophic incident. We also held discussions with these officials about the policies and guidance that exist to provide structure to DOD’s civil support mission set. Further, officials in these offices provided us with information on the day-to-day roles and responsibilities that are a part of the civil support mission as they work to prepare to support civil authorities for a wide range of potential disasters.

We met with FEMA officials at both the national and regional levels to understand how they work with DOD in identifying capability gaps during planning stages and how they channel state and federal requests-for-assistance to DOD during an actual incident. They discussed with us the evolution of the FEMA-DOD relationship, as well as relationships between DOD officials and state and local civil authorities. Table 3 shows the federal civilian offices and agencies with whom we met. In the course of our audit work we visited four FEMA regions (FEMA regions III, IV, VII, and IX) that were selected because they deal with a range of National Special Security Events such as the Olympics, political conventions, and the Super Bowl, as well as a variety of natural disasters including hurricanes, earthquakes, wildland fires, and floods.
During our visits to these FEMA regions we not only met with FEMA officials, but with the Defense Coordinating Officers and their staff in those regions to discuss their role as DOD’s representatives to FEMA, other civilian authorities, and other military officials (including the National Guard) in their assigned states and regions. They provided us with anecdotal and documentary evidence on their roles, responsibilities, and relationships in their respective regions. When they were available, we also met with some of the Emergency Preparedness Liaison Officers, who are military service representatives. Specifically, we met with an Army Emergency Preparedness Liaison Officer in Region III, one Emergency Preparedness Liaison Officer from the Army, one from the Air Force, and one from the Navy in Region IV, and one Emergency Preparedness Liaison Officer from each of the four services in Region IX. Subsequent to our meetings with DOD, FEMA, and other federal civilian officials, we reviewed the guidance, policies, and other documentation we obtained from them and compared it with the anecdotal information that those officials shared with us during our meetings in support of all of our objectives. We noted discrepancies and areas of concern, then followed up with military and civilian officials as appropriate. Additionally, we reviewed previous GAO and DOD Inspector General reports to identify what, if any, progress and changes had occurred in the area of defense support of civil authorities over the last several years, specifically since Hurricane Katrina in 2005. Following our visits to Defense Coordinating Officers in four of the FEMA regions, we decided to contact the Defense Coordinating Officers in all 10 FEMA regions to obtain a nationwide perspective of our objectives. In order to obtain detailed information about the extent to which DOD has identified and addressed its capability gaps for its civil support mission; identified and defined roles, responsibilities, and relationships of personnel assigned to the FEMA regions; and shares and tracks information concerning its civil support requirements response process with civil authorities, we developed a structured questionnaire and sent it to all 12 Defense Coordinating Officers assigned to the PACOM and NORTHCOM areas of responsibility. The questionnaire included a variety of questions, covering issues ranging from the guidance the Defense Coordinating Officers use to execute their civil support mission to the methods and mediums (such as regional exercises or planning conferences) they use to identify capability gaps in their region. The questionnaire also asked what challenges, if any, the Defense Coordinating Officers face when anticipating and responding to requests-for-assistance and in identifying capability gaps at both the federal and state levels. Since we intended to survey the universe of Defense Coordinating Officers at PACOM and NORTHCOM, our survey was not a sample survey and therefore had no sampling errors. However, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question, sources of information available to respondents, or entering data into a database or analyzing them can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, collecting the data, and analyzing them to minimize such nonsampling errors. 
For example, a social science survey methodologist helped design the questionnaire in collaboration with GAO staff who had subject-matter expertise. The questionnaire was also reviewed by an independent GAO survey specialist. The survey asked a combination of questions that allowed for open-ended and close-ended responses. We pretested the content and format of the questionnaire with two Defense Coordinating Officers to ensure that the questions were relevant, clearly stated, and easy to understand. During the pretests, we asked questions to determine whether (1) the survey questions were clear, (2) the terms we used were precise, (3) the questionnaire did not place an undue burden on the respondents, and (4) the questions were unbiased. We received input on the survey and made changes to the content and format of the final questionnaire based on our pretest results. Since there were relatively few changes based on the pretests and we were conducting surveys with the universe of respondents—all PACOM and NORTHCOM Defense Coordinating Officers—we did not find it necessary to conduct additional pretests. Data analysis was conducted by a GAO data analyst working directly with GAO staff with subject-matter expertise. A second independent analyst checked all of the computer programs for accuracy.

Following this extensive work on developing a questionnaire to collect data in a standardized and structured manner, we sent the questionnaire by e-mail on October 8, 2009, in an attached Microsoft Word form that respondents could return electronically after marking checkboxes or entering narrative responses into open-answer boxes. Alternatively, respondents could return the survey by mail after printing the form and completing it by hand. Both PACOM Defense Coordinating Officers returned the completed surveys to GAO electronically. However, NORTHCOM Defense Coordinating Officers were told by their command leadership not to send the completed surveys to GAO, but instead to route them through the NORTHCOM headquarters Inspector General. Since this position both posed considerable methodological problems for the integrity of the data we wanted to analyze and would not allow for anonymity and transparency in responses, we instead elected to conduct structured interviews with all 10 NORTHCOM Defense Coordinating Officers individually over the phone using the same questionnaire, to promote candid discussions that might not have been obtained through a NORTHCOM screening process.

We combined the information gathered from the telephonic interviews and analyzed the frequency and distribution of marked checkbox responses. We also analyzed the open-ended narrative responses for trends and recurring themes. For instance, although we did not directly ask about the extent to which personnel coordinating DOD’s civil support mission are joint, several Defense Coordinating Officers said that the Defense Coordinating Officer and Emergency Preparedness Liaison Officer programs were not joint and that this made their work more challenging than it needed to be. When the Defense Coordinating Officers were not in agreement or had different perspectives on issues, we summarized conflicting responses to illustrate the complexity of the Defense Coordinating Officers’ mission and the unique challenges found in each region.
For example, some Defense Coordinating Officers told us they were sufficiently staffed with their current personnel, while others said they badly needed more staff to assist them with their mission and to engage with the states within their regions. We compiled this information and used it in conjunction with the interviews from the four FEMA region visits, our meetings with DOD and FEMA officials, and our review of documents and guidance to identify areas for improvement in DOD’s ability to provide support to civil authorities and respond to requests-for-assistance.

We conducted this performance audit from January 2009 to March 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, William O. Jenkins, Jr., Director, Homeland Security and Justice; Lorelei St. James, Acting Director; Joseph W. Kirschbaum, Assistant Director; Nicholas Benne; Grace Coleman; Michael Hanson; David Lysy; Lonnie J. McAllister; Eric E. Petersen; Terry Richardson; Bethann E. Ritter; Wesley Sholtes; Cheryl Weissman; and Jena Whitley made key contributions to this report.
In addition to its primary mission of warfighting, the Department of Defense (DOD) plays an important role in civil support. Four years after the poorly coordinated national response to Hurricane Katrina, issues remain about DOD's progress in identifying its capability requirements for supporting a coordinated civilian-military response to a catastrophic domestic event. This report addresses the extent to which DOD (1) has identified and addressed its capability gaps for its civil support mission; (2) has clearly defined roles, responsibilities, and relationships and identified appropriate levels and types of personnel to assign to the FEMA regions; and (3) shares and tracks information concerning its civil support requirements response process with civil authorities. To do this, GAO analyzed DOD civil support guidance and plans and met with DOD and FEMA officials regarding the support that civilian authorities may request during a catastrophic incident. DOD has identified capability gaps for its civil support mission by completing a capabilities-based assessment, but key DOD policies and guidance for the civil support mission are outdated, limiting DOD's ability to fully address capability gaps. DOD's strategic guidance requires that it anticipate requests for civil support by identifying capability gaps. However, inconsistency and misalignment across DOD's policies, strategy, and doctrine for civil support make it difficult for DOD to address capability gaps and pre-position equipment and supplies. GAO found this was due to outdated key DOD policies and guidance that do not reflect DOD's current organizational framework for providing assistance to civil authorities. If DOD updates key policies for civil support, it will be better able to address capability gaps and provide timely and appropriate support to civil authorities. DOD has increased its personnel dedicated to coordinate civilian requests for assistance, but it has not clearly defined their roles, responsibilities, and relationships, and its staffing is not based upon a staffing assessment by FEMA region. DOD guidance calls for coordination with federal and state authorities on military capabilities for civil support. However, while the Defense Coordinating Officer program has improved civil authorities' overall awareness of DOD's capabilities, roles, and responsibilities, command and control and coordination among the Defense Coordinating Officers and the military services' liaison officers have been confusing and sometimes problematic because DOD's civil support guidance is outdated. Further, DOD officials noted that staffing of the Defense Coordinating Officer program should reflect its multiservice environment and the unique challenges of each FEMA region. Different FEMA regions are prone to different disasters and have varying needs for DOD support, but the size and composition of the Defense Coordinating Officers' staff--nearly all from the Army--were not based on a staffing needs assessment. Therefore, they do not necessarily reflect variations in the support needs of the regions. As a result, DOD may be missing an opportunity to optimize its ability to provide a coordinated response to civil authorities with appropriate multiservice capabilities. While DOD follows established processes in responding to requests for assistance from civil authorities, it has not established a system to track civilian requests that is accessible to DOD's interagency partners. 
The National Response Framework broadly identifies how DOD responds to requests for assistance, and DOD guidance further specifies DOD's processes. However, civil authorities are not fully aware of the length of this process. While DOD has several different tracking systems in use by different DOD components for the civil support mission, it lacks a formal, interoperable, and unclassified system for tracking all requests for assistance across DOD. Without such a system, gaps will remain in gaining real-time situational awareness and in maintaining a common operational picture of DOD support for all partners involved in disaster-response missions, including DOD itself.
The radio-frequency spectrum is the part of the natural spectrum of electromagnetic radiation lying between the frequency limits of 9 kilohertz and 300 gigahertz. It is the medium that makes possible wireless communications and supports a vast array of commercial and governmental services. Commercial entities use spectrum to provide a variety of wireless services, including mobile voice and data, paging, broadcast television and radio, and satellite services. Additionally, some companies use spectrum for private tasks, such as communicating with remote vehicles. Federal, state, and local agencies also use spectrum to fulfill a variety of government missions. For example, state and local police departments, fire departments, and other emergency services agencies use spectrum to transmit and receive critical voice and data communications, and federal agencies use spectrum for varied mission needs such as national defense, law enforcement, weather services, and aviation communication.

Spectrum is managed at the international and national levels. The International Telecommunication Union (ITU), a specialized agency of the United Nations, coordinates spectrum management decisions among nations. Spectrum management decisions generally require international coordination, since radio waves can cross national borders. Once spectrum management decisions are made at the ITU, regulators within each nation, to varying degrees, will follow the ITU decisions. In the United States, responsibility for spectrum management is divided between two agencies: FCC and NTIA. FCC manages spectrum use for nonfederal users, including commercial, private, and state and local government users, under authority provided in the Communications Act. NTIA manages spectrum for federal government users and acts for the President with respect to spectrum management issues. FCC and NTIA, with direction from the Congress, jointly determine the amount of spectrum allocated to federal and nonfederal users, including the amount allocated to shared use. Figure 1 shows the current allocation of spectrum between federal and nonfederal users.

Historically, concern about interference or crowding among users has been a driving force in the management of spectrum. FCC and NTIA work to minimize interference through two primary spectrum management functions—the “allocation” and the “assignment” of radio spectrum. Specifically:

Allocation involves segmenting the radio spectrum into bands of frequencies that are designated for use by particular types of radio services or classes of users. For example, the frequency bands between 88 and 108 megahertz (MHz) are allocated to FM radio broadcasting in the United States. In addition to allocation, spectrum managers also specify service rules, which include the technical and operating characteristics of equipment.

Assignment, which occurs after spectrum has been allocated for particular types of services or classes of users, involves providing users, such as commercial entities or government agencies, with a license or authorization to use a specific portion of spectrum. FCC assigns licenses for frequency bands to commercial enterprises, state and local governments, and other entities, while NTIA makes frequency assignments to federal agencies.

In some frequency bands, FCC authorizes unlicensed use of spectrum—that is, users do not need to obtain a license to use the spectrum. Rather, an unlimited number of unlicensed users can share frequencies on a non-interference basis.
Thus, the assignment process does not apply to the use of unlicensed devices. However, manufacturers of unlicensed equipment must receive authorization from FCC before operating or marketing an unlicensed device. When FCC assigns a portion of spectrum to a single entity, the license is considered exclusive. When two or more entities apply for the same exclusive license, FCC classifies these as mutually exclusive applications— that is, the grant of a license to one entity would preclude the grant to one or more other entities. For mutually exclusive applications, FCC has primarily used the following three assignment mechanisms. Comparative hearings were quasi-judicial forums in which competing applicants argued why they should be awarded a license, and FCC awarded licenses based on pre-established comparative criteria. FCC principally used comparative hearings from 1934 to 1984. Critics asserted that comparative hearings were time consuming and resource intensive, lacked transparency, and often led to protracted litigation. Lotteries entailed FCC randomly selecting licensees from a pool of qualified applicants. Congress authorized FCC to use lotteries to assign mutually exclusive licenses in 1981, partially in response to the administrative burden associated with comparative hearings. FCC used lotteries from 1984 to 1993. Critics contended that lottery winners were not always the best suited to provide services; thus, several years could pass before the licenses were transferred in the secondary market to entities capable of deploying a system and effectively using the spectrum. Auctions are a market-based mechanism in which FCC assigns a license to the entity that submits the highest bid for specific bands of spectrum. The Congress provided FCC with authority to use auctions to assign mutually exclusive licenses for certain subscriber-based wireless services in the Omnibus Budget Reconciliation Act of 1993. In subsequent years, the Congress has modified and extended FCC’s auction authority, including exempting some licenses from competitive bidding, such as licenses for public safety radio services and noncommercial educational broadcast services. Critics of auctions have suggested that auctions raise consumer prices for wireless services, slow the deployment of wireless systems, and are a barrier for small businesses. As of November 30, 2005, FCC has conducted 59 auctions to select between competing applications for the same license, which have generated over $14.5 billion for the U.S. Treasury. However, only a very small portion of total licenses has been auctioned. In particular, FCC has auctioned approximately 56,100 licenses—about 2 percent of total licenses. (See fig. 2.) The other 98 percent of licenses have been assigned through other means. In recent years, two government-led task forces have examined spectrum policy in the U.S. FCC established the Spectrum Policy Task Force, comprised of FCC staff, to assist the Commission in identifying and evaluating changes in spectrum policy that would increase the public benefits derived from the use of spectrum. In November 2002, the task force released a report that contained a number of recommendations, including promoting more market-based mechanisms to allocate spectrum. The Commission subsequently implemented several of the task force’s recommendations, including developing rules for leasing spectrum. 
The Federal Government Spectrum Task Force, comprised of the heads of executive branch departments, agencies, and offices, examined spectrum policy for government use, including homeland security, public safety, scientific research, federal transportation infrastructure, and law enforcement. In June 2004, the Department of Commerce released two reports based on the task force’s findings, which contained a number of recommendations for reforms to federal agencies’ use of spectrum. For example, the Department of Commerce recommended adopting incentives for more efficient use of spectrum by government agencies. However, as we noted in 2003, the bifurcated responsibility between FCC and NTIA for spectrum management can hinder reform. Specifically, neither FCC nor NTIA has ultimate decision making authority over spectrum management or the authority to impose fundamental reform. Because of the lack of a single decision making point for spectrum reform, we recommended that the Congress consider establishing an independent commission that would conduct a comprehensive examination of spectrum management. To date, such a commission has not been established. Spectrum allocation remains largely a command-and-control process, although FCC is providing greater flexibility in some instances, particularly as it licenses newly available spectrum. Many stakeholders with whom we spoke and panelists on our expert panel identified a number of weaknesses with the command-and-control process. FCC staff identified two alternative spectrum management models: the exclusive, flexible rights model and the open-access, or commons, model. Under these models, users of spectrum, rather than FCC, would exert a greater influence on the use of spectrum. Although there is limited consensus about fully adopting either alternative model in the future, many stakeholders and members of our expert panel, as well as the Spectrum Policy Task Force, support balanced approaches that would combine elements of all three models. FCC currently employs largely a command-and-control process for spectrum allocation. That is, FCC applies regulatory judgments to determine and limit what types of services—such as broadcast, satellite, or mobile radio—will be offered in different frequency bands by geographic area. In addition, for most frequency bands FCC allocates, the agency issues service rules to define the terms and conditions for spectrum use within the given bands. These rules typically specify eligibility standards as well as limitations on the services that relevant entities may offer and the technologies and power levels they may use. These decisions can constrain users’ ability to offer services and equipment of their choosing. FCC has provided greater operational and technical flexibility within certain frequency bands. For example, FCC’s rules for Commercial Mobile Radio Service (CMRS), which include cellular and PCS services, are considered less restrictive. Under these rules, wireless telephony operators are free to select technologies, services, and business models of their choosing. In contrast, spectrum users have relatively little latitude for making such choices in frequency bands allocated for broadcast television services. Despite these efforts, many industry stakeholders and experts with whom we spoke cited a number of weaknesses in the command-and-control process for spectrum allocation. The most frequently cited weakness by our expert panel was the slowness of the allocation process. 
Because of the regulatory nature of the command-and-control process, arriving at allocation decisions can be protracted. This slow-moving allocation process delays consumers' access to new technologies. In addition, some panelists noted that the current allocation process leads to underutilization of spectrum. For example, a recent study found that during a four-day period in New York City, only 13 percent of spectrum between 30 MHz and 2.9 GHz was occupied at one time or another. Another weakness cited by a number of stakeholders was that the command-and-control process does not systematically allocate spectrum to its highest-value uses. As a result, highly valued services may not be fully deployed.

The Spectrum Policy Task Force Report, a document produced by FCC staff, identified two alternative spectrum management models to the command-and-control model: the exclusive, flexible rights model and the open-access model. The exclusive, flexible rights model extends the existing license-based allocation process by providing greater flexibility to license holders. The open-access model allows an unlimited number of unlicensed users to share frequencies, with usage rights governed by technical standards. Both models allow flexible use of spectrum, so that users of spectrum, rather than FCC, play a larger role in determining how spectrum is ultimately used. FCC's Spectrum Policy Task Force recommended a balanced approach to allocation—utilizing aspects of the command-and-control; exclusive, flexible rights; and open-access models.

The exclusive, flexible rights model provides licensees with exclusive, flexible use of the spectrum and transferable rights within defined geographic areas. This is a license-based approach to spectrum management that extends the existing allocation process by providing greater flexibility regarding the use of spectrum and the ability to transfer licenses or to lease spectrum usage rights. Licensees with exclusive licenses can exclude others from using the spectrum they have been assigned, and with flexible rights they enjoy flexibility to provide the services they wish with their licenses, provided they comply with applicable FCC rules and policies. To a certain extent, the model treats spectrum like real estate, and some have suggested moving far in this direction by turning spectrum licenses into full property rights—an option that existing legislation prohibits. FCC's broadband PCS rules closely resemble this model, in that they provide substantial flexibility to licensees in terms of technology and use of spectrum.

Proponents cite several advantages with the exclusive, flexible rights model. First, proponents argue that this model would promote the economically efficient use of spectrum. For example, advocates typically point to CMRS to support this argument, as CMRS licenses are exclusive and governed by relatively flexible rules; in addition, the market for CMRS services is highly valuable, innovative, and fast-growing. Second, proponents suggest that the model provides certainty for licensees. The model provides a reliable means of protecting commercial users from interference, allowing them to guarantee quality of service on a wide scale. Third, proponents argue that greater certainty will encourage investment in technology and infrastructure.

Opponents cite several problems with the exclusive, flexible rights model. For example, opponents assert that the model might not promote technically efficient, or intensive, use of spectrum.
According to some critics, exclusivity might reduce licensees’ incentives to invest in developing more technically efficient technologies as users have guaranteed access to spectrum, thereby deterring innovation. In addition, some opponents assert that the model could encourage “hoarding” of spectrum, as licensees could benefit from blocking access to spectrum by potential competitors. In other words, companies may buy rights to spectrum—with no intention of using the spectrum—to prevent a competitor from acquiring rights to the same spectrum. The open-access model allows a potentially unlimited number of unlicensed users to share frequency bands, with usage rights governed by technical standards, but with no rights to interference protection. This approach does not require licenses, and as such is similar to the current FCC Part 15 rules (which govern unlicensed use in the 900 MHz, 2.4 GHz, and 5.8 GHz bands)—where cordless phones and Wi-Fi technologies operate. As with exclusive, flexible rights, users would have greater latitude in determining how they use spectrum. However, in this case, markets for end-user equipment, rather than for licenses, would determine how different frequency bands are used or allocated. Under this model, commercial spectrum-based service providers would not seek to maximize their return on spectrum licenses, but rather, on the sale of equipment that, once purchased, would allow consumers to enjoy wireless services. Proponents of the open-access model cite several advantages with this approach to spectrum allocation. For example, proponents assert that the open-access model will promote the technically efficient use of spectrum. In order to avoid interference, users have an incentive to develop smarter equipment that will use the spectrum intelligently. An example of technically efficient equipment is agile radio. Agile radios can determine if a specific frequency is currently in use, emit in that band if it is not, and switch to another band in microseconds if another user begins to emit in that band. In fact, supporters of the open-access model believe that open access to spectrum will foster the development of technologies that will reduce spectrum scarcity, and therefore interference problems, as a new type of wireless architecture becomes possible. According to proponents, the open-access model for spectrum allocation also limits the ability for companies to “hoard” spectrum—that is, since there would be no exclusive use of spectrum in this model, companies could no longer block their competitors from acquiring spectrum by simply acquiring or holding on to spectrum themselves. In addition, since users would no longer need to buy spectrum rights, the open-access model reduces barriers to entry into spectrum-based markets, according to proponents. Opponents cite several problems with the open-access model. One cited problem is that an open-access approach could lead to the overuse of spectrum. Specifically, opponents believe that the technologies that could end spectrum scarcity are years away from realization. Without such technologies, an unlimited number of unlicensed users would result in the overuse of spectrum and interference. Moreover, opponents argue that the uncertainty about interference would inhibit investment. 
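The "agile radio" behavior that proponents describe above, sensing whether a frequency is in use before transmitting and hopping elsewhere when another user appears, can be illustrated with a brief sketch. The following Python fragment is a simplified illustration only; the candidate bands, detection threshold, and function names are hypothetical and are not drawn from any actual device, standard, or FCC rule.

import random

# Illustrative sketch of the "listen-before-transmit" behavior described for
# agile radios. All bands, thresholds, and names here are hypothetical.

CANDIDATE_BANDS_MHZ = [902.5, 2412.0, 2437.0, 5745.0]   # assumed candidate channels
IDLE_THRESHOLD_DBM = -85.0                               # assumed "band is idle" cutoff


def sense_energy_dbm(band_mhz):
    """Simulated spectrum sensing: return a received-power reading for one band."""
    return random.uniform(-110.0, -60.0)


def find_idle_band():
    """Return the first candidate band whose measured energy falls below the threshold."""
    for band in CANDIDATE_BANDS_MHZ:
        if sense_energy_dbm(band) < IDLE_THRESHOLD_DBM:
            return band
    return None  # every candidate band appears occupied


def transmit(payload, max_hops=10):
    """Try to send a payload, hopping to another idle band if the chosen one becomes busy."""
    for _ in range(max_hops):
        band = find_idle_band()
        if band is None:
            continue  # nothing idle right now; a real radio would back off and retry
        # Re-check occupancy immediately before transmitting, as an agile radio would.
        if sense_energy_dbm(band) < IDLE_THRESHOLD_DBM:
            print("transmitting on %.1f MHz: %s" % (band, payload))
            return True
        # Another user appeared on the band; loop again to hop elsewhere.
    return False


if __name__ == "__main__":
    transmit("hello")

In practice, devices operating under Part 15 rules must also respect power limits and accept any interference they receive; the sketch omits those details.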
Another problem cited by opponents of the open-access model is its potential irreversibility—that is, once consumers have the equipment, it would be difficult to prevent them from accessing the spectrum if the spectrum were needed for some other purpose in the future. One need only imagine the difficulties involved with trying to prevent people from using their garage door openers—which operate in some bands under Part 15 rules—to understand this potential challenge.

The Spectrum Policy Task Force report recommended a balanced approach to spectrum allocation—utilizing aspects of the command-and-control; exclusive, flexible rights; and open-access models. In particular, FCC's task force recommended the following: moving away from the command-and-control model, except for limited exceptions such as public safety or to conform to treaty requirements; using the exclusive, flexible rights model where scarcity of spectrum is a concern and transaction costs are low; and using the open-access model where scarcity is a lesser concern and transaction costs are relatively high.

We found little consensus on the future management of spectrum. As noted above, there is disagreement about the merits of the exclusive, flexible rights and open-access models. However, many industry stakeholders we spoke with and panelists on our expert panel support a mixed approach, which incorporates spectrum use under an exclusive, flexible rights licensed model and an open-access model. For example, not all of those who favor open access believe that licensing should suddenly be done away with; rather, they believe that different approaches ought to be tested and compared before any policy decision is made. Similarly, a number of industry stakeholders we spoke with who favor providing spectrum users with flexible rights in licensed bands also believe that unlicensed spectrum is, at the minimum, appropriate for use by certain devices within certain bands.

Auctions have little to no negative effect on end-user prices, infrastructure deployment, or competition, although the effect on entry and participation of small businesses is less certain. FCC's implementation of auctions has also mitigated problems that arose with comparative hearings and lotteries. In addition to auctions, secondary markets provide another means for entities to acquire licenses or lease spectrum in order to gain access to spectrum.

Some critics of spectrum auctions have suggested that auctions negatively impact the wireless industry. Since auctions require licensees to pay for licenses, and in some instances the payments can represent a significant outlay, these critics believe that auctions (1) raise consumer prices as entities seek to recoup their auction payments, (2) slow infrastructure deployment by diverting financial resources to the government, (3) distort competition by creating an environment where some entities that acquired licenses via auction compete with other entities that previously acquired licenses via other means, and (4) deter entry and hinder small business participation in the wireless industry by necessitating large payments prior to the issuance of licenses. We found that FCC's implementation of auctions has had no negative impact on end-user prices, infrastructure deployment, or competition; the evidence on the impact on entry and participation of small businesses is less clear. In particular:

End-user prices. We found that auctions have little to no impact on end-user prices.
Economic research suggests that auction payments do not affect end-user prices, since these payments represent a sunk cost, which does not affect future-oriented decisions. For example, using data on cellular prices from 1985 to 1998, one author empirically found that auctions had no effect on prices. Additionally, industry stakeholders we spoke to and panelists on our expert panel noted that competition ultimately affects end-user prices. Thus, regardless of a company's desire to recoup its auction payment, the company will select prices that maximize future profits based on competition in the market. Among the panelists on our expert panel, a majority said that auctions do not affect end-user prices. Specifically, 10 panelists said that auctions do not affect end-user prices, 3 said that auctions decrease prices, and 5 said that auctions increase prices.

Infrastructure deployment. We found that auctions have little to no impact on infrastructure deployment. Similar to the argument for end-user prices, economic research suggests that auction payments do not deter infrastructure deployment; companies will make decisions about infrastructure deployment based on the future profit potential of those investments. Some industry stakeholders with whom we spoke, and panelists on our expert panel, mentioned that auction payments may in fact stimulate infrastructure deployment. In particular, since an auction payment represents an investment, the company will seek a return on that investment. To earn that return, a wireless company will sell subscriber services, which are made possible through the deployment of wireless networks. Among panelists on our expert panel, eight said that auctions increase investment, five said that auctions had no effect on investment, and seven said that auctions decrease investment.

Competition. We found little evidence that auctions affect the competitive environment. Many stakeholders told us that auctions generally do not place companies at a competitive or financial disadvantage compared to companies that acquired licenses through other, non-auctioned means that might not have involved payment for the licenses, such as lotteries. These stakeholders noted that (1) companies acquired non-auctioned licenses many years ago, (2) many non-auctioned licenses have subsequently been sold and paid for, and (3) companies that acquired non-auctioned licenses have subsequently acquired additional licenses via auction. Therefore, any competitive advantage these companies gained by obtaining licenses through means other than auctions has dissipated. Among our panelists, 11 said that auctions increase the degree of competition, while 3 said that auctions had no effect on competition, and 4 said that auctions decrease competition.

Entry and participation of small businesses. Some industry stakeholders we interviewed stated that auctions limit participation to large companies with extensive financial resources. These stakeholders assert that small companies are unable to acquire the financial resources necessary to successfully compete in FCC's auction process. However, others noted that large companies also tended to dominate the comparative hearing process. In addition, some stakeholders noted that the capital-intensive nature of the wireless industry—not the assignment mechanism—makes it difficult for small businesses to participate.
Expert opinion diverged on this issue: among our expert panelists, eight said that auctions increase entry while another eight said that auctions decrease entry, and three panelists said that auctions had no effect on entry. As mentioned earlier, comparative hearings and lotteries—the two primary assignment mechanisms employed until 1993—suffered from several problems. Comparative hearings were generally time consuming and resource intensive, as entities employed engineers and lawyers to prepare applications and FCC dedicated staff to evaluating applications based on pre-established comparative criteria. Further, decisions arising from comparative hearings lacked transparency and often led to protracted litigation. While lotteries were less time consuming and resource intensive, they did not necessarily assign licenses to the entities that were best suited to provide services. Thus, several years could pass before the licenses were transferred in the secondary market to entities capable of deploying a wireless system and effectively using the spectrum. Further, neither comparative hearings nor lotteries provided a mechanism for the public to financially benefit from commercial entities using a valuable national resource. FCC’s implementation of auctions mitigates a number of problems associated with comparative hearings and lotteries. For example: Auctions are a relatively quick assignment mechanism. With auctions, FCC reduced the average time for granting a license to less than one year from the initial application date, compared to an average time of over 18 months with comparative hearings. Auctions are administratively less costly than comparative hearings. Entities seeking a license can reduce expenditures for engineers and lawyers arising from preparing applications, litigating, and lobbying; and FCC can reduce expenditures associated with reviewing and analyzing applications. Auctions are a transparent process. FCC awards licenses to entities submitting the highest bid rather than relying on possibly vague criteria, as was done in comparative hearings. Auctions are effective in assigning licenses to entities that value them the most. Alternatively, with lotteries, FCC awarded licenses to randomly-selected entities. Auctions are an effective mechanism for the public to realize a portion of the value of a national resource used for commercial purposes. Entities submitting winning bids must remit the amount of their winning bid to the government, which represents a portion of the value that the bidder believes will arise from using the spectrum. As mentioned earlier, auctions have generated over $14.5 billion for the U.S. Treasury. Many industry stakeholders we contacted, and panelists on our expert panel, stated that auctions are more efficient than previous mechanisms used to assign spectrum licenses. For example, among our panelists, 11 of 17 reported that auctions provide the most efficient method of assigning licenses; no panelist reported that comparative hearings or lotteries provided the most efficient method. Of the remaining panelists, several suggested that the most efficient mechanism depended on the service that would be permitted with the spectrum. While FCC’s initial assignment mechanisms provide one means for companies to acquire licenses, companies can also acquire licenses or access to spectrum through secondary market transactions. 
Through secondary markets, companies can engage in transactions whereby a license or use of spectrum is transferred from one company to another. These transactions can include the sale or trading of licenses. In some instances, companies acquire licenses through the purchase of an entire company, such as Cingular's purchase of AT&T Wireless. Ultimately, FCC must approve transactions that result in the transfer of licenses from one company to another.

In recent years, FCC has undertaken actions to facilitate secondary-market transactions. FCC authorized spectrum leasing for most wireless radio licenses with exclusive rights and created two categories of spectrum leases: Spectrum Manager Leasing—where the licensee retains legal and working control of the spectrum—and de Facto Transfer Leasing—where the licensee retains legal control but the lessee assumes working control of the spectrum. FCC also streamlined the procedures that pertain to spectrum leasing. For instance, Spectrum Manager Leases do not require prior FCC approval, and de Facto Transfer Leases can receive immediate approval if the arrangement does not raise potential public interest concerns. While FCC has taken steps to facilitate secondary market transactions, some hindrances remain. For example, some industry stakeholders told us that the lack of flexibility in the use of spectrum can hinder secondary market transactions.

Secondary markets can provide several benefits. First, secondary markets can promote more efficient use of spectrum. If existing licensees are not fully utilizing the spectrum, secondary markets provide a mechanism whereby these licensees can transfer use of the spectrum to other companies that would utilize the spectrum, thereby increasing the amount of available spectrum and reducing the perceived scarcity of spectrum. Second, secondary markets can facilitate the participation of small businesses and the introduction of new technologies. For example, a company might have a greater incentive to deploy new technologies that require less spectrum if the company can profitably transfer the unused portion of the spectrum to another company through the secondary market. Also, several stakeholders we spoke to noted that secondary markets provide a mechanism whereby a small business can acquire spectrum for a geographic area that best meets the needs of the company.

Industry stakeholders and panelists on our expert panel offered a number of options for improving spectrum management. The most frequently cited options include (1) extending FCC's auction authority, (2) reexamining the distribution of spectrum—such as between commercial and government use—to enhance the efficient and effective use of this important resource, and (3) ensuring clearly defined rights and flexibility in commercially licensed spectrum bands. There was no consensus on these options for improvement among stakeholders we interviewed and panelists on our expert panel, except for extending FCC's auction authority.

Panelists on our expert panel and industry stakeholders with whom we spoke overwhelmingly supported extending FCC's auction authority. For example, 21 of 22 panelists on our expert panel indicated that the Congress should extend FCC's auction authority beyond the September 30, 2007, expiration date.
As mentioned earlier, panelists and stakeholders believe that auctions are more efficient than previous mechanisms used to assign spectrum licenses; moreover, auctions are viewed as being faster, less costly, and more transparent than the previous mechanisms. Additionally, extending FCC’s auction authority could generate significant revenues for the government. However, panelists and stakeholders also noted that the government should use spectrum auctions to promote the efficient use of spectrum, not necessarily to maximize revenues for the government. While panelists on our expert panel overwhelmingly supported extending FCC’s auction authority, a majority also suggested modifications to enhance the use of auctions. However, there was little consensus on the suggested modifications. The suggested modifications fall into the following three categories: Better define license rights. Some industry stakeholders and panelists indicated that FCC should better define the rights accompanying spectrum licenses, as these rights can significantly affect the value of a license being auctioned. For example, some industry stakeholders express concern with FCC assigning overlay and underlay rights to frequency bands when a company holds a license for the same frequency bands. Enhance secondary markets. Industry stakeholders we contacted and panelists on our expert panel generally believe that modifying the rules governing secondary markets could lead to more efficient use of spectrum. For example, some panelists on our expert panel said that FCC should increase its involvement in the secondary market. These panelists thought that increased oversight could help to both ensure transparency in the secondary market and also promote the use of the secondary market. Additionally, a few panelists said that adoption of a “two-sided” auction would support the efficient use of spectrum. With a two-sided auction, FCC would offer unassigned spectrum and existing licensees could make available the spectrum usage rights they currently hold. Reexamine existing small business incentives. The opinions of panelists on our expert panel and industry stakeholders with whom we spoke varied greatly regarding the need for and success of FCC’s efforts to promote economic opportunities for small businesses. For example, some panelists and industry stakeholders do not support incentive programs for small businesses. These panelists and industry stakeholders cited several reasons for not supporting these incentives, including (1) the wireless industry is not a small business industry; (2) while the policy may have been well intended, the current program is flawed; or (3) such incentives create inefficiencies in the market. Other industry stakeholders suggested alternative programs to support small businesses. These suggestions included (1) having licenses cover smaller geographic areas, (2) using auctions set aside exclusively for small and rural businesses, and (3) providing better lease options for small and rural businesses. Finally, some industry stakeholders with whom we spoke have benefited from the small business incentive programs, such as bidding credits, and believe that these incentives have been an effective means to promote small business participation in wireless markets. Panelists on our expert panel suggested a reexamination of the use and distribution of spectrum to ensure the most efficient and effective use of this important resource. 
One panelist noted that the government should have a good understanding of how much of the spectrum is being used. To gain a better understanding, a few panelists suggested that the government systematically track usage, perhaps through a "spectrum census." This information would allow the government to determine if some portions of spectrum were underutilized, and if so, to make appropriate allocation changes and adjustments.

A number of panelists on our expert panel also suggested that the government evaluate the relative allocation of spectrum for government and commercial use as well as the allocation of spectrum for licensed and unlicensed purposes. While panelists thought the relative allocation between these categories should be examined, there was little consensus among the panelists on the appropriate allocation. For instance, as shown in figure 3, 13 panelists indicated that more spectrum should be dedicated to commercial use, while 7 thought the current distribution was appropriate. No panelists thought that more spectrum should be dedicated to government use. Similarly, as shown in figure 4, nine panelists believed that more spectrum should be dedicated to licensed uses, six believed more should be dedicated to unlicensed uses, and five thought the current balance was appropriate.

As with the suggested modifications to FCC's use of auctions, some panelists on our expert panel suggested better defining users' rights and increasing flexibility in the allocation of spectrum. Better defining users' rights would clarify the understanding of the rights awarded with any type of license, whether the licensee acquired the license through an auction or other means. In addition, some panelists stated that greater flexibility in the type of technology used—and service offered—within frequency bands would help promote the efficient use of spectrum. In particular, greater flexibility would allow the licensee to determine the efficient and highly valued use, rather than relying on FCC-based allocation and service rules. However, some panelists on our expert panel and industry stakeholders with whom we spoke noted that greater flexibility can lead to interference, as different licensees provide potentially incompatible services in close proximity. Thus, panelists on our expert panel stressed the importance of balancing flexibility with interference protection.

As commercial enterprises and government agencies increasingly utilize spectrum to provide consumer services and fulfill important missions, the management of spectrum to ensure its efficient use takes on greater importance. Many industry stakeholders and panelists on our expert panel told us that the current command-and-control process for allocating spectrum is less effective than other approaches. As a result, they stated that spectrum is not being fully utilized at all times and perhaps not being used for its highest-value purposes. Yet, few stakeholders or experts agree on how to improve the process. To achieve greater consensus for reform of the spectrum management process, we previously suggested that the Congress consider establishing an independent commission that would conduct a comprehensive examination of spectrum management.

One aspect of spectrum management that appears very effective is the use of auctions for assigning licenses to commercial entities. As implemented by FCC, spectrum auctions resolve problems associated with previous assignment mechanisms, while giving rise to few, if any, problems of their own.
Most stakeholders and experts with whom we spoke support extending FCC’s auction authority beyond the current expiration date of September 30, 2007. Given the success of FCC’s use of auctions and the overwhelming support among industry stakeholders and experts for extending FCC’s auction authority, the Congress should consider extending FCC’s auction authority beyond the current expiration date of September 30, 2007. We provided a draft of this report to FCC, NTIA, and the Office of Management and Budget for their review and comment. FCC provided technical comments that we incorporated where appropriate. NTIA had no comments on the draft. OMB concurred with our finding that auctions have mitigated problems associated with comparative hearings and lotteries and noted that the Administration supports the permanent extension of FCC’s auction authority. OMB also noted that the Administration has proposed to give FCC authority to use economic mechanisms to promote efficient spectrum use. We are sending copies of this report to the appropriate congressional committees. We are also sending this report to the Secretary of Commerce, Chairman of the Federal Communications Commission, and the Director of the Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you have any questions about this report, please contact me at 202-512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report include Amy Abramowitz, Stephen Brown, Emilie Cassou, Michael Clements, Nikki Clowers, Kate Magdalena Gonzalez, Eric Hudson, Terri Russell, Mindi Weisenbloom, and Alwynne Wilbur. The Commercial Spectrum Enhancement Act required us to review the Federal Communications Commission’s (FCC) commercial spectrum licensing process. The objectives of our study included examining the (1) characteristics of the current spectrum allocation process for commercial uses; (2) impact of the assignment process—specifically the adoption of auctions to assign spectrum licenses—on end-user prices, infrastructure deployment, competition, and entry and participation of small businesses; and (3) options for improving spectrum management. To address all three objectives, we conducted a comprehensive, structured literature review of economic, legal, and public policy material relevant to spectrum issues. Our literature review included domestic studies on spectrum management that were published in the last 25 years. To identify articles for our literature review, we searched a number of databases, including LexisNexis, Hein Online, Westlaw, and ProQuest, using key terms such as “spectrum,” “assignment,” and “license.” We eliminated articles and studies from our literature review that did not directly relate to our objectives or did not provide original analysis. We also considered the methodological soundness of the articles and studies included in our literature review; we determined that the findings of these studies were sufficiently reliable for our purposes. We also extracted data from FCC’s license databases (Universal Licensing System, Consolidated Database System, and International Bureau Filing System) to determine the distribution of active licenses among different segments of the wireless industry and to identify the largest holders of licenses. 
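To make the license-distribution analysis concrete, the sketch below shows one way that records extracted from such databases could be tallied by wireless-industry segment and by licensee. The field names and sample records are hypothetical and do not reflect the actual structure of FCC's databases; the sketch only illustrates the kind of aggregation involved.

from collections import Counter

# Illustrative tally of extracted license records. The record layout and values
# are hypothetical; they do not reflect the actual schema of FCC's databases.

licenses = [
    {"licensee": "Carrier A", "segment": "cellular/PCS"},
    {"licensee": "Carrier A", "segment": "cellular/PCS"},
    {"licensee": "Carrier B", "segment": "broadcast"},
    {"licensee": "Carrier C", "segment": "private land mobile"},
]

licenses_by_segment = Counter(record["segment"] for record in licenses)
licenses_by_holder = Counter(record["licensee"] for record in licenses)

print("Active licenses by segment:", dict(licenses_by_segment))
print("Largest holders:", licenses_by_holder.most_common(3))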
To assess the reliability of the information from these databases, we interviewed FCC officials responsible for the databases about their data collection and verification policies and procedures for license information. We also electronically tested the databases. We concluded that information from FCC's license databases was sufficiently reliable for the purposes of this report.

In addition, we interviewed FCC, National Telecommunications and Information Administration, and Office of Management and Budget officials and conducted semi-structured interviews with representatives from academia and the wireless industry to obtain a broad range of perspectives on spectrum allocation and assignment issues. We selected representatives from academia and the wireless industry based on their organizations' vested interest in spectrum policy or their expertise in spectrum policy as demonstrated by presentations or publications. (Table 1 lists the companies, academic institutions, and other entities whose representatives we interviewed.)

We also contracted with the National Academies to convene a balanced, diverse panel of experts to discuss spectrum allocation and assignment issues and options to improve spectrum management in the future. We worked closely with the National Academies to identify and select 23 panelists who could adequately respond to our general and specific questions about spectrum allocation, assignment processes, and options for improvement. In keeping with National Academies policy, the panelists were invited to provide their individual views, and the panel was not designed to reach a consensus on any of the issues that we asked them to discuss. The panelists convened at the National Academies in Washington, D.C., on August 9 and 10, 2005. Twelve panelists participated on the panel on August 9, 2005; eleven panelists participated on the panel on August 10, 2005. (See table 2 for the list of panelists on each day.) The agendas and questions were identical for both days. To start each day, the panel moderators provided an overview of the issues to be discussed; during the remainder of the day, the panelists addressed the questions we had provided for their consideration. At the end of each session, we asked the panelists to individually answer a short series of questions about the topics discussed in order to more systematically capture individual panelists' views on key dimensions. We did not verify the panelists' statements, although we did ask the panelists, in some instances, to clarify certain details. The views expressed by the panelists do not necessarily represent the views of GAO or the National Academies. After the expert panel was conducted, we analyzed a transcript of the panel's discussion and survey responses in order to identify principal themes and panelists' views.

The results of the expert panel should be interpreted in the context of two key limitations and qualifications. First, although we were able to secure the participation of a balanced, highly qualified group of experts, there are other experts in this field who could not be included because of the need to limit the size of the panel. Although many points of view were represented, the panel was not representative of all potential views. Second, even though we conducted preliminary research, in cooperation with the National Academies, and heard from national experts in their fields, two panels cannot fully represent current practice in this vast arena.
More thought, discussion, and research must be done to develop greater agreement on what is really known, what needs to be done, and how to do it. These two key limitations and qualifications provide contextual boundaries. Nevertheless, the panel provided a rich dialogue on spectrum allocation and assignment issues, as well as options for improving spectrum management in the future; the panelists also provided insightful comments in responding to the questions posed to the panel.
The radio-frequency spectrum is a natural resource used to provide an array of wireless communications services, such as television broadcasting, which are critical to the U.S. economy and national security. In 1993, the Congress gave the Federal Communications Commission (FCC) authority to use competitive bidding, or auctions, to assign spectrum licenses to commercial users. The Commercial Spectrum Enhancement Act required GAO to examine FCC's commercial spectrum licensing process. Specifically, GAO examined the (1) characteristics of the current spectrum allocation process for commercial uses; (2) impact of the assignment process--specifically the adoption of auctions to assign spectrum licenses--on end-user prices, infrastructure deployment, competition, and entry and participation of small businesses; and (3) options for improving spectrum management. The current spectrum allocation process is largely characterized as a "command-and-control" process, in which the government largely dictates how the spectrum is used. Many stakeholders we spoke with, along with panelists on our expert panel, identified a number of weaknesses of the existing spectrum allocation process, including that the current process is slow and leads to underutilization of the spectrum. FCC staff have identified two alternative allocation models: the "exclusive, flexible rights" model--which would extend the existing process by providing greater flexibility to spectrum license holders--and the "open-access" (or "commons") model--which would allow an unlimited number of unlicensed users to share spectrum. While little consensus exists about fully adopting either alternative model, FCC staff, as well as many stakeholders and panelists on our expert panel, recommend a balanced approach that would combine elements of the current process and the two alternative models. FCC's use of auctions to assign spectrum appears to have little to no negative impact on end-user prices, infrastructure deployment, and competition; evidence on how auctions impact the entry and participation of small businesses is less clear. Additionally, FCC's implementation of auctions has mitigated problems associated with comparative hearings and lotteries, which FCC previously used to assign licenses. In particular, auctions are quicker, less costly, and more transparent. Finally, secondary markets provide an additional mechanism for companies to acquire licenses and gain access to spectrum, and FCC has undertaken actions to facilitate secondary-market transactions, such as streamlining the approval process for leases. Industry stakeholders and panelists on our expert panel offered a number of options for improving spectrum management. The most frequently cited options include (1) extending FCC's auction authority, (2) reexamining the use and distribution of spectrum--such as between commercial and governmental use--to enhance the efficient and effective use of this important resource, and (3) ensuring flexibility in commercially licensed spectrum bands. Stakeholders and panelists on our expert panel overwhelmingly supported extending FCC's auction authority; however, there was little consensus on the other identified options for improvement.
The performance and accountability of government agencies and programs have attracted substantial attention by Congress, the executive branch, and others, including GAO. For example, the Government Performance and Results Act of 1993 (GPRA) established a statutory framework designed to provide congressional and executive decision makers with objective information on the relative effectiveness and efficiency of federal programs and spending. A central element of the current administration’s Presidential Management Agenda is the Program Assessment Rating Tool (PART), designed by OMB to provide a consistent approach to assessing federal programs in the executive budget formation process. Over the past 2 years, we have emphasized that the long-term fiscal imbalance facing the United States and other significant trends and challenges establish the need to reexamine the base of the federal government and its existing programs, policies, functions, and activities. We noted that a top-to-bottom review of federal programs and policies is needed to determine if they are meeting their objectives. To support this reexamination, the policy process must have the capacity to provide policymakers not only with information to analyze the performance and results achieved by specific agencies and programs, but that of broad portfolios of programs and tools (including regulation) contributing to specific policy goals. While initiatives such as GPRA and PART can evaluate regulatory performance at the agency or program level, Congress and presidents also have instituted requirements that focus on a more basic element, the agencies’ existing regulations. For example, through Section 610, Congress requires agencies to review all regulations that have or will have a “significant economic impact upon a substantial number of small entities” (generally referred to as SEISNOSE) within 10 years of their adoption as final rules. The purpose of these reviews is to determine whether such rules should be continued without change, or should be amended or rescinded, consistent with the stated objectives of applicable statutes, to minimize impacts on small entities. As discussed later in this report, Congress also established other requirements for agencies to review the effects of regulations issued under specific statutes, such as the Clean Air Act. Every president since President Carter has directed agencies to evaluate or reconsider existing regulations. For example, President Carter’s Executive Order 12044 required agencies to periodically review existing rules; one charge of President Reagan’s task force on regulatory relief was to recommend changes to existing regulations; President George H.W. Bush instructed agencies to identify existing regulations to eliminate unnecessary regulatory burden; and President Clinton, under section 5 of Executive Order 12866, required agencies to develop a program to “periodically review” existing significant regulations. In 2001, 2002, and 2004, the administration of President George W. Bush asked the public to suggest reforms of existing regulations. The Office of Advocacy within SBA and OIRA within OMB have issued guidance to federal agencies on the implementation of, respectively, the RFA and Executive Order 12866. 
The available guidance documents, including OMB Circular A-4 on regulatory analysis, focus primarily on the procedural and analytical steps required for reviewing draft regulations, but they are also generally applicable whenever agencies analyze the benefits and costs of regulations, including those of existing regulations. However, the documents provide limited guidance focused specifically on retrospective reviews. In a short discussion on periodic reviews of existing rules pursuant to Section 610, the Office of Advocacy’s RFA guidance briefly touches on planning, conducting, and reporting the results of reviews, but the OMB/OIRA guidance does not specifically address the executive order’s periodic review requirement. In our prior work on this subject, we found that agencies infrequently performed certain types of reviews and identified potential challenges and benefits of conducting retrospective reviews. In 1999, we reported that assessments of the costs and benefits of EPA’s regulations after they had been issued had rarely been done. In a series of reports on agencies’ compliance with Section 610 requirements, we again noted that reviews were not being conducted. We identified a number of challenges to conducting retrospective reviews. In general, these included the difficulties that regulatory agencies face in demonstrating the results of their work, such as identifying and collecting the data needed to demonstrate results, the diverse and complex factors that affect agencies’ results (for example, the need to achieve results through the actions of third parties), and the long time period required to see results in some areas of federal regulation. We also identified concerns about the balance of regulatory analyses, because it may be more difficult to estimate the benefits of regulations than it is to estimate the costs. Our report on EPA’s retrospective studies noted that such studies were viewed as hard to do because of the difficulty in obtaining valid cost data from regulated entities and quantifying actual benefits, among other reasons. Our work on agencies’ implementation of Section 610 requirements revealed that there was confusion among the agencies regarding the meaning of key terms such as SEISNOSE, what RFA considers to be a “rule” that must be reviewed, and whether amending a rule within the 10-year period provided in Section 610 would “restart the clock,” among other things. However, our prior work also pointed out that retrospective evaluation could help inform Congress and other policymakers about ways to improve the design of regulations and regulatory programs, as well as play a part in the overall reexamination of the base of the federal government. For example, we reported that retrospective studies provided insights on a market-based regulatory approach to reduce emissions that cause acid rain and that the studies found that the actual costs of reducing emissions were lower than initially estimated. Experts and stakeholders whom we consulted during work on federal mandates (including regulations) and economic and regulatory analyses told us that they believe more retrospective analysis is needed and, further, that there are ways to improve the quality and credibility of the analyses that are done. One particular reason cited for the usefulness of retrospective reviews was that regulations can change behavior of regulated entities, and the public in general, in ways that cannot be predicted prior to implementation. 
Since 2001, the nine selected agencies conducted multiple retrospective reviews of their existing regulations to respond to mandatory and discretionary authorities. Between 2001 and 2006, the nine agencies reported completing over 1,300 mandatory or discretionary reviews, but many could not tally the total number of reviews that they have conducted. The reviews addressed multiple purposes, such as examining the efficiency and effectiveness of regulations and identifying opportunities to reduce regulatory burden. The mix of reviews conducted—in terms of the impetus to start a review and the purpose of the review—varied not only across but also within the agencies that we examined. Agencies conducted reviews in response to various mandatory requirements, but most agencies indicated that they conducted the majority of reviews based on their own discretion. Agencies reported conducting reviews, at their own discretion, in response to their agencies’ own formal policies and procedures to conduct reviews or to respond to accidents or similar events, changes in technology and market conditions, advances in science, informal agency feedback, and petitions, among other things. Among the main mandatory sources of reviews were governmentwide statutory requirements (such as Section 610 of the RFA or the Paperwork Reduction Act (PRA)), agency- or program-specific statutory requirements (such as Section 402 of the Telecommunications Act of 1996 that requires FCC to review the regulations promulgated under the act every 2 years), Executive Order 12866 on Regulatory Planning and Review (which requires agencies to periodically conduct reviews of their significant regulations), and other executive branch directives (such as the memorandum on plain language). In addition, agencies conducted reviews in response to OMB initiatives to solicit nominations for regulatory reexamination, which were not statutorily mandated reviews or required by a specific executive order, but were a part of executive branch regulatory reform efforts. The frequency of agency reviews varied based on review requirements. In some cases, agencies were required to conduct reviews every 2 years to 10 years. Available information on completed reviews indicated that the numbers of reviews completed by individual agencies in any given year varied from a few to more than a hundred. Agencies’ officials reported they conducted discretionary reviews more often than mandated studies. These discretionary reviews were often prompted by drivers such as informal suggestions or formal petitions from regulated entities seeking regulatory changes, suggestions from the agency personnel who implement and enforce the regulations, departmentwide initiatives to review regulations, or changes in particular technologies, industries, or markets. Although these discretionary reviews were often undocumented, our review of publicly available results for retrospective reviews confirmed that, in some instances, agencies like those within DOL, DOT, and DOJ cited discretionary drivers as the motivation for their reviews. Among the various reasons for agency-initiated reviews, petitions were a major review driver for almost all of the agencies. Agency officials cited three types of petition activities that largely influenced them to conduct regulatory reviews. 
These petition activities included: (1) petitions for rulemaking where the regulated entities or public requested a modified rule that includes new or updated information, (2) petitions for reconsideration where regulated entities or the public requested that the agency revisit an aspect of the rule, and (3) petitions for waivers where the regulated entities or public requested waivers from regulatory requirements. Some agencies, such as DOT and MSHA, have policies to begin a more formal review of the entire regulation when there are a substantial number of petitions. Agencies also reported conducting reviews in response to various mandatory requirements. However, whether they conducted them more often in response to governmentwide requirements or requirements specific to the policy area or sector that they regulate varied among and within agencies. For example, DOT, USDA’s APHIS, and SBA conducted many mandatory reviews in response to Section 610 requirements, while others, such as EPA and FCC, conducted most mandatory reviews in response to statutes that applied specifically to areas that they regulate. Specifically, all of the mandatory reviews completed by APHIS and SBA since 2001 were conducted in response to Section 610 requirements. Similarly, DOT initiated a systematic 10-year plan for reviewing all of its sections of the CFR in order to satisfy the provisions of Section 610, along with other mandatory review requirements. DOT reported completing over 400 reviews between 2001 and 2006 under this 10-year plan. However, even within DOT, variation existed with regard to which review requirements more often prompted reviews. For example, unlike some other DOT agencies, FAA conducted the majority of its mandatory reviews in response to industry-specific review requirements rather than governmentwide retrospective review mandates. While EPA, FCC, and FDIC also conduct some reviews in response to governmentwide requirements, such as Section 610, they most often conducted mandatory reviews to comply with statutes that apply specifically to areas that they regulate. EPA officials provided a list of seven statutes that require the agency to conduct mandatory retrospective reviews, including requirements in the Safe Drinking Water Act, Clean Air Act, Clean Water Act, and Federal Food, Drug and Cosmetic Act, among others. Similarly, FCC conducts many of its mandatory retrospective reviews to comply with the biennial and quadrennial regulatory review requirements under the Communications Act, as amended. One agency in our review, FDIC, conducted most of its mandatory reviews in response to the financial-sector-specific Economic Growth and Regulatory Paperwork Reduction Act of 1996 (EGRPRA), which requires federal financial regulatory agencies to identify outdated, unnecessary, or unduly burdensome statutory or regulatory requirements every 10 years. Finally, agencies also conducted single comprehensive reviews of a regulation as a method to satisfy multiple review requirements. For example, DOL’s EBSA and OSHA, DOT, and FDIC conducted reviews that incorporated Section 610 and other review requirements into broader reviews that the agency initiated as part either of their regular review program or in response to industry feedback and petitions, among other things. (Table 1 illustrates the range of mandatory and discretionary reviews conducted by selected agencies included in our scope.) 
The frequency with which agencies were required to conduct certain reviews varied depending on the statutory requirement. For example, under some statutory requirements, agencies must review certain regulations every 2 or 3 years. Other requirements cover a longer period, such as Section 610's requirement to revisit certain rules 10 years after their issuance, or specify no particular time. In addition, agency policies on the frequency with which to conduct discretionary reviews varied. For example, USDA has a departmentwide requirement generally to review economically significant and major regulations every 5 years, while FAA conducts its reviews on a 3-year cycle. Some agencies, such as DOT, had a departmentwide requirement to review all regulations in their sections of the CFR within 10 years of the creation of the Unified Agenda Regulatory Plan, but provided discretion to their component agencies on when to conduct reviews during that period.

Despite a perception expressed by some federal and nonfederal parties that agencies are not actively engaged in reviewing their existing regulations, we found that agencies reported results for over 1,300 reviews completed from 2001 through 2006. However, even this number may understate the total because it does not account for all of the undocumented discretionary reviews conducted by agencies. In addition, the available information reported by the agencies (and others, such as OMB) may include some duplication or fail to capture additional follow-up reviews by the agencies.

It is also important to note that the units of analysis that agencies used in their reviews, and the scope of individual reviews, varied widely. Therefore, the level of agency review activity cannot be compared by only assessing the number of reviews completed. For example, a review might be as narrowly focused as the review that DOT's NHTSA completed in 2005 on 49 CFR part 571.111 (regarding rearview mirrors) or as broad as an FDIC review that included analyses of 131 regulations within a single review. DOT also pointed out that because rules vary widely in complexity, even a narrowly focused review can be a major undertaking. For example, NHTSA may review a one-sentence rule that says, "all motor vehicles sold in the U.S. must be equipped with airbags." In another review, it may look at 131 regulations that impose very minor requirements. It may well be a far more time- and resource-intensive effort for NHTSA to review the effect of the one-sentence airbag requirement. Further, because some agencies produce many more regulations than others do, the number of reviews that agencies reported conducting also should be considered within the context of their volume of regulations and rulemaking activity. Table 2 lists the number of reviews that agencies reported completing between 2001 and 2006.

According to agency officials, they conducted reviews for various purposes but most often focused on assessing the effectiveness of regulations. Agency officials reported conducting reviews to evaluate or identify (1) the results produced by existing regulations, including assessments to validate the original estimates of projected benefits and costs associated with the regulation; (2) ways to improve the efficiency or effectiveness of regulatory compliance and enforcement; and (3) options for reducing regulatory burdens on regulated entities.
Overall, agency officials reported that their reviews more often focused on improving effectiveness, with burden reduction as a secondary consideration, even for review requirements that are geared toward identifying opportunities for burden reduction, such as those within Section 610. The approaches that agencies reported taking to assess "effectiveness" varied, including measuring whether the regulations were producing positive outcomes, facilitating industry compliance with relevant statutes, and/or assisting the agency in accomplishing its goals. For example, DOL officials reported that the agency attempts to assess "effectiveness" by measuring improvements resulting from the regulation and by analyzing factors required by Section 610 of RFA and Section 5 of Executive Order 12866, such as whether there is new technology, excessive complexity, conflict with other regulations, or whether cost-effectiveness can be improved. However, the agency does not conduct what would be considered a traditional benefit-cost economic analysis. Other agencies, such as EPA, reported assessing "effectiveness" by determining whether the regulation is achieving its intended goal as identified in related statutes. For example, officials from EPA reported that their retrospective review of the Clean Water Act helped them estimate the extent to which toxic pollutants remained, thereby helping them assess the effectiveness of each existing regulation in various sections of the act. Since the goal of the Clean Water Act is zero discharge of pollutants, the review was an indicator of the progress the agency had made toward the statutory goals. Our limited review of agency summaries and reports on completed retrospective reviews revealed that agencies' reviews more often attempted to assess the effectiveness of their implementation of the regulation rather than the effectiveness of the regulation in achieving its goal.

The use of systematic evaluation practices and the development of formal retrospective regulatory review processes varied among and even within the agencies. To assess the strengths and limitations of various agency review processes, we examined the three phases of the process: the selection of regulations to review, the conduct of reviews, and the reporting of review results. We identified three practices that are important for credibility in all phases of the review: the extent to which agencies (1) employed a standards-based approach, (2) involved the public, and (3) documented the process and results of each phase of the review. The use of these evaluation practices in agency review processes, and the development of formal policies and procedures to guide their review processes, varied among the agencies. Furthermore, whether agencies used these practices often depended on whether they conducted discretionary or mandatory reviews. Fewer agencies used standards-based approaches or documented the selection, conduct, or reporting of reviews when they were discretionary. While more agencies incorporated public involvement in the selection of regulations for review in discretionary reviews, fewer included public involvement in the conduct of those reviews. Generally, agencies did not incorporate the three practices we identified into their discretionary reviews as consistently as they did for their mandatory reviews.
However, it is important to note that some agencies have recently developed their review programs and others are attempting to find ways to further develop and improve their retrospective review processes (e.g., establishing units that focus on retrospective reviews and seeking assistance with establishing prioritization systems). For example, CPSC and EBSA recently established their review programs, and ETA recently established a centralized unit to develop retrospective review processes for the agency. Furthermore, although the process has been delayed because of other regulatory priorities, MSHA recently sought contractor assistance with developing a more standard selection process for its reviews. (Additional details on each agency’s review process can be seen in apps. II through XI.)

All of the agencies in our review reported that they have practices in place to help them select which of their existing regulations to review. Some of these agencies have established or are establishing standard policies, procedures, and guidance for this selection. Almost all of the agencies reported having standards for selecting regulations for their mandated reviews, because the mandates either identified which regulations agencies must review or prescribed selection standards; fewer agencies, however, had developed formal standards for selecting regulations to review under their own discretion. Agencies that had established such processes reported that they were useful in determining how to prioritize agency review activities. For example, DOL’s EBSA and CPSC established detailed standards for the selection of their discretionary reviews, which they used to prioritize their retrospective review activities and the corresponding use of agency resources. The officials reported that their standards-based selection processes allowed them to identify which regulations were most in need of review and to plan for conducting those reviews. Furthermore, officials from both agencies reported that their prioritization processes allowed them to focus on more useful retrospective review activities, which resulted in identifying important regulatory changes. We observed that this standards-based approach to selecting regulations also increased the transparency of this phase of the review process. We were better able to determine how these agencies selected which regulations to review. Further, as identified by CPSC officials and others, applying a standards-based approach to selecting regulations to review can provide a systematic method for agencies to assess which regulations they should devote their resources toward, as they balance retrospective review activities with other mission-critical priorities. Not using a standards-based approach could divert attention from the regulations that, based on established criteria, agencies would identify as needing the most consideration; agencies may instead focus their reviews on regulations that warrant less attention. In addition—for each phase of the review process—using a standards-based approach can allow agencies to justify the appropriateness of the criteria that they use (either because they are objective, or at least mutually agreed upon), and thus gain credibility for their review. Selecting a different criterion or set of standards for each review could imply a subjective evaluation process and possibly an arbitrary treatment of one regulation versus another.
Similarly, agencies varied in their use of a standards-based approach when analyzing regulations in reviews. While most of the mandatory review requirements identified by the selected agencies establish standard review factors that agencies should consider when conducting their reviews—which agencies reported following—about half of the agencies within our review have formal policies and procedures that establish a standards-based approach for conducting discretionary reviews. Specifically, the five agencies with formal procedures defined both the steps needed to conduct reviews and the review factors used to assess regulations. Other agencies had not yet established written guidance or policies to guide their conduct of reviews or define the analytical methods and standards they should use to assess regulations. For the agencies that did specify the objectives and review factors they used when conducting reviews, those factors ranged from general (such as identifying whether regulations were still needed or could be simplified) to specific (such as accounting for new developments in practices, processes, and control technologies when assessing emission standards). We observed that the more specific sets of evaluative factors that agencies considered in different types of reviews shared several common elements, such as prompting agencies to consider comments and complaints received from the public and the regulation’s impact on small entities.

Our assessment of a small sample of agency reviews revealed that, even when relevant standards were available to agencies for conducting reviews, they did not always apply them. In one case, the economic analyses conducted in the agency review did not employ some of the relevant best practices identified for these types of analyses in OMB’s guidance. In another case, in conducting a mandated Section 610 review, the agency relied only on public comments to evaluate the regulation and did not provide any additional assessment of the other four factors identified in the mandatory review requirements. According to agency documentation, because the agency received no public comments, it (1) provided no further assessment of the factors and (2) concluded that the regulation did not need changes. Conversely, our review of another agency’s Section 610 assessment of regulations provided an example of how agencies that apply standards can use the reviews to produce substantive results. Because the agency relied on both public comments and its own assessment of the five Section 610 review factors/standards, the agency identified additional changes that would not have been identified if it had relied on only one of these standards. When we assessed whether agencies applied other generally accepted review standards, such as identifying how well they implemented the regulation, whether there was a pressing need for the regulation, or whether intended or unintended effects resulted from the regulation, we observed that some agencies’ analyses did not address them.

Most of the agencies within our review had established policies on reporting the results of their mandatory retrospective reviews to senior management officials and to the public, although in some cases, there was no requirement to do so. For example, Section 610 requires federal agencies to report on the initiation of a review, but does not require agencies to report the findings or policy decisions resulting from the review.
Nevertheless, as a matter of practice, all of the agencies within our review reported information on the results of their Section 610 reviews, though the content and level of detail varied. Conversely, about half of the agencies in our review had not established written policies or procedures for reporting the results of their discretionary retrospective reviews to the public. As a result, we found inconsistencies in practices within agencies on how and whether they reported the results of these reviews to the public, resulting in less transparency in this process. For example, agencies within DOT and USDA, and some within DOL, indicated that, at times, they report the results of discretionary reviews in the preambles of proposed rule changes or through other mechanisms, but they do not consistently do so. Agencies also reported that they often do not report the results of discretionary reviews at all if the reviews do not result in a regulatory change. This lack of transparency may explain assertions from nonfederal parties that they were unaware that agencies conducted discretionary reviews of their existing regulations. Figure 1 illustrates the differences in agencies’ use of a standards-based approach in the three phases of the review process, for discretionary and mandatory reviews.

Agency practices varied for soliciting and incorporating public input during the selection of regulations to review. For example, in 2005 DOT formally requested nominations from the public on which regulations it should review. Further, in the 2006 semiannual Unified Agenda, the agency sought public suggestions for which regulations it should review. However, agencies in our review more often reported that they solicit public input on which regulations to review during informal meetings with their regulated entities for their discretionary reviews. Techniques that some of the agencies used to obtain public input included informal networks of regulated entities, agency-sponsored listening sessions, and participation in relevant conferences. For example, USDA officials reported that they regularly meet with industry committees and boards and hold industry listening sessions and public meetings to obtain feedback on their regulations. DOJ’s DEA reported holding yearly conferences with industry representatives to obtain their feedback on regulations. While almost all of the agencies in our review reported soliciting public input in a variety of ways for their discretionary reviews, SBA relied primarily on the Federal Register’s Unified Agenda to inform the public about its reviews. For mandatory reviews, agencies appeared to do less outreach to obtain public input on the selection of regulations to review. However, it is important to note that such public input into the selection phase of mandated reviews may not be as appropriate because agencies have less discretion in choosing which regulations to review under specific mandates. Almost all agencies within our review reported soliciting public input into the conduct of their mandatory reviews, either through notices in the Federal Register, in more informal settings such as roundtable discussions with industries, or both. For these reviews, agencies appeared to almost always place notices in the Federal Register, including the semiannual Unified Agenda, soliciting public comments, but nonfederal parties cited these tools as being ineffective in communicating with the public because these sources are too complicated and difficult to navigate.
Our review of the Federal Register confirmed that agencies often provided such notice and opportunity for public comment on these regulatory reviews. In addition, some agencies, such as DOJ’s ATF, USDA’s AMS, and FCC, reported on their analysis of the public comments and the comments’ effect on the outcome of the review. However, we were not always able to track such notices or discussions of public input into the conduct of discretionary reviews. Officials from DOL’s MSHA and ETA stated that, if internal staff generated the review and it was technical in nature, the agency might not include the public when conducting a review. However, we were able to observe that some agencies, such as FCC, DOT, and EPA, did post comments received from petitioners and solicit public comments on these types of discretionary reviews in order to inform their analyses. Most agencies within our review did not solicit or incorporate public input on the reported results of their reviews. We are aware of only a few agencies (NHTSA, FCC, and EPA) that provided opportunities for additional public feedback on the analysis of their regulations before making final policy decisions. Figure 2 illustrates the differences in agencies’ incorporation of public involvement in the three phases of the review process, for discretionary and mandatory reviews.

Agency documentation for selecting regulations to review varied from detailed documentation of the selection criteria considered to no documentation of the selection process. For example, DOL’s EBSA documented its selection process, including the selection criteria used, in detail in its Regulatory Review Program. However, agencies did not always have written procedures for how they selected regulations for discretionary reviews. SBA officials, for example, were not able to verify the factors they considered during the selection of some regulations that they reviewed because the employees who conducted the reviews were no longer with the agency and had not always documented the review process. The officials indicated that the agency considered whether the regulations that they reviewed under Section 610 requirements were related to or complemented each other, but did not always document selection factors for discretionary reviews. This lack of documentation was particularly important for SBA because the agency reported having high staff turnover that made it difficult to maintain institutional knowledge about retrospective regulatory review plans. For example, officials reported that within the 8(a) Business Development program, there is a new Director almost every 4 years who sets a new agenda for retrospective regulatory review needs. However, because of other pressing factors, these reviews are often not conducted. Consequently, we conclude that this lack of documentation may result in duplicative agency efforts to identify rules for review or in the omission of rules that the agency previously identified as needing review.

Agency documentation of the analyses conducted in reviews ranged from no documentation to detailed documentation of analysis steps in agency review reports. While some agencies documented their analysis in great detail in review reports, others summarized it in a paragraph or provided no documentation at all. Some agencies did not provide detailed reports because they did not conduct detailed analyses.
For example, SBA officials reported that, for Section 610 reviews, they do not conduct any additional analysis of the regulation if the public does not comment on the regulation. Our assessment of a sample of agency review reports revealed that, even for some reviews that provided a summary of their analysis, we could not completely determine what information was used and what analysis the agency conducted to form its conclusions. Further, agencies in our review reported that they less often documented the analysis of those reviews conducted on a discretionary basis. One SBA official acknowledged that it would be helpful if the agency better documented its reviews. For each of the agencies we reviewed, we were able to find reports on the results of some or all of their completed reviews. Nonetheless, the content and detail of agency reporting varied, ranging from detailed reporting to only one-sentence summaries of results. Some agencies told us that they typically document and report the results only if their reviews result in a regulatory change. Further, officials from many agencies reported conveying primarily the results of mandatory reviews to the public. Agencies employed a variety of methods to report review results to the public, but most often used the Federal Register and Unified Agenda. Although agencies in our review often reported the results of their mandatory reviews by posting them in the Federal Register, agencies such as OSHA, CPSC, FCC, and those within DOT also made some or all of their review reports available on their Web sites. During our joint agency exit conference, officials indicated that agencies could do more to report their review analysis and results to a wider population of the public by using the latest information technology tools. Specifically, they said that agencies could (1) use listserves to provide reports to identified interested parties, (2) make review analysis and results more accessible on agency Web sites, and (3) share results in Web-based forums, among other things. Nonfederal parties also reported that agencies could improve their efforts to report review results to the public and cited similar communication techniques. Additionally, nonfederal parties reported that agencies could improve communication by conducting more outreach to broad networking groups that represent various stakeholders, such as the Chamber of Commerce, the National Council of State Legislators, and the Environmental Council of States, and tailoring their summary of the review results to accommodate various audiences. Figure 3 illustrates the differences in agencies’ use of documentation in the three phases of the review process, for discretionary and mandatory reviews.

Agency reviews of existing regulations resulted in various outcomes—from amending regulations to no change at all—that agencies and knowledgeable nonfederal parties reported were useful. Mandatory reviews most often resulted in no changes to regulations. Conversely, agency officials reported that their discretionary reviews more often generated additional action. Both agency officials and nonfederal parties generally considered reviews that addressed multiple purposes more useful than reviews that focused on a single purpose. Agency reviews of existing regulations resulted in various outcomes, including changes to regulations, changes or additions to guidance and other related documents, decisions to conduct additional studies, and validation that existing rules were working as planned.
Agencies and nonfederal parties that we interviewed reported that each of these outcomes could be valuable to the agency and the public. However, for the mandatory reviews completed within our time frame, the most common result was a decision by the agency that no changes were needed to the regulation. There was a general consensus among officials across the agencies that the reviews were sometimes useful, even if no subsequent actions resulted, because they helped to confirm that existing regulations were working as intended. Officials of some agencies further noted that, even when mandatory reviews do not result in changes, they might have already made modifications to the regulations. Our examinations of selected completed reviews confirmed that this is sometimes the case.

Our review of agency documentation confirmed that some reviews can prompt potentially beneficial regulatory changes. For example, OSHA’s review of its mechanical press standard revealed that the standard had not been implemented since its promulgation in 1988 because it required a validation that was not available to companies. Consequently, OSHA is currently exploring ways to revise its regulation to rely upon a technology standard that industries can utilize and that will provide for additional improvements in safety and productivity.

Agency officials reported that their discretionary reviews resulted in additional action—such as prompting the agencies to complete additional studies or to initiate rulemaking to amend the existing rule—more often than mandatory reviews. In particular, officials from USDA’s AMS and FSIS, FCC, SBA, EPA, DOJ, and DOT reported that Section 610 reviews rarely resulted in a change to regulations. Although AMS has initiated 19 Section 610 reviews since 2001, AMS officials reported that, because of their ongoing engagement with the regulated community, these reviews did not identify any issues that the agency was not already aware of, and therefore resulted in no regulatory changes. Similarly, none of the Section 610 reviews conducted by SBA and DOL’s EBSA resulted in changes to regulations, and few changes resulted from Section 610 reviews conducted by EPA, FCC, DOJ, and DOT. The one apparent outlier in our analysis was FDIC, which conducted many of its reviews in response to the financial-sector-specific burden reduction requirement in EGRPRA.
According to FDIC officials and the agency’s 2005 annual report, reviews conducted in response to this mandate resulted in at least four regulatory changes by the agency since 2001 and over 180 legislative proposals for regulatory relief that FDIC and other members of the FFIEC presented to Congress. The legislative proposals led to the passage of the Financial Services Regulatory Relief Act, which reduced excessive burden in nine areas in the financial sector. In addition, our analyses of the December 2006 Unified Agenda revealed that FDIC attributed four of its nine proposed or initiated modifications to existing regulations to statutory mandates.

Most agencies’ officials reported that reviews they conducted at their own discretion—in response to technology and science changes, industry feedback, and petitions—more often resulted in changes to regulations. As one of many examples, EBSA officials reported that, because the reviews initiated and conducted by the agency to date have been precipitated by areas for improvement identified by the regulated community or the agency, virtually all the reviews have resulted in changes to the reviewed rules. They reported that, in general, these changes have tended to provide greater flexibility (e.g., the use of new technologies to satisfy certain disclosure and recordkeeping requirements) or to streamline and/or simplify requirements (e.g., reducing the amount of information required to be reported). Similarly, DOT officials and other agencies’ officials reported that reviews conducted in response to industry and consumer feedback and harmonization efforts also resulted in changes to regulations more often than mandated reviews. In addition, some agencies reported that reviews incorporating both the factors required by their mandatory review requirements and factors the agency identified in response to informal feedback often resulted in useful regulatory changes. For example, DOL’s OSHA and EBSA selected regulations for review based upon criteria that they independently identified and selection criteria identified by Section 610 requirements. They also incorporated review factors listed in Section 610 requirements into a broader set of evaluative factors considered during their discretionary reviews, including assessments of (1) whether the regulation overlaps, duplicates, or conflicts with other federal statutes or rules and (2) the nature of complaints against the regulation. As a result, they reported that these reviews generated useful outcomes. Nonfederal parties also indicated that reviews that focus on multiple review factors and purposes are more useful than reviews that focus on a single purpose (such as only burden reduction or only enforcement and compliance) or a single factor (such as public comments).

Because agencies did not always document the discretionary reviews that they conducted, it is not possible to measure the actual frequency with which such reviews resulted in regulatory change. However, we observed that, for cases where agencies reported modifications to regulations, these actions were most often attributed to factors that agencies addressed at their own discretion, such as technology changes, harmonization efforts, informal public feedback, and petitions.
For example, although EPA officials reported that they have many mandatory regulatory review requirements, our review of proposed or completed modifications to existing regulations reported in the December 2006 Unified Agenda showed that 63 of the 64 modifications reported were attributed to reasons associated with agencies’ own discretion. As illustrated in figure 4, other agencies within our review had similar results. Although agencies reported, and our analysis of the Unified Agenda indicated, that agencies more often modify existing regulations for reasons attributed to their own discretion, it is important to note that mandatory reviews may serve other valuable purposes for Congress. Such reviews may provide Congress with a means for ensuring that agencies conduct reviews of regulations in policy areas that are affected by rapidly changing science and technology and that agencies practice due diligence in reviewing and addressing outdated, duplicative, or inconsistent regulations. For example, Congress required FCC to conduct reviews of its regulations that apply to the operation or activity of telecommunication service providers to “determine whether any such regulation is no longer necessary in the public interest as the result of meaningful economic competition between providers of such service.” Agencies’ officials reported that reviews often had useful outcomes other than changes to regulations, such as changes or additions to guidance and other related documents, decisions to conduct additional studies, and validation that existing rules were working as planned. For example, OSHA officials reported that, outside of regulatory changes, their reviews have resulted in recommended changes to guidance and outreach materials and/or the development of new materials or validation of the effectiveness of existing rules. Our review of OMB’s regulatory reform nominations process confirmed that at least four of OSHA’s reviews conducted in response to OMB’s manufacturing reform initiative resulted in changes to or implementation of final guidance or the development of a regulatory report. We observed similar results from OMB’s regulatory reform process for EPA. Similarly, DOT officials reported that their reviews also often led to changes in guidance or in further studies, and our examination of review results reported by DOT confirmed that this was often the case. Moreover, all of the agencies within our review reported that reviews have resulted in validating that specific regulations produced the intended results. Agencies’ officials reported that barriers to their ability to conduct and use reviews included: (1) difficulty in devoting the time and staff resources required for retrospective review requirements, (2) limitations on their ability to obtain the information and data needed to conduct reviews, and (3) constraints in their ability to modify some regulations without additional legislative action, among other important factors. Both agencies and nonfederal parties identified the lack of public participation in the review process as a barrier to the usefulness of reviews. The nonfederal parties also identified the lack of transparency in agency review processes as a barrier to the usefulness of reviews. 
Agency officials and nonfederal parties also suggested a number of practices that could facilitate the conduct of regulatory reviews and improve their usefulness, including (1) developing a prioritization process to help agencies address time and resource barriers and target their efforts at reviews of regulations that are more likely to need modifications, (2) pre-planning for regulatory reviews to aid agencies in identifying the data and analysis methodology that they will need to conduct effective reviews, and (3) utilizing independent parties to conduct the reviews to enhance the reviews’ credibility and effectiveness, among other things. While there was general consensus among federal and nonfederal parties on the major facilitators and barriers, there were a few clear differences of opinion between them regarding public participation and the extent to which reviews should be conducted by independent parties. Because only a few agencies track the costs associated with conducting their reviews, one cannot identify which type of or approach to retrospective review may be most cost-effective. However, agency officials told us that the reviews have resulted in cost savings to their agencies and to regulated parties, for example, by saving both the agency and the public the costs of repeatedly dealing with petitions for change or waivers in response to difficulties implementing particular regulatory provisions.

All of the agencies in our review reported that a lack of time and resources is the most critical barrier to their ability to conduct reviews. Specifically, they said that it is difficult to devote the time and staff resources required to fulfill various retrospective review requirements while carrying out other mission-critical activities. Agencies’ officials reported that, consequently, they had to limit their retrospective review activities during times when they were required to respond to other legislative priorities. For example, officials from MSHA reported that they conducted fewer reviews in 2006 because they were heavily engaged in trying to implement the Mine Improvement and New Emergency Response Act of 2006 (MINER Act), which Congress passed in response to mining accidents that occurred in 2006. Prior to these events, MSHA was engaged in soliciting a contractor to assist the agency in prioritizing its retrospective review efforts. The officials reported that, because of the need to develop regulations pursuant to the act, they stopped the process of looking for a contractor and conducted fewer reviews. Officials from various agencies reported that retrospective reviews are the first activities cut when agencies have to reprioritize based upon budget shortfalls. A DOT official reported that, despite having high-level management support for retrospective review activities, the department has still experienced funding limitations that have affected its ability to conduct retrospective review activities. Our examination of agency documents confirmed that several agencies indicated that they did not complete all of the reviews that they planned and scheduled for some years within the scope of our review because sufficient resources were not available. In one example, we found that FAA delayed conducting any planned reviews for an extended period because, as reported in the Unified Agenda, it did not have the resources to conduct them.
Many of the agencies in our review did not track the costs (usually identified in terms of full-time equivalent (FTE) staff resources) associated with their reviews; therefore, they could not quantify the costs of conducting reviews. Most agencies’ officials reported that they lack the information and data needed to conduct reviews. Officials reported that a major data barrier to conducting effective reviews is the lack of baseline data for assessing regulations that they promulgated many years ago. Because of this lack of data, agencies are unable to accurately measure the progress or true effect of those regulations. Similar data collection issues were also identified by agencies in the Eisner and Kaleta study published in 1996, which concluded that, in order to improve reviews for the future, agencies should collect data to establish a baseline for measuring whether a regulation is achieving its goal, and identify sources for obtaining data on ongoing performance. Agencies and nonfederal parties also considered PRA requirements to be a potential limiting factor in agencies’ ability to collect sufficient data to assess their regulations. For example, EPA officials reported that obtaining data was one of the biggest challenges the Office of Water faced in conducting its reviews of the effluent guideline and pretreatment standard under the Clean Water Act, and that as a result the Office of Water was hindered or unable to perform some analyses. According to the officials, while EPA has the authority to collect such data, the PRA requirements and associated information collection review approval process take more time to complete than the Office of Water’s mandated schedule for annual reviews of the effluent guideline and pretreatment standard allows. While one nonfederal party did not agree that PRA restrictions posed a significant barrier to conducting reviews, agencies and nonfederal parties generally agreed that the act was an important consideration in agency data collection. However, while agencies identified the potential limitations of PRA, it is important to recognize that PRA established standards and an approval process to ensure that agencies’ information collections minimize the federal paperwork burden on the public, among other purposes. In general, data collection appeared to be an important factor that either hindered or facilitated reviews. Some of the agencies in our review that promulgate safety regulations, such as CPSC, NHTSA, and those within DOJ, reported that having sufficient access to established sources of safety data, such as death certificates or hospital databases on deaths and injuries related to products, greatly facilitated their ability to conduct retrospective reviews of their regulations. Finally, agencies also reported facing limits on their ability to obtain data on their regulations because of the length of time it takes to see the impact of some regulations and the scarcity of data related to areas that they regulate. Nonfederal parties also cited this data limitation as a challenge to agency reviews. To make efficient use of their time and resources, various agency officials said that they consider all relevant factors, including effectiveness and burden reduction, whenever they review an existing regulation. 
Therefore, when a review with a predetermined or generic schedule and review factors (such as a 10-year Section 610 review) comes due, the agency might have already reviewed and potentially modified the regulation one or more times, based upon the same factors outlined in Section 610. The officials reported that, although the subsequent predetermined reviews are often duplicative and less productive, they nevertheless expend the time and resources needed to conduct the reviews in order to comply with statutory requirements. However, they reported that these reviews were generally less useful than reviews prompted by informal industry and public feedback, petitions, changes in the market or technology, and other reasons. Furthermore, agencies expressed concerns that predetermined schedules may conflict with other priorities. DOT acknowledged this issue even as it was establishing an agency policy to require retrospective reviews. In response to a public suggestion that DOT conduct reviews based upon a regular predetermined schedule, the agency cautioned that arbitrary schedules might mean delaying other, more important regulatory activities.

As examples of predetermined reviews that may be duplicative or unproductive, officials from agencies within DOT, USDA, and DOL reported that the regulations that most often apply to their industries may need review sooner than the 10-year mark prescribed by Section 610. To be responsive to the regulated community, the agencies regularly review their regulations in response to public feedback, industry and technology changes, and petitions, among other things, and make necessary changes before a Section 610 review would be required. Our assessment of reviews listed in the Unified Agenda confirmed that agencies often noted that they had not made changes because of their Section 610 reviews, but had previously made changes to these regulations in response to factors that had emerged earlier. For example, USDA’s AMS reported completing 11 mandated Section 610 reviews since 2001, which resulted in no regulatory changes. For 9 of these reviews, the related published Section 610 reports stated that AMS made no changes to the regulations because they had been modified “numerous times” in advance of the 10-year Section 610 review to respond to changes in economic and other emerging conditions affecting the industry.

Consistent with agency views on timing, an OMB official and some nonfederal parties indicated that the period immediately after an agency promulgates a rule may be a critical time for agencies to review certain types of regulations, in part because once the regulated community invests the resources to comply with the regulations and integrates them into its operations, it is less likely to support subsequent changes to the regulation. In addition, the immediate effects of certain types of regulations, such as economic incentive regulations, may be more apparent, and changes, if needed, can be brought about sooner. Nonfederal parties reported that this may be especially important while regulated entities are facing challenges with the implementation of a regulation. Some of these commenters noted that such immediate reviews might be especially appropriate for rules that have a high profile, are controversial, or involve a higher degree of uncertainty than usual.
Two agencies within our review that had predetermined review deadlines set only a few years apart also reported that these schedules affected their ability to produce more useful reviews. The officials reported that they do not have enough time to effectively complete one review before beginning another. For example, EPA and FCC both stated that agency-specific requirements to review their regulations every few years make it difficult for them because they do not have enough time either to effectively gather data for the reviews or to observe new effects of the regulation between reviews. As a result, the agencies may be doing a less comprehensive job in conducting the reviews and have more difficulty in meeting their review deadlines. For requirements that specify a predetermined schedule for conducting reviews, agencies also identified, as a potential barrier, the lack of clarity on when to “start the clock” for regulations that have been amended over time. For example, as previously mentioned, in order to satisfy Section 610 requirements, DOT initiated an extensive process for reviewing its sections of the CFR every year. The agency’s officials reported that they adopted this extensive approach because they were unable to determine whether to review a regulation 10 years after its promulgation or 10 years after its last modification. Other agencies included in our review did not take this approach to meeting Section 610 requirements. Similarly, in our 1999 report on RFA, we reported that agencies’ varying interpretations of Section 610 requirements affected when they conducted reviews. While agencies’ officials reported that predetermined schedules can sometimes be ineffective, it is important to note that such schedules can also help ensure that reviews occur. Specifically, some parties have noted that a benefit of prespecifying the timing of reviews is that this provides Congress with a way to force agencies to periodically reexamine certain regulations. In general, as illustrated in table 3, our review of the timing of reviews and the evaluative factors that agencies are supposed to assess in those reviews revealed that there is considerable overlap in the various mandatory and discretionary review requirements.

Various agencies identified scoping issues as a barrier to the usefulness of reviews. Agencies’ officials reported significant delays in completing reviews and making timely modifications, as well as difficulty obtaining meaningful input, in reviews that involved multiple regulations as the unit of analysis. Some agencies, such as DOL’s MSHA, reported experiencing delays of up to 16 years in completing a review because they scoped the review too broadly. Specifically, MSHA officials reported that, during a comprehensive review of its ventilation standards, the scope of the review increased due to input from other departmental agencies. Because of this input and the complexity of the rule itself, it took 16 years to complete the modifications, resulting in a major rewrite of the ventilation standards. In our assessment, the information resulting from this review was not as timely as it otherwise could have been and therefore may have been less useful. Similarly, officials from other agencies reported that scoping reviews too broadly also affected their ability to complete reviews in a timely manner.
Agencies’ officials suggested that having a narrow and focused unit of analysis, such as a specific standard or regulation, is a more effective approach to conducting reviews. Specifically, officials from DOT and FDIC reported that, when they conducted narrowly defined reviews, the public provided more meaningful input on their regulations. Furthermore, one nonfederal party emphasized that, when agencies choose a broad unit of analysis, such as an entire act, it is difficult for the public to discern which regulations are doing well and which are not. The positive effects of one regulation under the legislation can overshadow the negative effects of other regulations. Therefore, the performance assessment of the relevant regulations is less transparent and, consequently, less useful.

Agencies’ officials reported that statutory requirements are a major barrier to modifying or eliminating regulations in response to retrospective regulatory reviews because some regulations are so closely aligned with specific statutory provisions that the agencies may be constrained in the extent to which they can modify them without legislative action. For example, officials from MSHA, FDIC, and SBA reported that many of their regulations mirror their underlying statutes and cannot be modified without statutory changes. During its retrospective reviews to reduce burden, FDIC, along with other banking agencies within the FFIEC, identified 180 financial regulations that would require legislative action to revise. Similarly, in our 1999 report on regulatory burden, we found that agencies often had no discretion, because of statutory provisions, when they imposed requirements that businesses reported as most burdensome. One approach taken by FDIC to address this issue was to identify, during its review process, regulations that required legislative action and to coordinate with Congress on these potential regulatory changes. Because of this approach, Congress is actively involved in FDIC’s regulatory burden relief efforts and has passed changes in legislation to provide various forms of burden relief to the financial sector.

Agencies and nonfederal parties identified the lack of public participation in the review process as a barrier to the usefulness of reviews. Agencies stated that, despite extensive outreach efforts to solicit public input, they receive very little participation from the public in the review process, which hinders the quality of the reviews. Almost all of the agencies in our review reported actively soliciting public input into their formal and informal review processes. They reported using public forums and industry meetings, among other things, to solicit input into their discretionary reviews, and primarily using the Federal Register and Unified Agenda to solicit public input for their mandatory reviews. For example, USDA officials reported conducting referenda of growers to establish or amend AMS marketing orders, and CPSC officials reported regularly meeting with standard-setting consensus bodies, consumer groups, and regulated entities to obtain feedback on their regulations. Other agencies reported holding regular conferences, forums, or other public meetings. However, most agencies reported primarily using the Unified Agenda and Federal Register to solicit public comments on mandatory reviews, such as Section 610 reviews. Despite these efforts, agency officials reported receiving very little public input on their mandatory reviews.
Nonfederal parties we interviewed were also concerned about the lack of public participation in the retrospective review process and its impact on the quality of agency data used in reviews. However, these nonfederal parties questioned the adequacy and effectiveness of agencies’ outreach efforts. Specifically, 7 of the 11 nonfederal parties cautioned that the Federal Register and Unified Agenda are not sufficiently effective tools for informing the public about agency retrospective review activities. In addition, most of the nonfederal parties we interviewed were unaware of the extent to which agencies conducted reviews under their own discretion, and most of those parties reported that they were not aware of the outreach efforts agencies are making to obtain input for these reviews. Limited public participation in some review activities was cited by both agencies and nonfederal parties as a barrier to producing quality reviews, in part because agencies need the public to provide information on the regulations’ effects. Both agency officials and nonfederal parties identified methods for improving communication, including using agency Web sites, e-mail listserves, or other Web-based technologies (such as Web forums), among other things. Nonfederal parties identified the lack of transparency in agency review processes, results, and related follow-up activities as a barrier to the usefulness of reviews to the public. Nonfederal parties were rarely aware of the retrospective review activities reported to us by the agencies in our review. Similarly, in our review of the Federal Register and Unified Agenda, we were not always able to track retrospective review activities, identify the outcome of the review, or link review results to subsequent follow-up activities, including initiation of rulemaking to modify the rule. As stated earlier, some mandatory reviews do not require public reporting and many agencies did not consistently report the results of their discretionary reviews, especially if the reviews resulted in no changes to regulations. Some nonfederal parties told us that lack of transparency was the primary reason for the lack of public participation in agencies’ review processes. Agencies and nonfederal parties identified pre-planning for regulatory reviews as a practice that aids agencies in identifying the data and analysis methodology that they need to conduct effective outcome-based performance reviews. Some agencies within our review planned how they would collect performance data on their regulations before or during the promulgation of the relevant regulations or prior to the review. They cited this technique as a method for reducing data collection barriers. For example, DOT’s NHTSA was an agency that OMB officials and nonfederal parties identified as appearing to conduct effective retrospective reviews of its regulations. NHTSA officials reported to us that, to conduct effective reviews, they plan for how they will review their regulations even before they issue them. Prior research on regulatory reviews also cited the need for agencies to set a baseline for their data analysis, in order to conduct effective reviews. In addition, we have long advocated that agencies take an active approach to measuring the performance of agency activities. Furthermore, we observed that pre-planning for data collection could address some challenges that agencies reported facing with PRA data collection requirements, such as the length of time required to obtain approval. 
Agencies reported that prioritizing which regulations to review facilitated the conduct of their reviews and improved their usefulness. Agencies that developed review programs with detailed processes for prioritizing which regulations to review reported that this prioritization facilitated their ability to address time and resource barriers to conducting reviews and allowed them to target their efforts at more useful reviews of regulations that were likely to need modifications. As previously mentioned, DOL’s EBSA and CPSC developed detailed prioritization processes that allowed officials to identify which regulations were most in need of review and to plan for conducting those reviews. Furthermore, this process allowed CPSC to prospectively budget for its reviews and to identify the number of substantive reviews per year that the agency could effectively conduct, while meeting its other priorities. Officials from both agencies reported that their prioritization processes allowed them to focus on the most useful retrospective review activities, which identified important regulatory changes. Nonfederal parties that we interviewed also asserted that it is not necessary or even desirable for agencies to expend their time and resources reviewing all of their regulations. Instead, they reported that it would be more efficient and valuable to both agencies and the public for agencies to conduct substantive reviews of a small number of regulations that agencies and the public identify as needing attention. Nonfederal parties and agency officials suggested that factors that agencies should consider when prioritizing their review activities could include economic impact, risk, public feedback, and length of time since the last review of the regulation, among other things.

Nonfederal regulatory parties believed that reviews would be more credible and effective if the parties that conducted them were independent. For example, two different parties whom we interviewed said that EPA’s first report in response to Section 812 under the Clean Air Act could have been improved by involving independent analysts. However, they recognized that it is important to include input from those who were involved in the day-to-day implementation of the regulation and were responsible for producing initial benefit-cost estimates for the regulations. Almost all of the nonfederal parties that we interviewed expressed concern that agency officials who promulgated and implemented regulations may be the same officials who are responsible for evaluating the performance of these regulations. Although the nonfederal parties acknowledged that it is important for officials with critical knowledge about the program to be involved with providing input into the review, they were concerned that officials placed in this position may not be as objective as others might be. Nonfederal parties also expressed concerns about agencies’ capacity to conduct certain types of analyses for their reviews, such as benefit-cost assessments. The nonfederal parties suggested that agencies could consider having an independent body, such as another agency, an Inspector General, or a centralized office within the agency, conduct the reviews. During our review, agencies’ officials reported that they sometimes contract out their reviews if they do not have the expertise needed to conduct the analyses.
However, during a discussion of this issue at our joint agency exit meeting, agency officials pointed out the difficulty in finding a knowledgeable independent review body to conduct retrospective reviews, and they noted that even contracted reviewers may be considered less independent, because they are paid by the agency to conduct the study.

Agencies and nonfederal regulatory parties agreed that high-level management support in the review process is important not only to the successful implementation of individual reviews but also to sustaining the agency’s commitment to a review program and to following up on review results. As an example, officials from FDIC credited the accomplishments of their review program largely to the support of high-level managers who headed the FFIEC effort to reduce regulatory burden on financial institutions. Officials reported that the leadership of the Director of the Office of Thrift Supervision, who chaired the FFIEC effort, helped to catapult support for reviews at all of the FFIEC agencies, including FDIC, and helped to free up resources to conduct reviews at these agencies. Almost all of the selected agencies reported involving high-level management in their reviews to some degree, but where and how they used this involvement varied. For example, while almost all of the agencies reported involving high-level managers in decision-making processes that resulted from reviews, CPSC’s and EBSA’s review programs also involved high-level managers early in the process, in order to determine which regulations to review. Overall, agencies and nonfederal parties indicated that having high-level management attention is important both to obtaining and sustaining the resources needed to conduct reviews and to the credibility of agency reviews.

Officials from DOT, DOL, SBA, and FDIC reported learning that grouping related regulations together when conducting reviews more often generated meaningful comments and suggestions from the public. For example, officials from FDIC stated that categorizing regulations for review and soliciting input over an extended time period proved to be a more effective way of receiving public input. They reported that placing regulations into smaller groups and soliciting feedback on these categories separately over a 3-year period helped the members of the FFIEC to avoid overwhelming the public with the regulatory reviews, and allowed the agencies to receive more thoughtful participation and input. SBA officials reported reviewing related regulations together because a change to one rule can have an impact on the related rules. Similarly, a DOT official reported that grouping similar regulations together to solicit public input was an effective technique for FAA because the agency regulates a broad policy area. FAA received 1,800 suggestions for regulatory changes based upon one such review. However, the official cautioned that while grouping regulations is an effective technique for obtaining useful public input, defining the categories too broadly can lead to an effort that is too intensive. In addition, the practice may be less convenient and practical for agencies that write very specific standards, such as NHTSA. For these agencies, it may be more effective to group regulations for review by related characteristics of the rules.
Nonfederal parties suggested that agencies need to be more aware of the different audiences that might be interested in their reviews, and target the level of detail and type of product used to report the results to meet the needs of these various audiences. For example, a product that focuses on specific details of implementing a regulation may be less useful to those interested in the policy effects of a regulation, and vice versa. Further, both agency officials and nonfederal parties identified methods for improving communication, including better use of information technology tools, such as agency Web sites, electronic docket systems, e-mail listserves, Web-based forums, or other Web-based technologies.

Agencies have not estimated all of the costs and benefits associated with conducting retrospective reviews, but they believe that the reviews have resulted in cost savings for their agencies. For example, MSHA officials reported that their retrospective regulatory reviews related to petitions for modification produce savings for the agency because the reviews prompt the agency to review and modify regulations that are heavily petitioned, which reduces costs associated with reviewing similar petitions. They reported that these reviews also save the mining industry from the costs associated with repeatedly filing petitions. In addition to petition-related cost savings, agencies could reduce the costs associated with implementing and enforcing outdated or unproductive regulations by reviewing and eliminating regulations that are no longer useful. We found that only a few agencies track the costs associated with conducting their reviews, so we were unable to identify which methods are most cost-effective. Some agency officials, such as those in MSHA, reported that tracking direct costs associated with reviews is difficult because reviews are conducted as part of the normal operation of the agencies and in concert with other actions to fulfill the agencies’ missions. However, some agencies, such as CPSC, establish budgets for their reviews and track the associated costs. As a result, CPSC determined that conducting about four regulatory reviews per year was a reasonable effort for the associated expense to the agency. OSHA also tracks the costs associated with its reviews. The agency’s officials told us that each of its reviews typically requires two-thirds of a program analyst FTE in the Office of Evaluations and Audit Analysis, about one-fifth of an attorney FTE in the Office of the Solicitor, one-half of an FTE for the involvement of staff from other directorates, and approximately $75,000 to $100,000 in contractor support. Although agencies did not always track the cost of their reviews, officials reported that they know some reviews are not cost-effective. For example, a USDA official reported that, by nature, some regulations are set up by the agency to be reviewed regularly. Therefore, externally imposed reviews only duplicate this effort. An example of such reviews would be those conducted for regulations that are consistently reviewed by industry committees that are appointed by the Secretary of an agency. AMS officials reported that industry committees appointed by the Secretary of Agriculture oversee many of the agency’s regulations and, as one of their main functions, regularly review AMS regulations to identify needed changes.
Therefore, regulations under the purview of these committees are already constantly being reviewed and updated, and thus may benefit less from a Section 610 review than other regulations. Our review revealed that agencies are conducting more reviews, and a greater variety of reviews, than is readily apparent, especially to the public. To facilitate their reviews, agencies have, to varying extents, been developing written procedures, processes, and standards to guide how they select which rules to review, conduct analyses of those rules, and report the results. Given the multiple purposes and uses of reviews, we recognize that there is no "one size fits all" approach. However, there are lessons to be learned from ongoing regulatory reviews that could benefit both the agencies in our scope and others that conduct retrospective regulatory reviews. Because agencies are attempting to find ways to further develop and improve their retrospective review processes (for example, establishing units that focus on retrospective reviews and seeking assistance with establishing prioritization systems), identifying ways to share promising practices could collectively improve agency review activities. Feedback from agency officials and nonfederal parties, as well as our own analysis, indicates that there are procedures and practices that may be particularly helpful for improving the effectiveness and transparency of retrospective review processes. For example, agencies can be better prepared to undertake reviews if they have identified what data will be needed to assess the effectiveness of a rule before they start a review and, indeed, before they promulgate the rule. If agencies fail to plan for how they will measure the performance of their regulations and what data they will need to do so, they may continue to be limited in their ability to assess the effects of their regulations. Given increasing budgetary constraints, both agency officials and nonfederal parties emphasized the need to better prioritize agency review activities, when possible, to more effectively use their limited resources. Agency officials and nonfederal parties recognize that time and resources are too limited to allow for a regular, systematic review of all of their regulations, and that devoting excessive time and scarce resources to a formal review of every regulation could result in insufficient attention to other regulatory needs or statutory mandates. As we have observed, some agencies are already using such prioritization processes. Without a detailed prioritization system, agencies may not be able to effectively target their reviews so that they devote resources to conducting substantive and useful reviews of the regulations that need the most attention. Agencies and nonfederal parties also reported that reviews are more credible and useful to all parties if agencies have assessed multiple review factors in their analyses of the regulations, rather than relying on a single factor, such as public comments. If agencies fail to do this, their reviews may miss crucial information that could provide context for the results of the analysis, such as a weighing of the regulation's benefits against its burdens. Further, our assessment of the strengths and limitations of agency reviews revealed that agencies could improve their efforts to employ a standards-based approach to conducting discretionary reviews.
Agencies are applying such a standards-based approach inconsistently, even though doing so could enhance the transparency and consistency of reviews. Agencies' reporting of reviews appears largely ineffective. None of the nonfederal parties we contacted were aware of the extent of agency retrospective review activities. This lack of awareness might be attributable to two factors. First, agencies typically did not report results for discretionary reviews, which account for most of agencies' review activities. Therefore, the public cannot be expected to know about these reviews. Second, when agencies do report on their activities, the mode and content of these communications may not be effective. For example, although we found that some agencies used multiple modes of communication, for the most part agencies reported that they rely heavily on the Federal Register. However, nonfederal parties indicated that reliance on the Federal Register is not sufficient. Further, the content that agencies do publish does not always provide adequate information about the analysis and results of the reviews. Our own assessment showed that it was sometimes difficult to determine the outcomes of the reviews or the bases for the agencies' conclusions. Some agencies have employed multiple communication modes and provided detailed content in their reports, but still report disappointing levels of public participation. Therefore, it is clear that agencies need to continue to explore methods to more effectively communicate and document information about their reviews and the underlying analyses. According to agency officials and nonfederal parties, such methods could include using agency Web sites, e-mail listserves, or other Web-based technologies (such as Web forums). When agencies do not effectively communicate the analysis and results of their reviews, they miss the opportunity to obtain meaningful comments that could affect the outcome of their reviews. Further, without showing the underlying analysis of reviews, the agencies' conclusions may lack credibility. Agencies and nonfederal parties also emphasized the importance of having high-level support for sustaining agency retrospective review activities and increasing their credibility with the public. Without such attention, agencies will face difficulties in making retrospective review a priority that receives the resources necessary for conducting successful reviews. Agencies provided specific examples that illustrated how high-level management support helped to ensure that they followed through on the results of regulatory reviews. Although agency officials cautioned that even high-level management support might not be sufficient to overcome all budgetary constraints, having such support may ensure that some retrospective review activity will be sustained. One of the most striking findings during our review was the disparity in the perceived usefulness of mandatory versus discretionary regulatory reviews. The agencies characterized the results of their discretionary reviews as more productive and more likely to generate further action. A primary reason for this appears to be that discretionary reviews that address changes in technology, advances in science, informal agency feedback, harmonization efforts, and petitions, among other things, may be more closely attuned to addressing issues as they emerge.
While agencies' officials reported that their discretionary reviews might be more useful than the mandatory reviews, we cannot definitively conclude which reviews are most valuable. We did not assess the content and quality of discretionary reviews, and could not have done so because they often were not documented. Although the officials reported that the bulk of their review activity is associated with discretionary reviews, they could not provide evidence to show definitively that this was so or that discretionary reviews more often generated useful outcomes. Further, one cannot dismiss the value that Congress anticipated when establishing the mandatory requirements for agencies to conduct reviews for particular purposes and on particular schedules. The predetermined time frames of mandatory reviews can both help and hinder. On one hand, predetermined schedules are one means by which Congress can force agencies to periodically reexamine certain regulations. However, the timing for some mandatory reviews may either be too short or overlap with other review requirements, making it more difficult for agencies to produce meaningful analysis from their reviews. Conversely, from the cursory information that agencies reported for some mandatory reviews that have review periods as long as 10 years, it appears that agencies may devote limited time and resources to conducting these reviews, perhaps partly because the required timelines do not recognize ongoing changes to regulations. Further, the criteria used in mandatory and discretionary reviews may be duplicative. In general, our review of the timing of reviews and the evaluative factors that agencies are supposed to assess in those reviews revealed that there is considerable overlap in the various mandatory and discretionary review requirements. To make efficient use of their time and resources, agency officials said that they consider all relevant factors, including effectiveness and burden reduction, whenever they review an existing regulation. Therefore, when there are duplicative review factors (such as assessing whether the rule is still needed, is overly burdensome, or overlaps with other regulations), the agency might have already reviewed and potentially modified the regulation one or more times based upon the same factors. The officials reported that, although the subsequent reviews are often duplicative and less productive, they nevertheless expend the time and resources needed to conduct the reviews in order to comply with statutory requirements. Given the long-term fiscal imbalance facing the United States and other significant trends and challenges, Congress and the executive branch need to carefully consider how agencies use existing resources. In particular, overlapping or duplicative reviews may strain limited agency resources. As agencies face trade-offs in allocating these limited resources to conducting mandatory and discretionary reviews, as well as conducting other mission-critical activities, they have to make decisions about what activities will produce the most benefit. In some cases, we observed that agencies such as FAA delayed conducting planned reviews for an extended period because, they reported, they did not have the resources to conduct them. Given the trade-offs that agencies face, it makes sense to consider the appropriate mix of mandatory and discretionary reviews, and other mission-critical activities, that agencies can and should conduct.
More specifically, our findings and analysis suggest that it may be useful to revisit the scope and timing of some review requirements to see whether there are opportunities to consolidate multiple requirements to enhance their usefulness and make them more cost-effective and easier to implement. If the current state of review requirements remains unchanged, agencies may continue to expend their limited time and resources on conducting pro forma reviews that appear to produce less useful results. Further, agencies may also continue to produce less useful results for reviews that they rush to complete, as identified by EPA and FCC officials who reported that their annual and/or biannual review requirements do not provide enough time for them to most effectively complete their reviews and/or observe new changes before starting a subsequent review. While we believe that employing the lessons learned by agencies may improve the effectiveness of their retrospective reviews, we acknowledge that the review of regulations is only one of the tools that agencies will need to fully understand the implications of their regulatory activities. In order to fully assess the performance of regulatory activities, agencies will need to consider the performance of the programs that implement their regulations and the statutes that underlie the regulations. Considering any of these elements in isolation will provide an incomplete picture of the impact of regulations on the public. Likewise, neglecting any of these elements will have the same effect. In order to ensure that agencies conduct effective and transparent reviews, we recommend that both the Director of the Office of Management and Budget, through the Administrator of the Office of Information and Regulatory Affairs, and the Chief Counsel for Advocacy take the following seven actions. Specifically, we recommend that they develop guidance for regulatory agencies to consider or incorporate the following elements, where appropriate, into the policies, procedures, or agency guidance documents that govern their regulatory review activities: 1. Consideration, during the promulgation of certain new rules, of whether and how they will measure the performance of the regulation, including how and when they will collect, analyze, and report the data needed to conduct a retrospective review. Such rules may include significant rules, regulations that the agencies know will be subject to mandatory review requirements, and any other regulations for which the agency believes retrospective reviews may be appropriate. 2. Prioritization of review activities based upon defined selection criteria. These criteria could take into account factors such as the impact of the rule; the length of time since its last review; whether changes to technology, science, or the market have affected the rule; and whether the agency has received substantial feedback regarding improvements to the rule, among other factors relevant to the particular mission of the agency. 3. Specific review factors to be applied to the conduct of agencies' analyses that include, but are not limited to, public input to regulatory review decisions. 4. Minimum standards for documenting and reporting all completed review results. For reviews that included analysis, these minimum standards should include making the analysis publicly available. 5. Mechanisms to assess their current means of communicating review results to the public and identify steps that could improve this communication.
Such steps could include considering whether the agency could make better use of its Web site to communicate reviews and results, establishing an e-mail listserve that alerts interested parties about regulatory reviews and their results, or using other Web-based technologies (such as Web forums) to solicit input from stakeholders across the country. 6. Steps to promote sustained management attention and support to help ensure progress in institutionalizing agency regulatory review initiatives. We further recommend that, in light of overlapping and duplicative review factors in statutorily mandated reviews and the difficulties that agencies identified in conducting useful reviews within predetermined time frames, the Administrator of OIRA and Chief Counsel for Advocacy take the following step. 7. Work with regulatory agencies to identify opportunities for Congress to revise the timing and scope of existing regulatory review requirements and/or consolidate existing requirements. In order to facilitate agencies' conduct of effective and transparent reviews, while maximizing their limited time and resources, Congress may wish to consider authorizing a pilot program with selected agencies that would allow the agencies to satisfy various retrospective review requirements with similar review factors that apply to the same regulations by conducting one review that is reported to all of the relevant parties and oversight bodies. We provided a draft of this report to the Secretary of Agriculture, the Attorney General, the Secretary of Labor, the Secretary of Transportation, the Administrator of EPA, the Administrator of SBA, the Acting Chairman of CPSC, the Chairman of FCC, the Chairman of FDIC, the Director of OMB, and the Chief Counsel for Advocacy for their review and comment. We received formal comments from the SBA Office of Advocacy; the office concurred with the recommendations and, as an attachment, provided a copy of draft guidance that it developed in response to our recommendations (see app. XII). The Office of Advocacy also suggested that it would be more appropriate to direct the recommendations to the Chief Counsel for Advocacy rather than to the Administrator of SBA. Because the Chief Counsel for Advocacy is the official who would need to act upon these recommendations, we made the change. OMB told us that it had reviewed our draft report and had no comments. All other agencies provided technical and editorial comments, which we incorporated as appropriate. In its technical comments, DOT suggested that we expand the recommendation for agencies to identify opportunities for Congress to examine the timing and scope of existing requirements and/or consolidate existing requirements, to include executive agency-mandated reviews. However, the focus of the recommendation is on statutory requirements because they tended to have recurring and/or predetermined review schedules. Therefore, we did not expand the recommendation. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter.
We will then send copies of this report to interested congressional committees, the Secretary of Agriculture, the Attorney General, the Secretary of Labor, the Secretary of Transportation, the Administrator of EPA, the Administrator of SBA, the Acting Chairman of CPSC, the Chairman of FCC, the Chairman of FDIC, the Director of OMB, the Administrator of OIRA, and the Chief Counsel for Advocacy. Copies of this report will also be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XIII. To provide insights concerning how agencies assess existing regulations, congressional requesters asked us to examine agencies' implementation of retrospective regulatory reviews and the results of such reviews. Accordingly, for selected agencies, we are reporting on: 1. the magnitude of retrospective review activity and the types of retrospective reviews agencies completed from calendar year 2001 through 2006, including the frequency, impetus (mandatory or discretionary), and purposes of the reviews; 2. the processes and standards that guide agencies' planning, conduct, and reporting on reviews, and the strengths and limitations of the various review processes and requirements; 3. the outcomes of reviews, including the perceived usefulness of the reviews and how they affected subsequent regulatory activities; and 4. the factors that appear to help or impede agencies in conducting or using retrospective reviews, including which methods, if any, agencies and we identified as most cost-effective for conducting reviews. For purposes of this report, we generally use the term retrospective reviews to mean any assessment of an existing regulation, primarily for determining whether (1) the expected outcomes of the regulation have been achieved; (2) the agency should retain, amend, or rescind the regulation; and/or (3) the actual benefits and costs of the implemented regulation correspond with estimates prepared at the time the regulation was issued. We defined mandatory reviews as retrospective reviews that agencies conducted in response to requirements in statutes, executive orders, or executive branch directives. We defined discretionary reviews as reviews that agencies undertook on their own initiative. For calendar years 2001 through 2006, we assessed the retrospective review activities of nine agencies and their relevant subagencies. The nine agencies included the Departments of Agriculture, Justice, Labor, and Transportation; Consumer Product Safety Commission (CPSC); Environmental Protection Agency (EPA); Federal Communications Commission (FCC); Federal Deposit Insurance Corporation (FDIC); and the Small Business Administration (SBA).
The subagencies covered in detail by our review included USDA’s Animal and Plant Health Inspection Service, Agricultural Marketing Service, and Food Safety and Inspection Service; Department of Justice’s Bureau of Alcohol, Tobacco, Firearms, and Explosives; Department of Labor’s Employee Benefits Security Administration, Occupational Safety and Health Administration, Mine Safety and Health Administration, and Employment and Training Administration; and the Department of Transportation’s Federal Aviation Administration and National Highway Traffic Safety Administration. We selected these agencies because they include Cabinet departments, independent agencies, and independent regulatory agencies covering a wide variety of regulatory activities in areas such as health, safety, environmental, financial, and economic regulation. Further, we selected these agencies because they were actively conducting regulatory reviews or were responsible for responding to multiple review requirements. We were not able to assess the activities of all regulatory agencies, due to time and resource constraints, but given the diversity and volume of federal regulation conducted by the nine selected agencies, we believe that the results of our assessment should provide a reasonable characterization of the variety of retrospective regulatory reviews and the issues associated with their implementation. GAO’s Federal Rules Database, which is used to compile information on all final rules, showed that the nine agencies accounted for almost 60 percent of all final regulations published from 2001 through 2006. However, the volume and distribution of reviews covered in this report are not generalizable to all regulatory reviews governmentwide. To supplement our assessment of these agencies’ activities, we also solicited the perspectives of regulatory oversight entities and nonfederal parties knowledgeable about regulatory issues, such as the Office of Information and Regulatory Affairs within the Office of Management and Budget, the Office of Advocacy within SBA, and 11 nonfederal parties that represented a variety of sectors (academia, business, public advocacy, and state government). To address our first objective, we interviewed and obtained documentation from agency officials as well as other knowledgeable regulatory parties on agency retrospective reviews. We administered and collected responses to a structured data collection instrument that solicited information on agencies’ retrospective review activities and lessons learned. We supplemented this data collection by obtaining information from the Federal Register, Unified Agenda, and published dockets and agency reports. We used information obtained to describe the “types” of reviews that agencies conducted—in terms of impetus (mandatory or discretionary) and purpose (for example, burden reduction or effectiveness). We compared agency review activities in terms of impetus and purpose because important differences can be seen in the processes used, outcomes derived, and lessons learned, based upon these characteristics, which we further analyze in objectives two through four. 
Although we note that reviews can be described and compared using other characteristics, such as policy area assessed in the review (such as health, safety, or economic) or type of analyses conducted (such as economic benefit-cost analysis, other quantitative, and qualitative), we believe our selection of characteristics in describing the types of reviews conducted was most useful and relevant for addressing our objectives. To address our second objective, we interviewed and obtained documentation from agency officials as well as other knowledgeable regulatory parties on agency retrospective reviews. We collected responses to the aforementioned structured data collection instrument that solicited information on agencies’ retrospective review activities and lessons learned. We supplemented this data collection by obtaining information from the Federal Register, Unified Agenda, and published dockets and agency reports. We also reviewed agency policies, executive orders, and statutory requirements to identify policies and procedures that guide the planning, conduct, and reporting of agencies’ reviews. Further, to identify the strengths and limitations of agency review processes, we assessed agencies’ use of three review and economic practices and standards that are important to the effectiveness and transparency of agency reviews, including the (1) use of a standards-based approach, (2) incorporation of public involvement, and (3) documentation of review processes and results. In prior work, we identified some overall strengths or benefits associated with regulatory process initiatives, including: increasing expectations regarding the analytical support for proposed rules, encouraging and facilitating greater public participation in rulemaking, and improving the transparency of the rulemaking process. Because these strengths or benefits are also relevant and useful for assessing agency retrospective review initiatives, we considered them in our selection of assessment criteria for this review. Other practices that could improve the effectiveness and transparency of reviews may exist and could be considered when developing retrospective review processes. However, we believe that the three practices that we assessed are among the most important. While we did not assess whether agencies employed these practices all the time, to the extent possible we did seek documentation and evidence that they were applied. Further, while we assessed whether agencies employed standards-based approaches in their retrospective review processes—within the scope of our review—we did not attempt to assess the quality of such standards. We compared the strengths and limitations of review processes across agencies, types of reviews, and phases of the review process. In our more detailed assessment of a limited sample of retrospective reviews completed between 2001 and 2006, we also evaluated the use of research and economic practices and standards. The sample that we assessed was too small to generalize to all agency retrospective reviews, but this assessment illustrated some of the strengths and limitations that exist in the agencies we reviewed. To address the third objective, we interviewed and obtained documentation from agency officials and collected responses on the usefulness of various types of retrospective reviews using the structured data collection instrument identified in objective one. 
To obtain the perspectives of nonfederal parties on the usefulness of agency reviews, we identified and interviewed 11 parties that represent a variety of sectors (academia, business, public advocacy, and state government) and points of view. The parties were selected based on their contributions to prior GAO work on regulatory issues and our assessment of their recent publications on regulatory issues. The opinions expressed by agency officials and these nonfederal parties may be subjective and may not capture the views of all regulatory agencies, experts, and stakeholders on the usefulness of reviews. However, we believe that our selection represents a reasonable range of knowledgeable perspectives on retrospective reviews. We supplemented our data collection on the outcomes of agency reviews by reviewing the Federal Register, Unified Agenda, and published dockets and reports. For mandatory and discretionary reviews, we identified the reported results of reviews, including whether the review prompted any change to existing regulations. We also synthesized and described the usefulness of different types of reviews, as determined by agency officials and nonfederal parties knowledgeable about regulatory issues. To address the fourth objective, we interviewed and obtained documentation from agency officials, collected responses to a structured data collection instrument, and reviewed existing research on agency regulatory review initiatives. Further, we solicited the perspectives of the selected oversight and nonfederal parties on the facilitating factors and barriers to the usefulness of agency reviews. Based on our analysis of agency responses and documentation, we described the lessons learned from the different agencies and the views of oversight and nonfederal parties on facilitating and impeding practices. To supplement the lessons identified and to identify the most prevalent and/or critical facilitators or barriers for the conduct and usefulness of reviews, as well as options to overcome any barriers identified, we hosted a joint agency exit conference. During this joint exit conference, we discussed the collective responses of agencies and nonfederal parties, and similarities and differences in experiences and views. There was general consensus among the federal agencies on the points discussed during this exit conference, and we report on areas where agency and nonfederal parties' views differed. We conducted our work from May 2006 through April 2007 in accordance with generally accepted government auditing standards. The three Department of Agriculture (USDA) agencies examined in this study actively reviewed their existing regulations under both mandatory and discretionary authorities. The Animal and Plant Health Inspection Service (APHIS), the Agricultural Marketing Service (AMS), and the Food Safety and Inspection Service (FSIS) conducted reviews to reduce burden on small entities under Section 610. USDA conducted discretionary reviews to respond to industry petitions or informal feedback, to meet recommendations from committees, to address new risks in regulated environments, or to update rules due to advances in technology or scientific knowledge. The agencies use both centralized and decentralized review processes that rely on the input of outside parties to inform their reviews.
The three USDA agencies examined in this study actively reviewed their existing regulations under both mandatory and discretionary authorities, with reviews conducted at their own discretion more common than mandated reviews. For example, during the 2001 through 2006 period covered in our review, APHIS reported conducting 18 Section 610 reviews and completing rulemakings for 9, with 8 others currently in progress. APHIS also reported that since 2001, it has completed a total of 139 regulatory reviews, which resulted in 139 rule modifications across 12 broad content areas. AMS officials reported initiating 19 and completing 11 Section 610 reviews since 2001. However, AMS also reported that it has issued approximately 300 modifications to 30 regulations based on interaction with Industry Committees between fiscal years 2002 and 2006. AMS also reported that, since 2001, the agency has conducted 18 independent assessments of its commodity promotion programs, as required of AMS under 7 U.S.C. § 7401. FSIS reported initiating 1 Section 610 review since 2001; however, during the same time period, the agency has conducted 36 reviews of its rules as a result of industry petitions. The agencies' officials reported that discretionary reviews more often resulted in regulatory changes. Our analysis of the December 2006 Unified Agenda confirmed that most modifications to the department's regulations were made at USDA's own discretion rather than in response to mandates. Of the 132 rule changes listed in the Unified Agenda, 113 resulted from decisions made at agency discretion, while 19 were the result of mandated actions. The processes employed for review varied by agency, with AMS program offices conducting reviews of their own regulations, APHIS program offices conducting reviews in concert with centralized offices within the agency, and centralized offices within the agency conducting FSIS reviews. However, all three agencies relied on the input of regulated communities to inform their processes. As an example of a centralized approach, APHIS' technical and policy program staff work with the agency's Policy and Program Development (PPD) unit to conduct reviews, and PPD works with the Deputy Administrators for each regulatory program to set regulatory priorities. AMS reviews, on the other hand, are conducted in-house by the program staff that oversee the regulation. All three agencies reported that they rely on outside parties to inform their review process. For example, AMS reported that the agency conducts periodic referenda of regulated growers of fruit and vegetables to amend agency marketing orders and to identify programs for discontinuance. APHIS reported that its review decisions are influenced by ongoing discussions with industry, state and tribal authorities, and foreign governments regarding the setting of international trade standards. APHIS also reported that it has acted on recommendations made by outside reviews of its programs conducted by the National Plant Board and the National Association of State Departments of Agriculture. FSIS reported that it holds industry listening sessions and public meetings to inform its rulemaking and affect the day-to-day implementation of regulations. Figure 5 depicts USDA's general process for regulatory review.
While the Department of Justice (DOJ) is not primarily a regulatory agency, during the 2001 through 2006 period covered in our review, DOJ component agencies have conducted reviews of their existing regulations under both mandatory review requirements and under their own discretionary authorities. Most DOJ reviews were discretionary and in response to such drivers as changes in technology or feedback from regulated entities, among other factors. The three mandatory reviews conducted by DOJ since 2001 were driven by separate statutory requirements to review regulations or set enforceable standards for others to follow. While DOJ has few formal processes or standards to guide the planning, conduct, and reporting of its internally conducted discretionary reviews, the department followed statutory standards in the one Section 610 review that it conducted and that GAO evaluated. DOJ is not primarily a regulatory agency, and officials reported that most of its primary activities, including antiterrorism, investigation, and law enforcement, do not involve the department's regulatory process. Officials reported that few of the department's regulations are subject to Section 610 review, and one official reported that regulatory review, as a whole, is not a major priority within the agency, compared to its other functions. However, since 2001 DOJ agencies reported completing at least 13 reviews of existing regulations. Based on published documents in the Federal Register or Unified Agenda, 10 of these reviews were conducted under DOJ's own discretion, while 3 reviews were in response to mandatory review requirements or to comply with statutory requirements to revise regulations. The drivers for the discretionary reviews conducted by DOJ included responding to changes in technology or feedback from regulated entities, among other factors. For example, FBI officials reported that the Bureau has reviewed and is revising a rule preventing the FBI from retaining or exchanging fingerprints and criminal history record information associated with nonserious offenses in the FBI's Fingerprint Identification Records System. According to the proposed rule change resulting from this review, the existing regulations were originally implemented in 1974 and based on the data-processing capabilities of a manual record-keeping environment. Officials reported that advances in information technology precipitated a review of these regulations, which, once revised, will enhance the FBI's search capability for fingerprint and criminal history background checks. DOJ also cited feedback from regulated entities as an important driver of discretionary reviews. DEA, for example, reported that the controlled substance manufacturer and distributor industries requested that DEA provide an electronic method to satisfy the legal requirements for ordering Schedule I and II controlled substances, which previously could only be ordered through a triplicate form issued by DEA. According to officials, DEA reviewed its regulations and worked with industry to develop a pilot program to update its system. After notice-and-comment rulemaking, DEA published a Final Rule revising its regulations on April 1, 2005. In addition to these reviews, ATF conducted five discretionary reviews since 2001, including a reorganization of Title 27 in the transition of ATF functions from the Department of the Treasury to DOJ after the creation of the Department of Homeland Security.
Additionally, OJP conducted two discretionary reviews since 2001, and the BOP reported that it conducts annual, ongoing reviews of its Policy Statements, many of which correspond with its regulations in the CFR, to ensure that they are current. GAO was able to identify three mandatory regulatory reviews completed by DOJ since 2001, and the impetuses for these reviews varied. For example, ATF in 1997 initiated a Section 610 review evaluating the impact of changes to its fireworks storage and record-keeping requirements on small entities. This review, concluded in a January 29, 2003, Federal Register notice, certified that the revised rule would have a minimal economic impact on the explosives industry and would no longer have a significant economic impact on a substantial number of small entities. The review also identified other areas of concern to the public, precipitating further actions. CRT conducted a review pursuant to Executive Order 12250, which requires the Attorney General to establish and implement a schedule for the review of executive branch agencies' regulations implementing various federal nondiscrimination laws, including the Civil Rights Act of 1964, among others. According to officials, this "Cureton Review Project" included an evaluation of the regulations of 23 agencies, including DOJ, which resulted in clarified statutory language to promote consistent compliance with the various nondiscrimination statutes. In a third review, CRT published an Advance Notice of Proposed Rulemaking (ANPRM) to update regulations implementing Title II and Title III of the Americans with Disabilities Act of 1990 (ADA), including the ADA Standards for Accessible Design. According to the ANPRM, the ADA requires DOJ to adopt accessibility standards that are "consistent with the minimum guidelines and requirements issued by the Architectural and Transportation Barriers Compliance Board," which were revised in July 2004. DOJ has also reported that it may conduct a Regulatory Impact Analysis on the revised ADA standards, including a benefit-cost analysis pursuant to Executive Order 12866, OMB Circular A-4, and the Regulatory Flexibility Act. Department officials stated that much of DOJ's regulatory review process was "informally" structured, without formal procedures and standards. Professional judgment, officials stated, was used in some cases in lieu of documented practices. However, a GAO evaluation of the recent ATF Explosive Materials in the Fireworks Industry review indicates that DOJ followed the statutorily defined process for its completion. As required by Section 610, the review must describe (a) the continued need for the rule; (b) the nature of complaints or comments received concerning the rule from the public; (c) the complexity of the rule; (d) the extent to which the rule overlaps, duplicates, or conflicts with other federal rules and, to the extent feasible, with state and local governmental rules; and (e) the length of time since the rule has been evaluated or the degree to which technology, economic conditions, or other factors have changed in the area affected by the rule. GAO's evaluation of this proceeding concluded that ATF addressed the requirements for responding to public comments, complaints, and the rule's complexity. ATF's analysis was primarily in response to public comments and a review of its own experience implementing the rule. In a few cases, ATF responded to comments by referencing published experts' opinions and scientific tests.
However, ATF provided no overall analysis of the cost of these storage regulations, of their effectiveness in promoting public safety, or of law enforcement's ability to trace fireworks to their manufacturer—a specific desired outcome referred to in the notice. Figure 6 depicts the general process for regulatory review in one ATF Section 610 review. During the 2001 through 2006 period covered in our review, agencies within the Department of Labor (DOL) have actively reviewed their existing regulations in response to both mandatory and discretionary drivers. Specifically, the Employee Benefits Security Administration (EBSA), Occupational Safety and Health Administration (OSHA), Mine Safety and Health Administration (MSHA), and Employment and Training Administration (ETA) have conducted various retrospective reviews of their regulations. The types of reviews—in terms of impetus and purpose—the outcomes of the reviews, and the processes used to conduct them varied among the agencies. Specifically, while EBSA has established a formal and documented regulatory review program, OSHA, MSHA, and ETA have somewhat less formal review programs, although MSHA and ETA were in the process of developing more standardized processes. Furthermore, while all of the agencies reported that their discretionary reviews more often resulted in subsequent regulatory action, the outcomes of mandatory reviews varied slightly among the agencies. All of the DOL agencies within our review reported actively conducting reviews of their regulations. However, the types of reviews—in terms of impetus and purpose—and outcomes of reviews varied slightly among the agencies. All of the DOL agencies reported that they conducted ongoing reviews of their regulations at their own discretion. However, two of the agencies—OSHA and EBSA—also incorporated requirements from mandatory reviews into these discretionary reviews. Furthermore, EBSA conducts its discretionary reviews more formally as part of its Regulatory Review Program. According to documentation that we reviewed on this program, EBSA formally conducted reviews of its existing regulations in response to specific developments and/or changes in the administration of group health, pension, or other employee benefit programs, changes in technology and industries, and legislation. EBSA also reviewed regulations in response to identified enforcement problems or the need to further the agency's compliance assistance efforts through improved guidance. Furthermore, the program incorporates Section 610 reviews. While OSHA did not have a program that was as formalized and documented as EBSA's, the officials reported and our review of their analyses confirmed that the agency also incorporated Section 610 criteria into broader review initiatives that the agency undertook to address informal feedback from industry, stakeholders, and staff. MSHA and ETA also reported initiating reviews in response to stakeholder input, technology or policy updates, petitions, or internal identification of needed rule changes. However, the agencies' officials reported that they have not conducted any Section 610 reviews (which focus on burden reduction) during the period covered in our review because they have not had any regulations within the last 10 years that had a SEISNOSE effect. Outcomes of reviews varied slightly among the agencies.
While it was not possible to account for all of the reviews conducted by all of the agencies because the agencies did not document some informal reviews, collectively the agencies reported completing at least 60 reviews since 2001. According to EBSA documentation, the agency completed at least 7 of its 13 formal retrospective reviews, including 4 Section 610 reviews. All of the discretionary reviews resulted in subsequent regulatory changes, including changes to the regulation, guidance, or related materials. None of EBSA's Section 610 reviews resulted in regulatory changes. OSHA completed 4 reviews in response to both discretionary drivers and Section 610 requirements, which resulted in regulatory changes or changes to guidance documents or related materials. According to OSHA documentation, 2 of its completed Section 610 reviews and 2 of its Standards Improvement Project Reviews recommended regulatory changes, including clarifications to standards or additional outreach or compliance assistance materials. MSHA officials reported engaging in a 2004 MSHA Strategic Initiative Review (a review of all Title 30 CFR regulations) and a review conducted according to an MSHA initiative to improve or eliminate regulations that were frequently the subject of petitions for modification. Both of these reviews resulted in changes to regulations. ETA officials reported that, in 2002, the agency conducted a regulatory cleanup initiative that resulted in updates to individual regulations and that ETA has updated individual regulations when the agency's program offices identified a need to do so in the course of their business. The agencies also reported making regulatory changes based upon departmentwide regulatory cleanup initiatives in 2002 and 2005/2006, which the department's Office of the Assistant Secretary for Policy spearheaded. Additionally, the department completed 42 reviews in response to Office of Management and Budget (OMB) regulatory reform nominations from 2001 to 2004, which resulted in subsequent regulatory action. DOL agencies' review processes ranged from documented, formal processes with established review structures and procedures to informal, undocumented processes with structures and procedures that were still developing. For example, EBSA established a formal review program with a defined structure for reviews, including identification of what resources (staff) would be involved, the criteria that the agency would use to select and assess regulations, and the method for reporting results. While OSHA did not have a documented formal review program, the agency described a somewhat formal structure that it uses to conduct its reviews. Similarly, ETA officials reported that they had recently established a more formal structure for their review process, including the creation of a Regulations Unit that will coordinate the development of regulations for ETA's legislative responsibilities and formalize regulatory procedures within the agency. According to the officials, the Regulations Unit will establish time frames and/or internal triggers for reviews to ensure the agency properly reviews and updates regulations. However, they noted that, given the recent establishment of this unit, it might take some time to implement these procedures. MSHA did not appear to have a documented formal review process or structure for its discretionary and mandatory reviews.
However, the agency reported that it had been soliciting contractors to develop a more formal process for prioritizing which regulations the agency would review. Figures 7 and 8 illustrate the variation in the agencies' review processes. To facilitate the sharing of practices, appendix XI provides a more detailed description of EBSA's review process, which was the most formalized and documented review process that we examined within the scope of our review. Between 2001 and 2006, Department of Transportation (DOT) agencies within the scope of our evaluation actively reviewed their existing regulations under both mandatory and discretionary authorities. The mandatory reviews conducted by DOT agencies addressed governmentwide, departmentwide, and agency-specific review requirements. DOT conducted discretionary reviews in response to formal petitions and informal feedback from the public and in response to accidents or similar events and changes in specific industries, technologies, or underlying standards. Additionally, DOT conducted reviews in response to Office of Management and Budget (OMB) regulatory reform initiatives as well as a stand-alone initiative to review all rules under the department's authority. DOT has written policies and procedures guiding the planning, conduct, and reporting of reviews. While review processes may vary somewhat within DOT agencies, overall these agencies follow DOT guidelines in the conduct of their reviews. DOT has conducted a number of initiatives to systematically review existing regulations to comply with federal mandates and DOT's own policies and procedures for regulatory review. In order to satisfy Section 610 and other review requirements, DOT initiated a 10-year plan in 1998 to systematically review some of its Code of Federal Regulations sections every year, with the objective of reviewing all of its regulations over a 10-year cycle. DOT also maintains a departmentwide review requirement, instituted in 1979, to periodically review existing regulations to determine whether they continue to meet the needs for which they originally were designed or whether reviewed rules should be revised or revoked. More recently, in 2005, acting under its own discretion, DOT initiated and completed a special stand-alone regulatory review in which the department sought public comment on all rules and regulations under DOT's authority. DOT also reviewed regulations in response to OMB initiatives in 2001, 2002, and 2004, which solicited nominations from the general public for federal regulations and guidance documents for reform. The department completed 61 reviews in response to these reform initiatives and took subsequent action on 43 of the regulations it reviewed. Overall, during the 2001 through 2006 period covered in our review, DOT has reported conducting over 400 reviews of existing regulations to meet governmentwide review requirements, including those under Executive Order 12866 on Regulatory Planning and Review, Section 610, and the Executive Memorandum of June 1, 1998, on Plain Language in Government Writing. In addition to reviews conducted under departmentwide requirements, various agencies within DOT have reviewed regulations within the specific statutes under their purview. For example, since 2001 FAA has reviewed three regulations pursuant to requirements in the Federal Aviation Reauthorization Act of 1996.
According to agency officials, these reviews included post-implementation cost-benefit assessments of three high-cost FAA rules. FMCSA reported that it also reviews any regulation impacted by the Motor Carrier Act of 1980; the Motor Carrier Safety Improvement Act; and the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU). Although the effort falls outside the time frame for this study, FTA announced in the December 2006 Unified Agenda that it will undertake a review of its regulations to bring them into conformity with the SAFETEA-LU statute. In addition to these more formal regulatory review efforts, DOT officials reported that the department also reviews its existing regulations at its own discretion as a function of its daily, ongoing activities. According to officials, such reviews are often the result of petitions from or consultations with parties affected by DOT regulations or based on the experience of agency staff members in light of changes in specific industries, technologies, or underlying standards. DOT officials said that, for some of their agencies, reviewing petitions for rulemaking or regulatory waivers is the most productive way to obtain public input on a review of a rule. An evaluation of NHTSA's entries in DOT's December 2005 Unified Agenda indicated 10 rule change proceedings in various stages of completion that were the result of petitions from regulatory stakeholders. NHTSA also reported that, since 2001, it has conducted 17 reviews of Federal Motor Vehicle Safety Standards (FMVSS), including a few studies evaluating the benefits and costs of various standards. PHMSA reported that the granting of numerous waivers of a regulation is a particular signal that new technology or conditions may render that regulation obsolete or in need of amendment. DOT has written policies and procedures guiding the planning, conduct, and reporting of reviews. While the processes employed by DOT agencies may vary somewhat, overall these agencies follow DOT guidelines in the conduct of their reviews. For example, DOT's Policies and Procedures provide guidance for prioritizing regulations for review, including the extent of complaints or suggestions received from the public; the degree to which technology or economic factors have changed; and the length of time since the regulations were last reviewed, among other factors. DOT's procedures also provide agencies with discretion in applying the procedures. For example, NHTSA reported that it gives highest priority to the regulations with the highest costs, potential benefits, and public interest, while PHMSA reported that it gives highest priority to initiatives it deems most likely to reduce risk and improve safety. Additionally, while DOT officials reported that DOT considers OMB Circular A-4 on "Regulatory Analysis" a guide for cost-benefit analysis of regulatory outcomes, FAA reported that it uses a set of flexible procedures recommended by an outside consultant to conduct ex post evaluations of some rules. With regard to public participation in the review process, Appendix D to the department's Unified Agenda announces the complete schedule for all reviews, requests public comments for reviews in progress, and reports the results of completed reviews. DOT agencies also pointed out that they regularly interact with stakeholders, such as regulated industries, consumers, and other interested parties, to obtain feedback on regulations.
For example, FAA officials stated that the agency holds conferences with industry and consumer groups to identify regulatory issues for review. In terms of the reporting of review results, DOT publishes brief summaries of completed reviews in Appendix D of its Unified Agenda. However, agencies may report review results in other ways. For example, FMCSA documents the results of its Section 610 reviews in an annual internal report, while NHTSA publishes the technical reports of its reviews in the Federal Register, requesting public comments on its determinations. Figure 9 depicts DOT’s general process for regulatory review. Since 2001, the Consumer Product Safety Commission (CPSC or the Commission) systematically reviewed its regulations under its own discretion, but has not conducted any mandatory reviews because none of its rules triggered Section 610 or other mandatory review requirements. Moreover, agency officials noted that because of its reliance on voluntary consensus standards, the agency does not promulgate as many rules as other regulatory agencies. However, the primary purpose of CPSC discretionary reviews is to assess whether the regulations that CPSC promulgates remain consistent with the objectives of the Commission. In performing its reviews, CPSC has created systematic processes for the planning, conduct, and reporting of its reviews. Through this process, the Commission prospectively budgets for its reviews. Because CPSC’s review program is so new, the agency has not completed most of the reviews that it has initiated, but the Commission has proposed changes to at least two existing regulations. In addition, the officials reported that their review program has been useful to the Commission. CPSC actively conducted reviews of its existing regulations under its own discretion. Specifically, the Commission implemented a pilot review program in 2004, with annual follow-up efforts in 2005 and 2006, which resulted in the initiation of 14 retrospective reviews. CPSC initiated this review process partly because of an Office of Management and Budget (OMB) Program Assessment Rating Tool (PART) recommendation that the agency develop a plan to systematically review its current regulations to ensure consistency among them in accomplishing program goals. The primary purpose of CPSC reviews is to assess the degree to which the regulations under review remain consistent with the Commission’s program policies and program goals. CPSC also assesses whether it can streamline regulations to minimize regulatory burdens, especially on small entities. The officials reported that their review process is so new that they have not yet fully completed it for all of the reviews that they have initiated. However, they have completed at least 3 of their 14 initiated reviews. Officials reported that, while some of the regulations they reviewed did not need a revision, they have proposed regulatory changes for two regulations, including standards for flammability of clothing textiles and surface flammability of carpets and rugs. They reported that their reviews could focus on opportunities to either expand or streamline existing regulations. Thus, their reviews could lead to increases or decreases in the scope of CPSC regulations. As examples, CPSC officials reported that during their review of their existing bicycle regulation they identified that the regulation did not reflect new technology and materials, and therefore needed to be modified and updated. 
Conversely, their review of their cigarette lighter rule revealed that the agency needed to promote greater compliance and more effective enforcement, which increased the agency's regulatory oversight. Table 8 provides additional detail on the CPSC retrospective reviews. CPSC established a formal review program that prospectively budgets for the substantive reviews that the agency will conduct. Officials reported that they have conducted about four substantive reviews per year using this process, while still managing other agency priorities. The process consists of three phases: (1) prioritization and selection of regulations to substantively review, (2) substantive review of the selected regulations, and (3) reporting results to the Commissioners and, for certain reviews, the public. As part of this process, CPSC staff prioritize which regulations need review by considering: (1) which rules have the oldest effective dates, (2) which rules were adopted under various statutes under CPSC's authority, and (3) which rules staff identified as good candidates for change (from their experience working with the regulation). As resources allow, the agency selects one substantive regulation from each of its statutes' areas (with the exception of the Refrigerator Safety Act), starting with its earliest regulations. As part of this prioritization process, the agency considers input from CPSC's technical staff and outside groups. CPSC staff initiate substantive reviews of regulations that the Commission chooses for review. In this process, the agency solicits public comments using the Federal Register, assesses the comments received, conducts an internal technical review of the regulation, and reports the results to the Commissioners. The Commissioners make a policy decision on actions the agency will take based upon staff recommendations. If the agency decides to conduct a follow-on activity to update a rule, it subsequently notifies the public via the Federal Register. For rule reviews that result in Commission-approved projects for certain rulemaking activities (such as developing revisions to a rule for Commission consideration), CPSC makes the briefing packages available on its Web site. Other rule reviews (such as reviews for which staff suggests no action) are given to the Commissioners but are not posted on the Web site. Figure 10 illustrates the general review process. During the 2001 through 2006 period covered in our review, program offices within the Environmental Protection Agency (EPA) have conducted numerous retrospective reviews of EPA's existing regulations and standards. The mix of reviews conducted by EPA, in terms of authorities, varied across the agency, but the purposes of these reviews (effectiveness, efficiency, and burden reduction) were similar across the agency. EPA's retrospective reviews produced three distinctive outcomes. While the agency conducts many reviews under its mandates, reviews conducted at its own discretion yielded more changes to existing regulations than mandated reviews. The review processes within EPA's program offices, though different, typically shared similar elements in the planning, conduct, and reporting of results. Overall, EPA reported that its retrospective reviews have proven to be useful to the agency. 
The Office of Air and Radiation (OAR), the Office of Prevention, Pesticides, and Toxic Substances (OPPTS), the Office of Solid Waste and Emergency Response (OSWER), and the Office of Water within EPA each conduct mandatory retrospective reviews under their guiding statutes and focus the reviews on what is stated in statute or developed by the agency. Thus, the frequency of mandated reviews varies within EPA as well as across the program offices. For instance, the frequency of reviews required by the Safe Drinking Water Act (SDWA), conducted by the Office of Water, ranges from every 3 years to every 7 years, depending on the review requirement, while OAR is required by the Clean Air Act to conduct reviews ranging from every 3 years to every 8 years. Mandated reviews, such as those required by agency-specific statutes, mainly focused on effectiveness, while Section 610 reviews and Office of Management and Budget (OMB) Regulatory Reform Nominations were focused on burden reduction. According to EPA officials, mandatory retrospective reviews have generally resulted in limited or no changes to regulations, while reviews conducted under discretionary authority usually resulted in more changes. For instance, of the 14 Section 610 reviews conducted by the program offices since 2001, only 1 resulted in a change. Moreover, OAR noted that most of its reviews validated the need for the regulation or standard. However, EPA's review of regulations in response to OMB's Manufacturing Regulatory Reform initiative resulted in 19 regulatory changes and 19 nonregulatory actions, including the development of regulatory guidance and reports. In addition, GAO's review of EPA's December 2006 Unified Agenda entries also revealed that 63 out of 64 rules identified as changed or proposed for change were the result of decisions made under EPA's discretionary authority. Though the use of discretionary authority produced more rule changes, officials reported that retrospective reviews, in general, were valuable in (1) determining whether new information exists that indicates the need for revisions and (2) enabling the agency to gain new insights about its analytical methods. In addition, officials noted that retrospective reviews were useful in determining whether the rule was working as intended and helping to achieve the agency's or statute's goals. EPA's review process varied by program office and by review requirement; however, most mandatory and discretionary reviews contained consistent elements. The four EPA program offices included in our review perform various functions of the agency that rarely overlap with other program offices' duties. For example, OAR exclusively oversees the air and radiation protection activities of the agency, while the Office of Water solely manages the agency's water quality activities. These two offices have different guiding statutes that require them to conduct reviews and, within those statutes, processes are sometimes outlined for how the agency should conduct the reviews. Therefore, the processes for these program offices varied. However, three elements were similar across the offices: these included formal or informal notification of the public; involvement of the public in the conduct of the review, mainly through the request of public comments, science, risk, or policy assessments of the regulation; and release of the results to the public, primarily through the Federal Register and the EPA Web site. 
In addition, mandatory and discretionary regulatory reviews that were high profile in nature (e.g., because they were conducted in response to emergencies, were contentious, or received heavy attention from the public, Congress, or regulatory experts) had the aforementioned elements as well as high-level management attention from the Assistant Administrator of the program office or the EPA Administrator. For example, the review processes for the mandatory National Ambient Air Quality Standards reviews and the Lead and Copper review, which was initiated after elevated levels of lead were found in the District of Columbia, were defined, documented, and included extensive public involvement and high-level management attention. Figure 11 illustrates the general review process for the different review drivers. The Federal Communications Commission (FCC or the Commission) actively reviews its existing regulations to meet congressionally mandated review requirements and, under its own discretionary authority, to respond to petitions from regulated entities and to changes in technology and market conditions. While FCC's retrospective review processes vary depending on the review requirement the agency is addressing, FCC's biennial and quadrennial review processes provide opportunities for public participation and transparency. According to FCC officials, the frequency of the biennial review requirement presents staffing challenges to the agency, while the 10-year requirement for the Section 610 review presents a challenge to the usefulness of this review, as regulations may have been previously modified under other requirements prior to the review. FCC actively reviews its existing regulations to meet congressionally mandated review requirements and to respond to petitions from regulated entities and changes in technology and market conditions under its own discretionary authority. Under the Communications Act, as amended, the Commission is subject to two agency-specific mandated reviews: (1) the biennial regulatory review of FCC telecommunications rules, and (2) the quadrennial regulatory review of the broadcast and media ownership rules. FCC officials reported that these reviews are guided by the deregulatory tenor of the Telecommunications Act, which instructed the Commission to promote competition and reduce regulation in the telecommunications and broadcast industries. The purpose of these reviews is to identify rules no longer necessary in the public interest so that they may be modified or repealed. In the 2002 biennial review, the Commission conducted and reported 89 separate review analyses of its telecommunications regulations and made more than 35 recommendations to open proceedings to consider modifying or eliminating rules. The Commission is also subject to the governmentwide review requirement to minimize significant economic impact on small entities under Section 610. FCC has initiated 3 multiyear Section 610 review projects (1999, 2002, 2005), plus 1 single-year review (2006), issuing public notices listing all rules subject to Section 610 review. Officials pointed out that these reviews rarely result in rulemaking proceedings and cited only one proceeding that resulted in the elimination of obsolete rules as a result of the Section 610 process. In addition to these mandatory requirements, FCC officials reported that the Commission reviews existing regulations at its own discretion in response to rapid changes in technology and market conditions and to petitions from regulated entities. 
A GAO analysis of the December 2006 Unified Agenda indicated that most of FCC's proposed and final rule changes for that year were the result of decisions made under FCC's discretionary authority. Of the 39 rule changes listed in the Unified Agenda, 33 were the result of decisions made at the Commission's own discretion, while 6 of those changes were the result of mandated actions. This informal analysis indicates that, in addition to its mandatory review requirements, FCC does make efforts to review and amend regulations under its own discretion. The rule changes from the 2002 quadrennial review never went into effect. In 2004, the U.S. Court of Appeals remanded back to FCC for further review its rules for cross-media ownership, local television multiple ownership, and local radio multiple ownership (Prometheus Radio Project v. FCC, 373 F.3d 372 (3d Cir. 2004)). Additionally, Congress overturned FCC's national television ownership rule, which would have allowed a broadcast network to own and operate local broadcast stations reaching 45 percent of U.S. television households. Through the Consolidated Appropriations Act of 2004, Congress set the national television ownership limit at 39 percent (Pub. L. No. 108-199, 118 Stat. 3, 100 (Jan. 23, 2004)). FCC's biennial and quadrennial review processes provide opportunities for public participation and transparency. For example, in the 2002 biennial review, FCC Bureaus and Offices issued public notices listing rules for review under their purview and requesting comments regarding the continued necessity of rule parts under review. The Bureaus and Offices published Staff Reports on the FCC Web site summarizing public comments and making determinations as to whether the Commission should open proceedings to modify or eliminate any of the reviewed rules. The Commission released Notices of Proposed Rulemaking, seeking further public comments. Officials reported that if the Commission modifies or eliminates any regulations as a result of its proceeding, that decision is announced in a rulemaking order, which is published in the Federal Register. Similarly, in the 2006 quadrennial review (which was in process at the time this report was written), the Commission released a Further Notice of Proposed Rulemaking (FNPR) and posted a Web page providing background information and hyperlinks to FCC documents relevant to the review. The FNPR requests public comment on the media ownership rules and factual data about their impact on competition, localism, and diversity. The Commission reported that it will hold six public hearings in locations around the country and make available for public comment 10 economic studies commissioned by FCC on issues related to the media ownership rules. Despite the opportunities for public participation in these regulatory reviews, the mandated structure of some review processes presents a challenge to the usefulness of FCC reviews. For example, according to an FCC official, the requirement to review the Commission's telecommunications rules every 2 years forces Bureau staff to be constantly reviewing regulations. This official reported that the quadrennial requirement is a more appropriate time period for review, as it provides greater opportunity for regulatory changes to take hold. Additionally, an official reported that too much time between reviews can be problematic. For example, rules that require Section 610 review every 10 years may have been modified or previously reviewed as part of an overlapping review requirement or as part of a discretionary review occurring prior to the 10-year review requirement. 
During the 2001 through 2006 period covered in our review, the Federal Deposit Insurance Corporation (FDIC) has performed numerous retrospective reviews of its existing regulations in response to mandatory authorities such as Section 610 of the Regulatory Flexibility Act and the Economic Growth and Regulatory Paperwork Reduction Act of 1996 (EGRPRA) and at its own discretion. The focus of FDIC's reviews has been on burden reduction, which is part of the agency's strategic goals. The process that FDIC used to plan, conduct, and report its reviews was coordinated by a larger organizational body. The centralized review effort helped to leverage the agencies' resources and facilitate the regulatory changes recommended as a result of the EGRPRA reviews. FDIC, along with members of the Federal Financial Institutions Examination Council (FFIEC), has examined 131 regulations under EGRPRA. FDIC conducted two Section 610 reviews after 2001, but before the initiation of the EGRPRA reviews in 2003. Because the EGRPRA review affected almost all of FDIC's regulations, the agency subsequently included Section 610 reviews within the EGRPRA review effort. Also, the agency has conducted discretionary reviews in response to petitions and external emergencies, such as natural disasters. For instance, the agency reported reviewing its regulations to reduce burden for businesses affected by Hurricane Katrina. In doing so, the agency made 8 temporary regulatory changes to ease the burden on affected entities. FDIC also made changes to 4 regulatory areas, which included changes to 3 regulations, as a result of the EGRPRA reviews. Additionally, GAO's review of the December 2006 Unified Agenda indicated that FDIC made changes to 5 regulations as a result of decisions under its own discretion and 4 changes as a result of mandates. FDIC and the other banking agencies also worked with congressional staff regarding legislative action as a result of the EGRPRA reviews. For example, the agencies reviewed over 180 legislative initiatives for burden relief in 2005. Furthermore, the agencies testified before the Senate Banking Committee and House Financial Services Committee on a variety of burden reduction measures and, upon request, agency representatives offered technical assistance in connection with the development of legislation to reduce burden. Congress later passed and the President signed the Financial Services Regulatory Relief Act of 2006 on October 13, 2006. FDIC and other financial regulatory agencies that are members of the FFIEC decided to use the FFIEC as the coordinating body for the EGRPRA review process because the act affected all of the agencies and the agencies wanted to: (1) establish a centralized process for selecting, conducting, and reporting their reviews; and (2) leverage the expertise and resources of all of the member agencies. EGRPRA required the agencies to categorize their rules, solicit public comment, and publish the comments in the Federal Register. The act also required the agencies to report to Congress no later than 30 days after publishing the final summarized comments in the Federal Register. The FFIEC established additional processes for planning, conducting, and reporting of retrospective reviews conducted under EGRPRA outside of these specified requirements, such as providing 90-day public comment periods, holding outreach meetings with regulated entities as well as consumer groups across the United States, and establishing a Web site dedicated to the EGRPRA reviews. 
Within all of the processes developed by the FFIEC, a high level of management attention was maintained. For instance, the Director of the Office of Thrift Supervision, who is also a member of FDIC's Board of Directors, headed the interagency effort. In this capacity, a political appointee was involved in planning, conducting, and reporting the reviews. As illustrated by figure 13, the process involved interagency coordination and review activities within each individual agency, including FDIC. During the 2001 through 2006 period covered in our review, the Small Business Administration (SBA) has reviewed its existing regulations in accordance with Section 610 of the Regulatory Flexibility Act and at its own discretion. While the purpose of the Section 610 reviews was to reduce burden, the purpose of its discretionary reviews was to increase effectiveness. SBA had written procedures to plan, conduct, and report its Section 610 reviews. However, the agency did not have written processes to guide planning, conduct, and reporting of discretionary reviews. Overall, SBA's discretionary reviews have resulted in regulatory changes more often than reviews mandated by statute. Officials reported that SBA has conducted discretionary reviews based on congressional interest or industry petitions. Specifically, officials from the HUBZone program indicated that their office receives attention from Congress about the workings of its regulations, prompting the office to review its existing regulations. In addition, SBA's Division of Size Standards completed 27 reviews in response to industry petitions and congressional requests. SBA also completed 4 Section 610 reviews in 2005. While the purpose of the Section 610 reviews was to reduce burden, officials from one division in SBA said that they focused many of their retrospective reviews on examining the effectiveness of their regulations by evaluating their progress on outcomes. However, they stated that because some of their regulations are linked to the regulatory activity of other agencies, they are not always able to achieve the intended outcome of the regulation. Of the reviews conducted by SBA, discretionary reviews yielded more changes to existing regulations than mandated reviews. For instance, the 4 completed Section 610 reviews resulted in no changes, but there were 23 final or proposed changes to regulations in response to industry petitions. In addition, GAO's examination of SBA's December 2006 Unified Agenda entries indicated that 22 rule changes were the result of the agency's discretionary authority rather than statutory mandates. SBA's Section 610 Plan in the May 2006 Unified Agenda described procedures for conducting Section 610 reviews. The plan specifies that SBA will consider the factors identified in Section 610. The plan also specifies that the conduct of the review will be performed by the program office of jurisdiction, which entails reviewing any comments received from the public, in consultation with the Office of General Counsel (OGC) and the Office of Advocacy. The document notes that the program office may contact associations that represent affected small entities in order to obtain information on impacts of the rules. Although Section 610 does not require agencies to report the results of the reviews, SBA reported its results in the Unified Agenda. 
Under SBA’s standard operating procedures each program office is responsible for evaluating the adequacy and sufficiency of existing regulations that fall within its assigned responsibilities. However, according to the officials, the agency does not have a uniform way to plan, conduct, and report these discretionary reviews. In general, the agency conducts reviews in an informal manner; therefore, documentation does not exist for the procedures or standards used to conduct these reviews. However, officials described considering these factors to prioritize their reviews: (1) the level of congressional interest in a specific review, (2) OGC’s input on which rules should be reviewed, and (3) the number of petitions and appeals SBA has received regarding a particular rule. Reviews are conducted differently in the various program offices within SBA. Moreover, the agency described a high turnover of employees, which makes it important to document SBA reviews and processes. Currently, it does not appear that the agency documents its review and processes. Employee Benefit Security Administration’s (EBSA) retrospective regulatory review process was the most documented and detailed formal review process included in our review. According to EBSA officials and our review of EBSA documentation on its Regulatory Review Program (the Program), the agency established its program as a continuing and systematic process that allows the agency to periodically reviews its regulations to determine whether they need to be modified or updated. The Program takes into account technology, industry, economic, compliance and other factors that may adversely affect a rule’s continued usefulness, viewed with respect to either costs or benefits. According to program documentation, through the integration of prescribed review criteria, regulatory reviews conducted under the Program would also help EBSA to satisfy the Section 610 requirement for periodic reviews of agency regulations. In addition, the Program provides information and data that assists EBSA in conducting regulatory reviews of EBSA regulations in accordance with the requirements of Executive Order 12866. EBSA’s regulatory review process is conducted annually by a Regulatory Review Committee (RRC) composed of the Counsel for Regulation, Office of the Solicitor’s Plan Benefits and Security Division (or his delegate), and the Directors of the following offices (or their respective delegates): Office of Regulations and Interpretations, Office of Policy and Research, Office of Enforcement, Office of Health Plan Standards and Compliance Assistance, and Office of Exemption Determinations. The Director of Regulations and Interpretations (or his delegate) chairs the RRC. EBSA’s review process consists of three formal phases: (1) selection of regulations for further review, (2) substantive review of the selected regulations, and (3) reporting review results to high-level management and the public. need further review in a written report to the Assistant Secretary, including an explanation of the reasons for its recommendations. 
The factors that the RRC considers when preliminarily reviewing the regulations are: whether the regulation is subject to review under the RFA; whether the regulation is subject to review under statutory or Executive Order requirements other than the RFA; absolute age of regulation (time elapsed since promulgation); time elapsed since regulation was last amended and nature of amendment (major/minor); court opinions adjudicating issues arising under regulation; number of EBSA investigations that have found violations of regulation; number of public requests received for interpretation of regulation; type(s) of plans affected by the regulation; number of plans affected; cumulative number of participants and beneficiaries affected by regulation; cumulative amount of plan assets affected by regulation; relative difficulty of compliance with regulation for the regulated entities (complexity, understandability); potential for cost burden as compared with intended benefits of regulation; extent to which development of new technology or industry practice since promulgation may reduce effectiveness of regulation; extent to which legal changes (statutory, regulatory, executive order) since promulgation of the regulation may affect its validity; significance of the regulation with respect to EBSA's goals; significance of the regulation with respect to enforcement, compliance assistance, and voluntary compliance efforts; and availability of information pertinent to evaluating the regulation. The Program also takes a step that, as GAO has noted in the past, many agencies have not yet taken. Specifically, the program sets threshold criteria for what constitutes "significant impact" and "substantial number of entities." GAO has reported on numerous occasions that the lack of clarity about these terms is a barrier to agency conduct of reviews and has resulted in fewer reviews being conducted. Therefore, this step in the review program appears to be very useful. Under EBSA's approach for measuring these thresholds, the rules to be reviewed each year are first subjected to quantitative analysis to determine whether they are considered to have a significant economic impact on a substantial number of small entities. For its initial Section 610 reviews, EBSA has adopted a uniform standard of $25 per plan participant to measure the impact of regulations reviewed under Section 610 and whether it constitutes a "significant economic impact." EBSA's definition of a small entity as an employee pension or welfare plan with fewer than 100 participants is grounded in sections 104(a)(2) and (3) of the Employee Retirement Income Security Act (ERISA), which permit the Secretary to prescribe simplified annual reports for pension and welfare plans with fewer than 100 participants. Additional details on these definitions and how they were derived can be found in the agency's Regulatory Review Program guidance. 
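To make these thresholds concrete, the short sketch below (in Python) applies the two measures described above (a small entity is a plan with fewer than 100 participants, and a cost of $25 or more per plan participant is treated as a significant economic impact) to a set of hypothetical plans. The function names, the example plan data, and the 50-percent share used here to represent a "substantial number" of entities are illustrative assumptions, not EBSA's actual methodology.

```python
# Illustrative sketch only (not EBSA's actual methodology): screens a rule
# against the two thresholds described in the text -- a "small entity" is a
# plan with fewer than 100 participants, and a cost of $25 or more per plan
# participant is treated as a "significant economic impact."
from dataclasses import dataclass

SMALL_ENTITY_PARTICIPANT_LIMIT = 100       # plans with fewer than 100 participants
SIGNIFICANT_IMPACT_PER_PARTICIPANT = 25.0  # dollars per plan participant

@dataclass
class AffectedPlan:
    participants: int
    cost_per_participant: float  # estimated annual compliance cost, in dollars

def is_small_entity(plan):
    return plan.participants < SMALL_ENTITY_PARTICIPANT_LIMIT

def needs_section_610_review(plans, substantial_share=0.5):
    """Flag a rule for review if a 'substantial number' of the small plans it
    affects face a significant per-participant cost. The 50-percent share is
    purely an assumption made for this illustration."""
    small = [p for p in plans if is_small_entity(p)]
    if not small:
        return False
    significantly_affected = [
        p for p in small
        if p.cost_per_participant >= SIGNIFICANT_IMPACT_PER_PARTICIPANT
    ]
    return len(significantly_affected) / len(small) >= substantial_share

# Example: three hypothetical plans; only the 40-participant plan is both
# small and significantly affected.
plans = [AffectedPlan(40, 30.0), AffectedPlan(80, 5.0), AffectedPlan(500, 60.0)]
print(needs_section_610_review(plans))  # True: 1 of 2 small plans meets the threshold
```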
For regulations selected for substantive review, the RRC considers factors such as: whether the regulation overlaps, duplicates, or conflicts with other federal statutes or rules or with nonpreempted state or local statutes or rules; whether the regulation is overly complex and could be simplified; whether the regulation may be based on outdated or superseded employment, industrial, or economic practices or assumptions and whether participants and/or beneficiaries of employee benefit plans may be exposed to harm as a result; whether the regulation may impose significant economic costs on regulated entities and whether the benefit(s) or purpose(s) of the regulation could be achieved as effectively through an alternative regulatory approach that would impose less economic burden on regulated industries; whether an alternative regulatory approach that does not increase the compliance burden for regulated industries could better serve the purpose(s) of the regulation or provide better protection(s) to participants and beneficiaries of employee benefit plans; and whether it would be in the public interest to initiate particular actions (e.g., contracting a research study, promulgating a Request for Information, conducting a public hearing) within the authority of EBSA to develop information or expertise pertinent to the regulation and relevant to consideration of the above issues. EBSA also may publish the results of a review in the Federal Register before it issues a notice of proposed rulemaking. (For an illustration of this process, see fig. 7 in app. IV.) Mathew J. Scire, Director, Strategic Issues, (202) 512-6806. Tim Bober, Assistant Director, and Latesha Love, Analyst-in-Charge, managed this assignment. Other staff who made key contributions to this assignment were Matt Barranca, Jason Dorn, Tim Guinane, Andrea Levine, Shawn Mongin, Bintou Njie, Joe Santiago, Stephanie Shipman, Michael Volpe, and Greg Wilmoth.
Congress and presidents require agencies to review existing regulations to determine whether they should be retained, amended, or rescinded, among other things. GAO was asked to report the following for agency reviews: (1) numbers and types completed from 2001 through 2006; (2) processes and standards that guided planning, conducting, and reporting; (3) outcomes; and (4) factors that helped or impeded conducting and using them. GAO evaluated the activities of nine agencies covering health, safety, environmental, financial, and economic regulations and accounting for almost 60 percent of all final regulations issued within the review period. GAO also reviewed available documentation, assessed a sample of completed reviews, and solicited perspectives on the conduct and usefulness of reviews from agency officials and knowledgeable nonfederal parties. From 2001 through 2006, the selected agencies completed over 1,300 reviews of existing regulations. The mix of reviews conducted, in terms of impetus (mandatory or discretionary) and purpose, varied among agencies. Mandatory requirements were sometimes the impetus for reviews, but agencies more often exercised their own discretionary authorities to review regulations. The main purpose of most reviews was to examine the effectiveness of the implementation of regulations, but agencies also conducted reviews to identify ways to reduce regulatory burdens and to validate the original estimates of benefits and costs. The processes and standards guiding reviews varied across agencies and the impetus and phase of the review process. They varied by the extent to which agencies applied a standards-based approach, incorporated public participation, and provided complete and transparent documentation. For example, while almost all agencies had standards for conducting mandatory reviews, only about half of the agencies had such standards for conducting discretionary reviews. The extent of public involvement varied across review phases, with relatively more in the selection process for discretionary reviews. Agencies more often documented all phases of mandatory reviews compared with discretionary reviews. The outcomes of reviews included amendments to regulations, changes to guidance and related documents, decisions to conduct additional studies, and confirmation that existing rules achieved the intended results. Mandated reviews, in particular, most often resulted in no changes. Agencies noted that discretionary reviews generated additional action more often than mandatory reviews. Agencies and nonfederal parties generally considered all of the various review outcomes useful. Multiple factors helped or impeded the conduct and usefulness of retrospective reviews. Agencies identified time and resources as the most critical barriers, but also cited factors such as data limitations and overlapping or duplicative review requirements. Nonfederal parties said that the lack of transparency was a barrier; they were rarely aware of the agencies' reviews. Both agencies and nonfederal parties identified limited public participation as a barrier. 
To help improve the conduct and usefulness of reviews, agencies and nonfederal parties suggested practices such as pre-planning to identify data needed to conduct effective reviews, a prioritization process to address time and resource barriers, high-level management support, grouping related regulations together when conducting reviews, and making greater use of diverse communication technologies and venues to promote public participation.
In 1992, the Energy Policy Act (P.L. 102-486) directed DOE to develop a voluntary reporting program to collect information on activities to reduce greenhouse gas emissions. The act required DOE to (1) develop and issue program guidelines, (2) develop forms for reporting emissions reduction activities, and (3) establish a publicly available database of this information. The program, by design, was to encourage voluntary participation and offer organizations reporting their emissions flexibility in what they reported and how they estimated their emissions reductions. Claims submitted to the program are reviewed by program managers for arithmetic accuracy and for the clarity of the information presented; however, there is no verification of supporting documentation or determination that the emissions reductions actually occurred. The program, however, requires that the persons reporting the information certify its accuracy. For the first two reporting periods (i.e., 1994 and 1995), the program received a total of 250 reports that provided information on 1,612 greenhouse gas emissions projects. For these periods, claims for reducing greenhouse gas emissions reported to the program totaled approximately 257 million tons of carbon dioxide equivalents. On October 22, 1997, President Clinton announced a three-phased Climate Change Proposal that challenged key U.S. industries to plan how they can best reduce greenhouse gas emissions. Among other initiatives was a proposal to reward organizations that would take early action to reduce their greenhouse gas emissions before any international agreements would take effect. This effort’s goal was to make any future required emissions reduction targets easier to achieve. In early December 1997, the United States and other nations met in Kyoto, Japan, and agreed to reduce their greenhouse gas emissions and set specific targets to achieve during an initial period for monitoring emissions reductions between 2008 and 2012. Specific targets varied among nations, and the United States agreed to reach a target of 7 percent below its 1990 level of emissions. A White House Task Force on Climate Change was established to address a broad array of issues relating to climate change, such as the task of working on a credit for emissions reductions through an early action program. In May 1998, preliminary information on the credit for early action indicated that the Task Force was considering several options for that program. As of October, the Task Force was continuing to receive input from industry and environmental groups on the issue. Efforts to develop a credit for early action program to reduce greenhouse gas emissions involve consideration of many issues before such a program could be implemented. We identified four issues, stated here as questions, that will have to be addressed in developing a credit for early action program. (1) How should emissions reductions be estimated? (2) How should emissions reduction ownership be determined? (3) Should the emissions reduction claims be reported at the organization, project, or some other level? and (4) How should emissions reduction claims be verified? While these issues appear straightforward, in fact, they are complicated and will require difficult choices. Various views and opinions have been offered on these issues by a variety of groups, including business, industry, public interest, and environmental groups involved in the issues of climate change and greenhouse gas reporting. 
Determining what qualifies as a creditable reduction of greenhouse gas emissions would likely be one of the first and primary questions in developing a credit for reductions through an early action program. Resolving this question would lay the foundation for the program and strongly influence how many other issues would be addressed. Estimating a creditable emissions reduction involves establishing a baseline, or point from which emissions reductions will be measured. Several approaches have been proposed, including a "historical baseline" of emissions for a given period, such as 1990, that is developed from an organization's historical data on emissions. As shown in figure 1, a company's current level of emissions may be above its 1990 historical baseline. Under a historical baseline approach, an organization takes actions to get its total emissions at or below the baseline, for example, in 1990. Once a company's emissions fall below its historical baseline (represented by the shaded area in fig. 1), the company would be eligible for credit. DOE's Energy Information Administration and such groups as the Edison Electric Institute have indicated that growing companies may have more difficulty reducing their emissions because their businesses and consequently their emissions are expanding. For example, a small manufacturing company that generated 5.5 million metric tons of carbon dioxide equivalents in 1990 and today generates 8.5 million metric tons might have experienced this increase because of business expansion. This company will be faced with the decision to either take steps to reduce its emissions or purchase emissions reduction credits from another company that was able to achieve reductions below its baseline. In contrast, companies in economic decline could more easily demonstrate reductions. The historical baseline was the approach selected for the Kyoto Protocol. While the Environmental Defense Fund has essentially supported the historical baseline concept, it has also noted that alternative methods would also be acceptable, if they produced greater precision or reliability. Both DOE's Energy Information Administration and the Center for Clean Air Policy have noted that, with the historical baseline approach, only reductions below that baseline would be recognized as creditworthy. Another proposal would use a "projected baseline" that reflects what an organization believes would be its emissions over a given period of time. As shown in figure 2, with a projected baseline, an organization would take actions to get below its projected emissions level and would try to continue reducing its emissions to meet specific targets over time. Under this approach, any reduction below the projected baseline would be considered creditable (represented by the shaded area in fig. 2). In the Voluntary Reporting Program, participants have flexibility to choose which baseline approach they want to use to measure their reductions. Because the program tries to encourage participation, organizations are also given latitude in developing their baselines. So far, most of the participants have used a projected baseline. Another approach to measure emissions reductions that has been proposed is a rate-based or performance-based system that would determine emissions reductions through changes in emissions levels in relation to a predetermined unit of output of the organization. 
For example, measurement units could include emissions per unit of revenue earned or emissions per unit of product produced. The concept of developing a standard rate for different industries and industry sectors has also been proposed. For example, the Coalition to Advance Sustainable Technology has supported the rate-based approach because it believes that approach would accommodate a wide range of businesses and industries and attract a greater cross-section of U.S. companies to participate in early efforts to reduce their greenhouse gas emissions. Who owns the emissions reductions is another issue that will need to be addressed in developing a credit for early action program. While ownership would appear to be easily determined, it is not always clear to the involved parties. Resolving this issue is important because, without clear ownership, there may be problems in reporting and counting emissions reductions. Ownership of a reduction can be based on a legal determination, established under a contractual arrangement, or can be established by what has been called the chain of causation—who caused the emissions to occur. Central to the ownership issue are the links between parties who may view responsibility for emissions reductions differently, and each may have a legitimate argument for its perspective. An example of the links between manufacturers, retailers, consumers, and power-generating companies reflects the significance and potential complexity of the issue. An appliance manufacturer building a highly energy-efficient product with performance exceeding normal energy efficiency standards for similar products provides an opportunity for several parties to claim emissions reductions. The retailer carrying the product promotes it as a power saver. The electric utility offers rebates to customers for purchasing it. The consumer buys the product, accepts the rebate, and uses less electricity. The electric utility generates less electricity from fossil fuels, thus reducing its greenhouse gas emissions. Thus, responsibility for the emissions reductions and credit is hard to distinguish. Depending on one's position, any of the parties—the manufacturer, the retailer, the consumer, or the electric utility—could be the owner and claim the credit. Under the flexibility of the Voluntary Reporting Program, all parties could have submitted claims from this activity. To help address the potential for duplication, the program established the concept of "direct" and "indirect" ownership, which attempts to categorize the claims. Direct ownership refers to emissions from a source owned and controlled by an organization. Indirect ownership refers to emissions that an organization, in some sense, "caused" to occur, although it did not own or control the facility producing the emissions. This approach does not, however, resolve the issue of who would be credited for the claim, and as a result, there is the potential for the double reporting of a reduction. How ownership issues are resolved would likely influence the size and scope of a credit for early action program. DOE's Energy Information Administration and the Environmental Defense Fund have pointed out that determining ownership and reporting responsibility would influence the size and scope of a credit for early action program. Environmental Defense Fund officials have indicated that a decision might need to be made on whether all U.S. 
greenhouse gas emitters should report emissions reductions or whether only the largest companies, those emitting the majority of greenhouse gases, should report. This decision depends on whether the goal of the program is to stimulate wide participation, to focus on where the greatest potential for reductions can be achieved, or some combination of both goals. In this regard, the Center for Clean Air Policy has raised the question of whether participation should include fuel producers or fuel users or both and thought that a credit program should focus on fuel users. Determining how claims for emissions reductions should be reported is another important issue in designing a credit for early action program. This issue focuses on whether emissions are recognized at the project or organizationwide level. Reporting at the organization level would indicate whether an entire organization is actually reducing its overall greenhouse gas emissions. Reporting at the project level would likely reflect the positive results of selected projects but would not convey information on an organization's overall achievement. For example, suppose a large electric power utility reported carbon dioxide reductions from replacing a boiler in one of its four coal-fired plants with a new gas-fired boiler that produced lower emissions. The company could claim the difference between the emissions of the coal- and gas-fired boilers as a reduction in carbon dioxide. While this claim could appear to reflect a reduction in emissions, if the company did not report that it also had to increase the generating time of its other three coal-fired plants to produce the same amount of electricity, the claim would not accurately reflect companywide emissions. In this case, a net increase would have occurred, not a reduction in the company's total emissions. Some organizations believe that any emissions reductions are valuable and should be encouraged and receive some type of credit. The Edison Electric Institute believes that any effort to deny credit for reductions at the project level would discourage companies from taking early actions to reduce their greenhouse gas emissions. It believes that a more flexible approach should be taken to increase participation and reductions at this early stage of our national efforts to reduce greenhouse gas emissions. The reporting level has also been addressed by several other groups involved in the issue of reporting emissions reductions. In its position statement on credit for early action, the Center for Clean Air Policy said that participants in such a program should report on a comprehensive companywide level. The Center also stated that adjustments should be made for changes to or replacements of a company's assets. The Environmental Defense Fund has expressed support for companywide reporting over project-level reporting for similar reasons, namely that the latter does not provide an accurate picture of a company's total emissions reductions. DOE's Energy Information Administration has stated that, without companywide reporting, it would not be possible to determine if a company's overall emissions were reduced. While the Voluntary Reporting Program permitted emissions reduction claims at both the organization and project level, the program was not designed to automatically grant credit for emissions reductions and thus preserved opportunities to report alternative approaches. 
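A small numeric sketch may help to show why the reporting level matters. The figures below are invented solely to illustrate the boiler example above; they are not drawn from the program's data. The sketch contrasts the project-level claim for the converted plant with the companywide net change when the other plants run longer.

```python
# Hypothetical emissions figures (million metric tons of carbon dioxide
# equivalent per year), invented to illustrate the boiler example: plant 1's
# coal-fired boiler is replaced with a gas-fired boiler, but the other three
# plants run longer to make up the lost generation.
before = {"plant_1": 4.0, "plant_2": 3.0, "plant_3": 3.0, "plant_4": 3.0}
after  = {"plant_1": 2.5,                                   # gas-fired boiler emits less here
          "plant_2": 3.8, "plant_3": 3.8, "plant_4": 3.8}   # longer running times

# Project-level claim: only the converted plant is counted.
project_level_reduction = before["plant_1"] - after["plant_1"]

# Companywide accounting: net change across all four plants.
companywide_reduction = sum(before.values()) - sum(after.values())

print(f"Project-level claim:   {project_level_reduction:+.1f}")  # +1.5, looks like a reduction
print(f"Companywide reduction: {companywide_reduction:+.1f}")    # -0.9, actually a net increase
```

Under these assumed numbers, a project-level claim would show a reduction of 1.5 million metric tons even though the company's total emissions rose by 0.9 million metric tons, which is the distortion the companywide-reporting proponents describe.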
Providing some assurance that claims for emissions reductions are legitimate and accurately developed will also be a key issue in determining any credits for reductions through an early action program. There appears to be a consistent view that these claims would need to receive some type of review and verification. The options for verification range from self-review and -certification to an independent third-party review. The Voluntary Reporting Program uses self-review and -certification, with program managers reviewing reported information for internal consistency, accurate calculations, and comparisons with other sources of information. However, the program has no procedures to review or verify the supporting documentation to determine if emissions reductions actually occurred. Program officials at the Energy Information Administration said that accurate reporting is encouraged because the reports are open to public scrutiny and that it is illegal to knowingly submit false information on a certified submission. In its position paper on an early credit program, the Coalition to Advance Sustainable Technology addressed the benefits of establishing a technical group to develop guidelines, standards for quantifying estimates, and protocols for making emissions reduction claims and their review. In contrast, the U.S. Initiative on Joint Implementation, which has been promoting joint initiatives between U.S. companies and non-U.S. partners to reduce emissions of greenhouse gases, is in the process of developing procedures for an independent third-party review and verification of projects included in the initiative. Many of the claims for reductions of greenhouse gas emissions submitted to the Voluntary Reporting Program would probably be ineligible for credit, depending on how restrictive the crediting mechanism is. While the voluntary program was designed to encourage wide participation by allowing companies to submit emissions reduction claims under flexible alternative reporting criteria, it was not designed to automatically provide emissions credits to participants. A program to grant credits for early actions taken to reduce greenhouse gas emissions would probably require more restrictive reporting criteria to help ensure that the reductions claimed are real, not being double reported by others, and accurately determined. The Voluntary Reporting Program was designed to provide companies with wide flexibility in reporting their claims for reductions of greenhouse gas emissions. Depending on the type of credit program developed, reductions reported under the voluntary program may or may not meet the new reporting criteria. According to DOE's Energy Information Administration and industry experts we identified who were considering options for how a credit for early action program might be developed, such a program could establish more restrictive definitions of, among other things, the baseline, or point from which to measure reductions, than the voluntary program. With that in mind, we analyzed two sets of more restrictive reporting criteria that potentially could be part of a future credit for early action program and compared them to the voluntary program. Each set of criteria varied by the range of restrictions reflecting lower and upper boundaries placed upon the participants. 
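Because the more restrictive criteria differ most visibly in the baseline they would require, a minimal sketch may help to show how the choice of baseline changes what would count as a creditable reduction. The emissions values reuse the earlier example of a company that emitted 5.5 million metric tons in 1990 and 8.5 million metric tons today; the projected baseline value and the function itself are assumptions made only for illustration, not the program's actual calculation.

```python
# Illustrative only: how the choice of baseline changes the creditable amount.
# Values are in million metric tons of carbon dioxide equivalent.
def creditable_reduction(actual_emissions, baseline):
    """Only emissions below the chosen baseline count as creditable."""
    return max(baseline - actual_emissions, 0.0)

historical_baseline = 5.5   # the company's 1990 emissions (from the earlier example)
projected_baseline = 9.0    # assumed: what the company projects it would otherwise emit
actual_emissions = 8.5      # emissions in the reporting year (from the earlier example)

print(creditable_reduction(actual_emissions, historical_baseline))  # 0.0 -- still above the 1990 level
print(creditable_reduction(actual_emissions, projected_baseline))   # 0.5 -- below its own projection
```

Under a historical baseline the company would earn no credit until it fell back below its 1990 level, while under a projected baseline the same actual emissions would yield a creditable reduction.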
Table 1 analyzes each of the four basic issues needing resolution in an early credit program by using three sets of reporting criteria: the current flexible Voluntary Reporting Program and two sets of more restrictive criteria. The first column lists the four basic issues as questions to decide. The second column describes the flexible criteria currently used in the Voluntary Reporting Program. The third and fourth columns describe the more restrictive criteria. Comparing the two sets of more restrictive reporting criteria that could be part of a future credit for early action program (see table 1) to claims reported to the Voluntary Reporting Program indicates that many would probably be ineligible for credit. According to information from DOE's Energy Information Administration, the issues of (1) how emissions reductions should be estimated and (2) whether the emissions reduction claims should be reported at the organization, project, or some other level illustrate why many reported claims would probably be ineligible for credit. For example, according to the Energy Information Administration's reports summarizing the results of the first and second years of the Voluntary Reporting Program, only 22 of the 250 companies reporting, or about 9 percent, reported organizationwide reductions from some historical baseline and thus would meet the "Restricted" reporting criteria. With regard to the issue of reporting level, only 91, or about 36 percent, of the organizations claimed emissions reductions for their entire company and thus would meet the organizationwide reporting criterion of the "Somewhat restricted" criteria. Since we could not easily determine how many of the current participants in the Voluntary Reporting Program are reporting ownership of emissions reductions that may also be reported by others, we did not compare claims on the ownership issue to the two sets of more restrictive reporting criteria. Under the Voluntary Reporting Program, no independent verification was required. Therefore, if some form of independent verification were required to receive a credit for reductions through an early action program, none of the current claims submitted to the Voluntary Reporting Program would be automatically eligible without further review or some demonstration that independent verification had been done. We provide further perspective on these four issues by selecting some examples of actual reduction claims submitted to the Voluntary Reporting Program to show how they would fare under more restrictive reporting criteria (see app. III). We provided a draft of this report to DOE for its review and comment. We obtained comments on the results of our work from the Department of Energy, including the Director, Office of Economic, Electricity, and Natural Gas Analysis, Office of the Assistant Secretary for Policy; and the Director, International Greenhouse Gases, and Macroeconomic Division (the division that is responsible for administering the Voluntary Reporting Program), Energy Information Administration. DOE agreed with the information in the report and observed that while the Voluntary Reporting Program was not specifically designed to automatically provide credit, it does provide a mechanism for organizations to demonstrate their achieved reductions of greenhouse gas emissions, should a credit for early action program be established. In this regard, we clarified the text throughout the report to include this observation. 
DOE also made other clarifying comments, which we incorporated as appropriate. We conducted our work from July 1998 through October 1998 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Energy and the Administrator of the Energy Information Administration. We will also make copies available to others on request. If you or your staff have any questions concerning this report, please call me at (202) 512-3841. Major contributors to this report are listed in appendix IV. To address the question of issues associated with developing a credit for early action program, we primarily reviewed documentation and conducted interviews with the Department of Energy (DOE) and other groups involved in reporting emissions of greenhouse gases. We conducted interviews on issues related to reporting greenhouse gas emissions and the concept of a credit for early action with officials and professional staff from DOE's Energy Information Administration's Voluntary Reporting Program. We reviewed the Voluntary Reporting Program's reporting guidance and issues identified by the program officials through their experience with emissions reduction claims. We also reviewed the program's summary reports and a cross section of reports claiming emissions reductions by participants in the program. We obtained the views and the perspectives of other public and private sector groups involved in issues relating to global climate change and greenhouse gas emissions. We obtained and reviewed available documentation from several organizations involved with the issues, including the Center for Clean Air Policy, the Coalition to Advance Sustainable Technology, the Environmental Defense Fund, the Edison Electric Institute, the Pew Center on Global Climate Change, the Nature Conservancy, and Resources for the Future. When possible, we interviewed some of these groups to obtain additional information on their positions. We also obtained and reviewed related reports and information from the Congressional Research Service and the Environmental Protection Agency. To address the question of how emissions reduction claims that are submitted to the Voluntary Reporting Program might be considered under a credit for reductions through an early action program, we reviewed program data on the basic issues and judgmentally selected several examples of claims that reflected some of those basic issues. We then considered two sets of more restrictive reporting criteria that could potentially be part of a credit for early action program and compared them to the Voluntary Reporting Program. This comparison highlights some of the decisions that have to be made in developing a credit for early action program. As agreed with your office, we did not present the names of the organizations whose emissions reduction claims we used, in part, to illustrate how some claims could fare under different emissions reporting criteria. This appendix provides a brief description of some of the other issues that may need to be considered in designing a credit program for early actions to reduce greenhouse gas emissions. These descriptions are not intended to be all-inclusive but rather a brief overview of each issue and why it is important. 
An emissions trading system provides a vehicle for the transfer of ownership of emissions reduction credits from one party to another. Some groups have suggested that a trading system would be an incentive for organizations to participate in an early credit program. This is because some organizations may have difficulty reducing their greenhouse gas emissions, while others may be capable of reducing their emissions significantly. Therefore, an emissions trading system provides an economic incentive for companies to achieve maximum levels of emission reductions at the least cost. Companies could choose the lower cost option of either buying credits or making the changes in their operations to reduce their own emissions. Carbon sequestration is the capturing of carbon dioxide from the atmosphere through the process of photosynthesis. It plays a significant role in reducing the amount of carbon dioxide in the atmosphere; each year, about 100 billion metric tons of carbon dioxide is captured in trees and other vegetation throughout the world. At issue are concerns about how estimates are developed and what source data are used. For example, according to a recent Congressional Research Service report examining sequestration projects reported to the Voluntary Reporting Program, the sequestration claims were difficult to compare because of variations in how the quantities were measured and the source data used for the estimates. Therefore, how sequestration projects will be handled in a credit for early action program becomes an important issue. There are differing views on the issue of whether to recognize emissions reductions that would have occurred anyway, without the incentive of a credit for early action program. Some organizations thought that if an organization took an action that would be considered part of its normal business activities and it also happened to reduce greenhouse gas emissions, it should not receive recognition for this reduction because it would have occurred anyway. Other organizations, including the Edison Electric Institute, believe that any efforts to reduce greenhouse gases are worthy of some type of recognition and that putting restrictions on these kinds of reductions could discourage participation in an early action program. Numerous gases affect the Earth's atmosphere and act as "greenhouse gases" that trap heat from sunlight at, or close to, the Earth's surface. In addition to the six greenhouse gases that were recognized in the Kyoto Protocol, there are other greenhouse gases, as well as other gases that have "indirect effects" on global warming because they may contribute to the buildup or decomposition of greenhouse gases in the atmosphere. Some of these gases include carbon monoxide and volatile organic compounds other than methane. Because the Voluntary Reporting Program allows the reporting of these gases, there may be a need to consider to what extent they should be included in an early action program. Should an organization reporting to the Voluntary Reporting Program be treated differently under a new credit for early action program? How will growing companies be able to reduce greenhouse gas emissions without affecting economic success? Should growing and declining companies be treated the same under a credit for early action program? Should there be restrictions on who is eligible to receive emissions credits? Should a credit for early action program be focused on the entire U.S. economy or selected segments that represent the majority of greenhouse gas emitters? 
How should a historical base for measuring reductions be adjusted for corporate mergers and acquisitions? How should companies having no historical data be treated?

The following examples of actual claims of greenhouse gas emissions reductions that companies have reported to the Voluntary Reporting Program serve to (1) further illustrate the four basic issues that will likely need to be addressed in designing a credit for early action program and (2) show how such claims may be evaluated if more restrictive reporting criteria were established. We used examples contained in DOE’s Energy Information Administration publications that summarize the results of the program’s first and second years. We supplemented these examples with information contained in the program’s public database.

The first issue is how greenhouse gas emissions reductions should be estimated. A large investor-owned utility located in the Midwest produces electric power from several fossil-based plants and one nuclear plant. It compared its 1991, 1992, and 1993 emissions to those that had occurred in 1990 to calculate its emissions reductions. However, in 1994 its nuclear plant was shut down because of an equipment failure. To compensate for the lost electricity that had been generated from its nuclear power plant, it increased generation from its fossil plants, reduced sales, and purchased electricity from another company. As a result, in 1994, its emissions rose for the first time beyond its 1990 baseline and it reported an emissions increase. This example meets the “Restricted” reporting criterion shown in table 1 on determining reductions from a historical baseline. However, the utility would not have received credit in 1994 because its emissions increased above its 1990 level.

The second basic issue is how emissions reduction ownership should be determined. A large investor-owned utility took several specific actions to improve the reliability and performance of its two nuclear power generators (one unit is 100 percent owned by the utility and the other unit is 41 percent owned). One action increased the time between refueling from 18 months to 24 months. Another action decreased the number of days for each refueling outage. A third action improved maintenance procedures, which reduced forced outages and automatic shutdowns. As a result of these actions, the utility claimed total cumulative reductions in carbon dioxide emissions of more than 11 million metric tons from 1991 to 1994, compared with its 1990 baseline. The utility reported only 41 percent (its ownership share) of the emissions reductions for the second unit. This example meets the “Restricted” reporting criterion shown in table 1 on reporting only those emissions reductions that are directly owned.
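To make the arithmetic behind these two examples concrete, the short sketch below uses hypothetical figures (none of the numbers, other than the 41-percent ownership share, come from the companies’ actual filings) to compute reductions against a fixed 1990 baseline and to apportion a jointly owned unit’s reduction by ownership share. The function names and data are illustrative assumptions only.

```python
# Illustrative sketch only: hypothetical figures, not actual Voluntary
# Reporting Program data. It shows (1) reductions measured against a fixed
# 1990 historical baseline and (2) a part-owner claiming only its ownership
# share of a jointly owned unit's reduction.

BASELINE_YEAR = 1990

def reductions_from_baseline(emissions_by_year, baseline_year=BASELINE_YEAR):
    """Reduction (positive) or increase (negative) in each year relative to the baseline year."""
    baseline = emissions_by_year[baseline_year]
    return {year: baseline - value
            for year, value in emissions_by_year.items()
            if year != baseline_year}

def owned_share(total_reduction, ownership_fraction):
    """Portion of a unit's reduction that a part-owner reports under an ownership-based rule."""
    return total_reduction * ownership_fraction

# Hypothetical utility emissions in million metric tons of carbon dioxide.
emissions = {1990: 50.0, 1991: 48.0, 1992: 47.5, 1993: 46.0, 1994: 52.0}
print(reductions_from_baseline(emissions))
# {1991: 2.0, 1992: 2.5, 1993: 4.0, 1994: -2.0} -- the negative 1994 value
# corresponds to the year in which no credit would be earned.

# A unit that is 41 percent owned: report only that share of its 3.0 reduction.
print(owned_share(3.0, 0.41))  # approximately 1.23
```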
Another example helps to clarify the differences between “direct” and “indirect” ownership of emissions reductions. A printing company based in Wisconsin initiated some projects to reduce its own and its employees’ demand for transportation services. These projects included (1) a return load policy requiring its trucks not to return empty, thus saving 8 million vehicle miles per year; (2) a change from three 8-hour shifts to two 12-hour shifts, which allows employees to work fewer days per year, thus reducing their commuting trips and associated emissions by an estimated 20 million miles in 1995; (3) the redevelopment of an existing building structure that was closer to town and workers’ homes than a proposed new site, thus saving workers an estimated 3.5 million vehicle miles in 1995; and (4) an arrangement for the public transportation system to have buses provide service to its plants, thus reducing the number of employees’ vehicle round trips by 23,185 and saving more than 20,000 gallons of gasoline. Under the Voluntary Reporting Program, the company claimed “direct” emissions reductions associated with the return load policy and “indirect” reductions associated with the shift change, the building redevelopment, and the public transportation projects. Under the “Restricted” reporting criterion shown in table 1 on obtaining credit for only those reductions directly owned, the company would receive credit only for the return load policy savings because the company directly owned the trucks and their emissions. Under the “Somewhat restricted” criterion on ownership, the company would receive credit for the indirect reductions in workers’ driving miles if the company could show that the employees and the bus company did not claim them.

The third basic issue concerns the level at which emissions reductions should be reported. An investor-owned utility located in the Midwest built a 16-megawatt, natural-gas-fired cogeneration facility to meet the electricity and steam needs of a grain-processing company. The grain company retired its own coal-fired boilers and less efficient gas-fired boilers that had been used to make the steam needed for its operations. On a project-level basis, the utility reported direct and indirect emissions reductions resulting from the operation of the cogeneration facility and the shutdown of the grain processor’s steam boilers. The utility claimed direct emissions reductions on the basis that the cogeneration facility displaced electricity generation that would otherwise have occurred at its coal-fired plant. It also claimed indirect reductions from, among other things, the grain company’s previous replacement of the coal-fired steam boilers with gas-fired steam boilers. None of the company’s claims would have been accepted under the “Restricted” or “Somewhat restricted” criteria because these claims were reported at a project level, and it is unknown whether the company as a whole had net reductions.

The last basic issue concerns how emissions reductions are verified. All companies reporting to the Voluntary Reporting Program are required to self-certify the accuracy of their emissions reduction estimates. Independent or third-party verification is not required. As a result, we were not able to find any company that went beyond the self-certification process.

Key contributors to this report were Daniel M. Haas, Richard E. Iager, and Michael S. Sagalow.
Pursuant to a congressional request, GAO provided information on the Department of Energy's (DOE) proposal to develop a credit for early action program to encourage early reductions of greenhouse gas emissions, focusing on: (1) some of the basic issues that have to be addressed by any effort to develop a credit for early action program; and (2) how claims for reductions of greenhouse gas emissions that are reported to the Voluntary Reporting Program might fare under a credit for early action program that has less flexible reporting criteria. GAO noted that: (1) it identified four basic issues that will have to be addressed to develop a credit for early action program to reduce greenhouse gas emissions: (a) how emissions reductions should be estimated; (b) how emissions reduction ownership should be determined; (c) whether emissions reduction claims should be reported at the organization, project, or some other level; and (d) how emissions reduction claims should be verified; (2) on the surface, these issues appear straightforward; in fact, they are complicated and will require difficult choices; (3) furthermore, the resolution of these issues will likely influence the design of a credit for early action program; (4) the amount of flexibility such a program would provide on each of these issues would ultimately help to determine the extent of participation and the credit awarded; (5) many of the claims for reducing greenhouse gas emissions that have been submitted to the Voluntary Reporting Program would probably be ineligible for credit under a new program having more restrictive reporting criteria; (6) this is because the voluntary program was designed to encourage wide participation by allowing companies to submit emissions reduction claims under flexible reporting criteria and was not designed to automatically provide credit to participants for emissions reductions; (7) for example, the voluntary program, among other things, allowed companies discretion in determining the basis from which their emissions reductions were estimated and allowed companies to self-certify that their claims were accurate; and (8) according to DOE's Energy Information Administration and other organizations, such as the Edison Electric Institute and the Environmental Defense Fund, a credit for early action program could require more restrictive reporting criteria than the Voluntary Reporting Program to help ensure that emissions reduction claims are real, appropriately reviewed, and verified.
DOD is a massive and complex organization. To illustrate, the department reported that its fiscal year 2006 operations involved approximately $1.4 trillion in assets and $2.0 trillion in liabilities; more than 2.9 million military and civilian personnel; and $581 billion in net cost of operations. Organizationally, the department includes the Office of the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, the military departments, numerous defense agencies and field activities, and various unified combatant commands that are responsible for either specific geographic regions or specific functions. In support of its military operations, the department performs an assortment of interrelated and interdependent business functions, including logistics management, procurement, health care management, and financial management.

As we have previously reported, the DOD systems environment that supports these business functions is overly complex and error-prone, and is characterized by (1) little standardization across the department, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, and (4) the need for data to be entered manually into multiple systems. Moreover, DOD recently reported that this systems environment comprises approximately 3,100 separate business systems. For fiscal year 2006, Congress appropriated approximately $15.5 billion to DOD, and for fiscal year 2007, DOD has requested about $16 billion in appropriated funds to operate, maintain, and modernize these business systems and associated infrastructure. As we have previously reported, the department’s nonintegrated and duplicative systems contribute to fraud, waste, and abuse. In fact, DOD currently bears responsibility, in whole or in part, for 15 of our 27 high-risk areas. Eight of these areas are specific to DOD, and the department shares responsibility for 7 other governmentwide high-risk areas. DOD’s business systems modernization is one of the high-risk areas, and it is an essential enabler to addressing many of the department’s other high-risk areas. For example, modernized business systems are integral to the department’s efforts to address its financial, supply chain, and information security management high-risk areas.

Effective use of an enterprise architecture—a modernization blueprint—is a hallmark of successful public and private organizations. For more than a decade, we have promoted the use of architectures to guide and constrain systems modernization, recognizing them as a crucial means to this challenging goal: optimally defined operational and technological environments. Congress, the Office of Management and Budget (OMB), and the federal Chief Information Officer’s (CIO) Council have also recognized the importance of an architecture-centric approach to modernization. The Clinger-Cohen Act of 1996 mandates that an agency’s CIO develop, maintain, and facilitate the implementation of an information technology (IT) architecture. Furthermore, the E-Government Act of 2002 requires OMB to oversee the development of enterprise architectures within and across agencies. In addition, we, OMB, and the CIO Council have issued guidance that emphasizes the need for system investments to be consistent with these architectures.
An enterprise architecture provides a clear and comprehensive picture of an entity, whether it is an organization (e.g., a federal department) or a functional or mission area that cuts across more than one organization (e.g., financial management). This picture consists of snapshots of both the enterprise’s current (“As Is”) environment and its target (“To Be”) environment. These snapshots consist of “views,” which are one or more interdependent and interrelated architecture products (e.g., models, diagrams, matrices, and text) that provide logical or technical representations of the enterprise. The architecture also includes a transition or sequencing plan, which is based on an analysis of the gaps between the “As Is” and “To Be” environments. This plan provides a temporal road map for moving between the two environments and incorporates such considerations as technology opportunities, marketplace trends, fiscal and budgetary constraints, institutional system development and acquisition capabilities, legacy and new system dependencies and life expectancies, and the projected value of competing investments. The suite of products produced for a given entity’s enterprise architecture, including its structure and content, is largely governed by the framework used to develop the architecture. Since the 1980s, various architecture frameworks have been developed, such as John A. Zachman’s “A Framework for Information Systems Architecture” and the DOD Architecture Framework. The importance of developing, implementing, and maintaining an enterprise architecture is a basic tenet of both organizational transformation and systems modernization. Managed properly, an enterprise architecture can clarify and help optimize the interdependencies and relationships among an organization’s business operations (and the underlying IT infrastructure and applications) that support these operations. Moreover, when an enterprise architecture is employed in concert with other important management controls, such as portfolio-based capital planning and investment control practices, architectures can greatly increase the chances that an organization’s operational and IT environments will be configured to optimize mission performance. Our experience with federal agencies has shown that investing in IT without defining these investments in the context of an architecture often results in systems that are duplicative, not well integrated, and unnecessarily costly to maintain and interface. One approach to structuring an enterprise architecture is referred to as a federated enterprise architecture. Such a structure treats the architecture as a family of coherent but distinct member architectures that conform to an overarching architectural view and rule set. This approach recognizes that each member of the federation has unique goals and needs as well as common roles and responsibilities with the levels above and below it. Under a federated approach, member architectures are substantially autonomous, although they also inherit certain rules, policies, procedures, and services from higher-level architectures. As such, a federated architecture enables component organization autonomy, while ensuring enterprisewide linkages and alignment where appropriate. Where commonality among components exists, there are also opportunities for identifying and leveraging shared services. 
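As a purely illustrative aside, the following sketch models the federation idea described above in a few lines of code: member architectures inherit an overarching rule set from the parent while keeping their own local rules, and capabilities that appear in more than one member surface as candidates for shared services. The structure, class names, and data are assumptions made for illustration and do not represent DOD's actual architecture tooling.

```python
# Minimal illustration of a federated architecture: members inherit parent
# rules while keeping local autonomy, and overlapping capabilities surface
# as candidates for shared services. All names and data are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Architecture:
    name: str
    rules: set = field(default_factory=set)          # policies/standards defined at this level
    capabilities: set = field(default_factory=set)   # business capabilities described here
    parent: "Architecture | None" = None

    def effective_rules(self):
        """Local rules plus every rule inherited from higher-level architectures."""
        inherited = self.parent.effective_rules() if self.parent else set()
        return inherited | self.rules

def shared_service_candidates(members):
    """Capabilities that appear in more than one member architecture."""
    seen, shared = set(), set()
    for member in members:
        shared |= member.capabilities & seen
        seen |= member.capabilities
    return shared

enterprise = Architecture("Enterprise", rules={"common data standard"})
component_a = Architecture("Component A", rules={"component A policy"},
                           capabilities={"payroll", "asset tracking"}, parent=enterprise)
component_b = Architecture("Component B",
                           capabilities={"payroll", "supply ordering"}, parent=enterprise)

print(component_a.effective_rules())                           # inherits the enterprise rule
print(shared_service_candidates([component_a, component_b]))   # {'payroll'}
```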
A service-oriented architecture (SOA) is an approach for sharing business capabilities across the enterprise by designing functions and applications as discrete, reusable, and business-oriented services. As such, service orientation permits sharing capabilities that may be under the control of different component organizations. As we have previously reported, such capabilities or services need to be, among other things, (1) self-contained, meaning that they do not depend on any other functions or applications to execute a discrete unit of work; (2) published and exposed as self-describing business capabilities that can be accessed and used; and (3) subscribed to via well-defined and standardized interfaces. A SOA approach is thus not only intended to reduce redundancy and increase integration, but also to provide the kind of flexibility needed to support a quicker response to changing and evolving business requirements and emerging conditions.

The Office of the Assistant Secretary of Defense (Networks and Information Integration)/Chief Information Officer (ASD(NII)/CIO) reports that it is developing a strategy for federating the many and varied architectures across the department’s four mission areas—Warfighting, Business, DOD Intelligence, and Enterprise Information Environment. According to ASD(NII)/CIO officials, they are drafting a yet-to-be-released strategy for evolving DOD’s Global Information Grid architecture, so that it provides a comprehensive architectural description of the entire DOD enterprise, including all mission areas and the relationships between and among all levels of the enterprise (e.g., mission areas, components, and programs). Figure 1 provides a simplified depiction of DOD’s EA federation strategy. ASD(NII)/CIO officials stated that the goal of this strategy is to improve the ability of DOD’s mission areas, components, and programs to share architectural information. In this regard, officials stated that the DOD EA federation strategy will define (1) federation and integration concepts, (2) alignment (i.e., linking and mapping) processes, and (3) shared services. The BMA federation strategy, according to these officials, is the first mission area federation strategy, and it is their expectation that the other mission areas will develop their own respective federation strategies.

In 2005, the department reassigned responsibility for directing, overseeing, and executing its business transformation and systems modernization efforts to the Defense Business Systems Management Committee (DBSMC) and the Business Transformation Agency (BTA). At that time, it also adopted a tiered accountability approach to business transformation. Under tiered accountability, responsibility and accountability for business architectures and systems investment management were allocated among the DOD enterprise, component, and program levels, depending on such factors as the scope, size, and complexity of each investment. The DBSMC is chaired by the Deputy Secretary of Defense and serves as the highest-ranking governance body for business systems modernization activities. According to its charter, the DBSMC provides strategic direction and plans for the BMA in coordination with the Warfighting and Enterprise Information Environment Mission Areas. The DBSMC is also responsible for reviewing and approving the BEA and the ETP.
In addition, the DBSMC recommends policies and procedures required to integrate DOD business transformation and attain cross-department, end-to-end interoperability of business systems and processes. The BTA operates under the authority, direction, and control of the DBSMC and reports to the Under Secretary of Defense for Acquisition, Technology, and Logistics in the incumbent’s capacity as the vice chair of the DBSMC. Oversight for this agency is provided by the Deputy Under Secretary of Defense for Business Transformation, and day-to-day management is provided by the director. The BTA’s primary responsibility is to lead and coordinate business transformation efforts across the department. Regarding the BEA, the BTA is responsible for (1) maintaining and updating the department’s architecture; (2) ensuring that functional priorities and requirements of various defense components, such as the Department of the Army and Defense Logistics Agency (DLA), are reflected in the architecture; and (3) ensuring the adoption of DOD-wide information and process standards as defined in the architecture. Under DOD’s tiered accountability approach to systems modernization, components are responsible for defining their respective component architectures and transition plans while complying with BEA and ETP policy and requirements. Similarly, program managers are responsible for developing program-level architectures and transition plans and ensuring integration with the architectures and transition plans developed and executed at the DOD enterprise and component levels.

Between May 2001 and July 2005, we reported on DOD’s efforts to develop an architecture and identified serious problems and concerns with the department’s architecture program, including the lack of specific plans outlining how DOD intended to extend and evolve the architecture to include the missing scope and detail. To address these concerns, in September 2003 we recommended that DOD develop a well-defined near-term plan for extending and evolving the architecture and ensure that this plan includes addressing our recommendations, defining roles and responsibilities of all stakeholders involved in extending and evolving the architecture, explaining dependencies among planned activities, and defining measures of progress for the activities. In response to our recommendations, in 2005, DOD adopted a 6-month incremental approach to developing its enterprise architecture and released version 3.0 of the BEA and the ETP in September 2005, describing them as the initial baselines. DOD further released version 3.1 on March 15, 2006, and version 4.0 on September 28, 2006. As we have previously reported, these incremental versions have provided additional content and clarity and resolved limitations that we identified in the prior versions. For example, DOD reports that version 4.0 begins to define a key business process area missing from prior versions—the planning, programming, and budgeting process area. In this regard, according to DOD, the architecture includes departmental and other federal planning, programming, and budgeting guidance (e.g., OMB Circular A-11) and some high-level activities associated with this area. In addition, DOD reports that version 4.0 included restructured business process models to reduce data redundancy and ensure adherence to process modeling standards (e.g., eliminated numerous process modeling standards violations and stand-alone process steps with no linkages).
We concluded, however, that these incremental versions were still not sufficiently complete to effectively and efficiently guide and constrain business system investments across the department. In particular, we reported that the BEA was not yet adequately linked to the component architectures and transition plans, which is important given that the department (1) had previously announced that it had adopted a federated approach to developing and implementing the architecture and (2) had yet to address our recommendation from September 2003 for developing an architecture development management plan that defined how it intended to extend and evolve its BEA. Accordingly, in May 2006 we recommended that DOD submit an enterprise architecture development management plan to defense congressional committees. We stated that at a minimum, the plan should define what the department’s incremental improvements to the architecture and transition plan would be and how and when they would be accomplished, including what (and when) architecture and transition plan scope and content and architecture compliance criteria would be added into which versions. In addition, we stated that the plan should include an explicit purpose and scope for each version of the architecture, along with milestones, resource needs, and performance measures for each planned version, with particular focus and clarity on the near-term versions. In response, DOD stated that, in the future, the ETP and annual report to Congress would provide additional high-level milestones for BTA activities, including the additional detail for the capability improvements to be addressed by the BEA. Our August 2006 report on the maturity of federal agency enterprise architecture programs, including those of the military departments, reemphasized the importance of DOD having an effective plan for federating its BEA. Specifically, the August report showed that the Departments of the Air Force, Army, and Navy had not satisfied about 30, 55, and 30 percent, respectively, of the 31 core elements in our Enterprise Architecture Management Maturity Framework, which is a five-stage model for effectively managing architecture governance, content, use, and measurement. In addition, the Army had only fully satisfied 1 of the 31 core elements. (See table 1 for the number of elements that were fully, partially, and not satisfied by each of the military departments.) By comparison, the other major federal departments and agencies that we reviewed had as a whole fully satisfied about 67 percent of the framework’s core elements. Among the key elements that all three military departments had not fully satisfied were developing architecture products that describe their respective target architectural environments and developing transition plans for migrating to a target environment. Furthermore, while the military departments had partially satisfied between 8 and 13 core elements in our framework, we reported that partially satisfied elements are not necessarily easy to satisfy fully, such as those that address architecture content and thus have important implications for the quality and usability of an architecture. To assist the military departments in addressing enterprise architecture challenges and managing their architecture programs, we recommended that the military departments develop and implement plans for fully satisfying each of the conditions in our framework. The department generally agreed with our findings and recommendations. 
DOD’s BMA federation strategy provides a foundation on which to build and align DOD’s parent business architecture (the BEA) with its subordinate architectures (i.e., component- and program-level architectures). In particular, this strategy (1) states the department’s federated architecture goals; (2) describes federation concepts that are to be applied; and (3) includes high-level activities, capabilities, products, and services that are intended to facilitate implementation of the concepts. However, DOD has yet to define the details needed to execute the strategy, such as how the architecture federation will be governed; how alignment with the DOD EA federation strategy and other potential mission area federation strategies will be achieved; how component architectures’ alignment with incremental versions of the BEA will be achieved; how shared services will be identified, exposed, and subscribed to; and what milestones will be used to measure progress and results. According to BTA program officials, including the chief technical officer, the department is in the early stages of defining and implementing its strategy and intends to develop more detailed plans. As a result, much remains to be decided and accomplished before DOD will have in place the means to create a federated architecture and thus be able to satisfy both our prior recommendations and legislative requirements aimed at adopting an architecture-centric approach to departmentwide business systems investment management.

BTA released the BMA federation strategy in September 2006. According to the strategy, its purpose is to expand on the DOD EA federation strategy and provide details on how various aspects of the federation will be applied within the department’s BMA. In this regard, the BMA strategy cites the following four goals: establish a capability to search for data in member architectures that may be relevant for analysis, reference, or reuse; develop a consistent set of standards for architecture configuration management that will enable users to determine the development status and quality of data in various architectures; establish a standard methodology for specifying linkages among existing component architectures that were developed using different tools and that are maintained in independent repositories; and develop a standard methodology to reuse capabilities described by various architectures.

To assist in accomplishing these goals, the strategy describes three concepts that are to be applied.

1. Tiered accountability, which provides for architecture development at each of the department’s organizational levels. Under this concept, each level or tier—enterprise, component, and program—has its own unique goals as well as responsibilities to the tiers above and below it. More specifically, the BTA has responsibility for the enterprise tier, including common, DOD-wide requirements and standards, while components and programs are responsible for defining component- and program-level architecture requirements and standards for their respective tiers of responsibility that are aligned with the departmentwide requirements and standards. As such, this concept introduces the need for autonomy, while also seeking to ensure linkages and alignment from the program level through the component level to the enterprise level.

2. Net-centricity, which provides for seamless and timely accessibility to information where and when needed via the department’s interconnected network environment.
This concept includes infrastructure, systems, processes, and people and is intended to ensure that users (i.e., people, applications, and platforms) of information at any level can both take what they need and contribute what they know.

3. Federating DOD architectures, which provides for linking or aligning different architectures via the mapping of common architectural information. This concept advocates subordinate architecture alignment to the parent architecture(s). Figure 2 shows a simplified version of DOD’s BMA federated architecture.

To support the achievement of its goals and implementation of its concepts, the strategy also describes three categories of high-level activities, capabilities, products, and services—governance, federating architecture operational views, and federating architecture systems views. Table 2 shows the strategy’s operational and systems view related activities, capabilities, products, and services.

Relevant architecture management guidance states that organizations should develop executable architecture development management plans and that these plans should specify, among other things, tasks to be performed, resources needed to perform these tasks (e.g., funding, staffing, tools, and training), roles and responsibilities, time frames for completing tasks, and performance measures. As previously stated, we have recommended that DOD develop such an architecture development plan to govern the evolution and extension of the BEA. We also have previously reported that a SOA approach needs to ensure that shared systems and applications (i.e., services) are, among other things, defined, developed, exposed, and subscribed to. The high-level construct of DOD’s BMA federation strategy and the yet-to-be-issued DOD EA federation strategy reinforces the need to implement our recommendation. In particular, the strategy defines the department’s federated architecture goals; describes federation concepts that are to be applied; and explains high-level activities, capabilities, products, and services intended to facilitate implementation of the concepts. However, it does not adequately define the tasks needed to achieve the strategy’s goals, including those associated with executing high-level activities and providing related capabilities, products, and services. Specifically, the strategy does not adequately address how strategy execution will be governed, including assignment of roles and responsibilities, measurement of progress and results, and provision of resources. In addition, while the BMA strategy refers to several activities that are to be provided by the yet-to-be-issued DOD EA federation strategy, it does not clearly describe the relationships, dependencies, and touch points between the two strategies. Also, the strategy does not address, among other things, how the architectures of the military departments will align with the latest version of the BEA and how DOD will identify and provide for sharing of common applications and systems across the department. Moreover, the strategy does not include milestones for executing the activities and related capabilities, products, and services. According to ASD(NII)/CIO officials, each mission area will be responsible for establishing its own governance structures, to include defined roles and responsibilities of its members (i.e., components and programs), and such governance disciplines as measurement of progress and results and provision of resources.
Moreover, officials from DOD components, such as the DLA and the Defense Information Systems Agency (DISA), told us that clearly defined and understood federation roles and responsibilities are critical to successfully executing the BMA strategy. However, the BMA strategy does not clearly define the respective roles and responsibilities of each member of the federation (i.e., enterprise, component, and program). It also does not identify the resource commitments (e.g., funding, staffing, tools, and training) needed to execute the strategy’s activities and deliver capabilities, products, and services, or identify how fundamental governance disciplines will be performed, including performance and progress measurement. For example:

The strategy states that the DBSMC, which is currently responsible for the approval and maintenance of the BEA, will receive updates on how component (e.g., the military departments) architectures are aligning to the BEA. However, it does not describe which organizational entities are to be responsible for providing these updates or for aligning component and program architectures to the BEA.

The strategy states that in conjunction with the DOD investment review boards, the DBSMC will set the business priorities at the enterprise level through the identification of gaps in business capabilities. By establishing these priorities, the DBSMC is to determine where and when specific capabilities are addressed within the different architectures (i.e., from BEA to program-level architectures) and is to approve recommended solutions to business capability needs. However, the strategy does not provide information on who is responsible for ensuring that component priorities fit with the overall enterprise priorities, or how the DBSMC will otherwise be provided the information it needs to fulfill its stated decision-making role.

The strategy states that BMA stakeholders will need to be trained to understand the concepts presented in the strategy and begins to identify topics, such as SOA and the overall federation strategy. However, the strategy does not identify time frames and the entity responsible for providing and overseeing such training. In addition, the strategy does not address how it will be funded and staffed.

The strategy identifies categories of high-level activities, capabilities, products, and services intended to facilitate implementation of the concepts, but it does not provide for metrics that can be used to gauge the progress and ensure that expected results are realized.

According to the BMA federation strategy, the DOD EA federation strategy outlines an approach for linking the repositories of all of the department’s various architectures and enabling search and navigation across them. In addition, it states that the DOD EA federation strategy outlines a series of pilot efforts that will demonstrate this approach. However, the BMA federation strategy does not clearly define how its various activities will integrate with the activities and concepts described in the yet-to-be-issued DOD EA federation strategy, or other potential mission area federation strategies, nor does it discuss how these activities will be carried out or who will be responsible for accomplishing them. For example: ASD(NII)/CIO officials told us that the DOD EA federation strategy will establish new responsibilities for components and programs for making architecture information understandable and accessible across the department.
However, these responsibilities are not explicitly discussed in the BMA federation strategy. Therefore, it is unclear how these new responsibilities are relevant to federating the BEA. Moreover, it is unclear how the BMA roles and responsibilities relate to the yet-to-be-released EA federation strategy roles and responsibilities. The BMA federation strategy does not define how linkages among the BEA and the various component and program architectures will be established, including whether program architectures will be linked to component architectures as well as the BEA, or if program architectures will be linked to the BEA, as is currently the case. Moreover, it is not clear if establishing these linkages will be the responsibility of the programs, components, the BTA, or ASD(NII)/CIO. According to the BMA federation strategy, it builds on the DOD EA federation strategy by proposing new tools and procedures to both identify overlaps and gaps in capabilities and ensure the compliance of all component and program architectures with the BEA. In this regard, it describes the following two tools: the Investment Management Framework, which is a spreadsheet that aligns program architectures’ capabilities (and activities) with the BEA, and the Architecture Compliance and Requirements Traceability tool, which is an automated tool that provides programs with an interface to the BEA so that they can assess their alignment with the BEA’s operational view content (e.g., business capabilities, activities, processes, rules, and standards). However, the strategy does not address how alignment of component architectures with the BEA is to be achieved, including what, if any, component architecture alignment guidance, criteria, and tools are to be developed and who will develop them. Specifically, while the strategy states that it provides for demonstration of operational view linkages (e.g., activities, process, and capabilities) between the BEA and both component and program architectures, the tools cited do not provide the capability to either align program architectures to component architectures or to align component architectures to the BEA. According to officials from the Air Force, Navy, and DLA, they are using the traceability tool to assess compliance of their programs with the BEA. However, this tool does not allow them to assess their programs’ compliance with their component architectures. In contrast, Army and U.S. Transportation Command officials told us that they do not require the use of the traceability tool to assess compliance of their programs to the BEA or their component architectures. According to BTA officials, they are currently working with the Air Force and Navy to expand this tool to include component architecture alignment capabilities. According to the BMA strategy, the systems view federation is the application of principles, standards, services, and infrastructure to create interoperable and reusable applications and systems. The strategy states that this will be accomplished through the delivery of services within a SOA construct, including an IT infrastructure that will expose reusable functionality to federation members and enable interoperation and interconnection of the business systems and applications that provide this functionality. The strategy notes that this operating environment will be comprised of applications, systems, metadata, and a unifying portal. 
According to the strategy, this environment will build on existing Enterprise Information Environment Mission Area capabilities and provide the standards, policies, and technology needed to permit BMA services to be shared with the other DOD mission areas. However, the strategy does not describe how this will be accomplished, including respective roles and responsibilities of those involved, the range of services to be shared and developed, and the standards to be used. Moreover, component officials told us that the details behind the strategy’s SOA concepts need to be defined before a systems view federation can be achieved. More specifically: The strategy does not clearly describe how interoperable services will be defined, developed, exposed, and subscribed to. For example, it does not delineate the specific roles and responsibilities of the military departments and defense agencies relative to defining, providing, and employing shared systems and applications. As a result, the military departments and defense agencies may pursue duplicative efforts. This is of particular concern due to the various service orientation activities already under way in the military departments and defense agencies. For example, the Air Force has chartered a Transparency Integrated Product Team to guide their SOA initiatives, and the Navy has established a Transformation Group to support its service orientation activities. This is important because a key aspect of the BMA federation strategy is reusing and leveraging both enterprise-level and component-level systems and applications. The strategy does not relate system federation activities and capabilities to its existing ETP. In particular, while the strategy describes a number of “leave-in-place” pilots (systems and applications) that will be implemented during the next year to demonstrate the use of shared services, it does not describe how these relate to programs in the ETP. This is important because the chief technical officer told us that many of the enterprise-level programs being managed by the BTA and included in the ETP are to evolve into shared services. The strategy does not describe how interface standards will be established and used for obtaining and delivering shared services. Defining and enforcing such standards are important aspects of having services that are interoperable and reusable. According to the BTA chief technical officer, these standards will need to align with the yet-to-be-issued Enterprise Information Environment Mission Area standards. Officials from the Air Force and DISA agreed that more needs to be done to define the infrastructure standards that will enable user subscription to reusable systems and applications, particularly since the military departments and DOD are moving ahead with their own SOA initiatives. The strategy outlines what it refers to as a high-level road map by listing activities, capabilities, products, and services that are to be produced. (See table 2 for this high-level road map.) However, the strategy does not specify the milestones or provide specific completion dates for the activities and related capabilities, products, and services listed in its high-level road map. Instead, the strategy states that the road map began in October 2006 and that milestones will occur at approximately 3-month increments, without identifying, for example, which steps have begun and what is to be accomplished over 3 months for each of the steps. 
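To illustrate what “exposing” a self-describing service and “subscribing” to it through a standardized interface can look like in practice, the following sketch shows a toy service registry. It is a generic illustration of the service-oriented pattern discussed above; the registry, service names, and interfaces are assumptions for illustration and do not represent any DOD system, tool, or standard.

```python
# Toy illustration of the service-oriented pattern described above: providers
# publish self-describing services to a registry, and consumers discover and
# invoke them through one standardized interface. Entirely hypothetical.
class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def publish(self, name, description, inputs, handler):
        """Expose a service along with metadata that describes how to call it."""
        self._services[name] = {
            "description": description,
            "inputs": inputs,
            "handler": handler,
        }

    def describe(self, name):
        """Return the self-describing metadata (no implementation details)."""
        entry = self._services[name]
        return {"name": name, "description": entry["description"], "inputs": entry["inputs"]}

    def invoke(self, name, **kwargs):
        """Single, well-defined way for any subscriber to call a published service."""
        return self._services[name]["handler"](**kwargs)

registry = ServiceRegistry()

# A component organization publishes a reusable capability once...
registry.publish(
    name="invoice_total",
    description="Sum a list of invoice amounts (hypothetical shared finance capability).",
    inputs={"amounts": "list of numbers"},
    handler=lambda amounts: sum(amounts),
)

# ...and any other federation member can discover and reuse it.
print(registry.describe("invoice_total"))
print(registry.invoke("invoice_total", amounts=[100.0, 250.5]))  # 350.5
```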
DOD is in the early, formative stage of federating its BEA, with much remaining to be decided and accomplished before it achieves its goals. While the goals, concepts, and related activities, capabilities, products, and services discussed in the strategy have merit and hold promise, the strategy lacks sufficient specificity for it to be executed and, therefore, must be viewed as a beginning. To the department’s credit, it recognizes the need for greater detail surrounding how it will extend (federate) its BEA. One key to making this happen is for the department to implement our prior recommendation for having a BEA development management plan. However, the department has yet to address this recommendation. Until it does, the likelihood of effectively extending the BEA to include the military departments and defense agencies is greatly reduced.

To further assist the department in evolving its BEA, we are reiterating our prior recommendation for a BEA development management plan, and augmenting it by recommending that the Secretary of Defense direct the Deputy Secretary of Defense, as the chair of the DBSMC, to task the appropriate DOD organizations to ensure that this plan describes, at a minimum, how the BMA architecture federation will be governed; how the BMA federation strategy alignment with the DOD EA federation strategy will be achieved; how component business architectures’ alignment with incremental versions of the BEA will be achieved; how shared services will be identified, exposed, and subscribed to; and what milestones will be used to measure progress and results.

In written comments on a draft of this report, signed by the DOD Deputy Chief Information Officer and the Deputy Under Secretary of Defense (Business Transformation) and reprinted in appendix II, the department stated that it largely disagrees with our recommendation and added that while the BMA played a leading role in defining the department’s approach to architecture federation and a service-oriented architecture, the impact of the issues discussed in this report goes beyond the scope of the business systems modernization. DOD also stated that any analysis of architecture federation should begin with the department’s approach and not the BMA, since the BMA federation strategy was written as an addendum to an enterprise approach. However, DOD added that it recognizes that our analysis was complicated by the fact that many of the enterprise-level strategy and governance documents, with which the BMA must comply, have yet to be issued. The department also made the following specific comments on the five elements in our recommendation.

First, DOD stated that it partially concurs with the element relating to architecture federation. According to DOD, responsibility for developing the policy and guidance regarding how architectures are to be managed within its federated environment lies with the ASD(NII)/CIO; officials acknowledge the current lack of such guidance and stated that this will be addressed with the issuance of the DOD EA federation strategy. As such, the department recommends that we address our recommendation to ASD(NII)/CIO. We agree on the current lack of and the need to develop policies and guidance describing how the federation will be governed; however, our recommendation is not intended to dictate who should develop the policies or guidance for managing architectures within a federated environment.
Rather, it is focused on developing plans that describe how the BMA will adopt and implement the policies and guidance relating to federation governance. Second, the department stated that it nonconcurs with the element relating to ensuring alignment with other federation strategies. According to DOD, there is a single architecture federation strategy for the department—the DOD EA federation strategy—and other architecture federation strategies supplement this overarching strategy. As such, it stated that this element of our recommendation is not needed. We disagree. While we do not question the department’s comment about the relationships among the strategies, we believe that this element of our recommendation is needed because its intent is to recognize these relationships by promoting collaboration and ensuring linkages among the various strategies. Third, DOD stated that it nonconcurs with the element relating to component architecture alignment with incremental versions of the BEA. According to DOD, this element has been implemented both in policy and execution to comply with legislative requirements, to include DOD’s development and use of the Architecture Compliance and Requirements Traceability tool. It also added that the Departments of the Air Force, Army, and Navy have mandated the use of this tool to assess compliance of their systems and architectures with the BEA. We disagree. The National Defense Authorization Act for Fiscal Year 2005 includes a requirement for ensuring that all business systems in excess of $1 million be certified as being in compliance with the BEA; the architecture traceability tool provides a mechanism for asserting only system compliance and not component architecture compliance. In addition, according to officials from the Air Force and Army, while they are encouraging the use of the tool for assessing compliance of their systems with the BEA, they have not mandated its use and are not using it to assess compliance of their architectures with the BEA. Moreover, officials from the Air Force further stated that they have not mandated the use of this tool because it does not provide the capability to map the Air Force architecture with the BEA. While we recognize DOD’s efforts to align programs to the BEA, our recommendation focuses on the lack of a discussion in the BMA federation strategy on how component architectures (military departments and defense agencies) will be linked to the BEA, including the lack of component architecture alignment guidance, criteria, and tools. Fourth, the department stated that it partially concurs with the element relating to the identification and management of shared services. According to DOD, each mission area or component is responsible for identifying its own services requirements, and the ASD(NII)/CIO is responsible for defining the overall approach to how these services will be managed. As such, the department recommends that our recommendation be directed to the ASD(NII)/CIO. We agree on the need for guidance describing how shared services will be identified and managed; however, our recommendation is not intended to dictate who should develop the policies or guidance for managing shared services within a federated environment. Rather, it is focused on developing plans that describe how the BMA will adopt and implement the policies and guidance relating to service orientation. 
As stated in the report, this is important because a key aspect of the BMA federation strategy is to reuse and leverage both enterprise-level and component-level systems and applications. Fifth, DOD stated that it nonconcurs with the element relating to milestones. According to DOD, milestones for gauging progress are and will continue to be monitored in the department’s enterprise transition plan. As such, it stated that it is unclear how the need to describe what milestones will be used relates to the topics in the report. While we have previously recognized that the transition plan provides information on progress on major investments over the last 6 months—including key accomplishments and milestones attained, this element of our recommendation is intended to address the lack of measures (e.g., return on investment of service-oriented architecture service reuse) or specific completion dates for the activities and related capabilities, products, and services that are to be produced for federating the Business Mission Area. To further ensure that our recommendation is properly interpreted and implemented, and to address DOD’s comments about directing the recommendation to the appropriate parties, we have slightly modified our recommendation. We are sending copies of this report to interested congressional committees; the Director, Office of Management and Budget; the Secretary of Defense; the Deputy Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of Defense (Comptroller); the Assistant Secretary of Defense (Networks and Information Integration)/Chief Information Officer; the Under Secretary of Defense (Personnel and Readiness); and the Director, Defense Finance and Accounting Service. We will also make copies available to others on request. In addition, this report will also be available at no charge on our Web site at http://www.gao.gov. If you have any questions concerning this information, please contact me at (202) 512-3439 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objective was to determine what progress the Department of Defense (DOD) has made in defining its Business Mission Area federation strategy. To accomplish our objective, we reviewed DOD’s Business Mission Area Federation Strategy and Road Map released in September 2006, comparing the strategy and any associated implementation plans with prior findings and recommendations relative to the content of the strategy. In particular, we compared the strategy with our prior recommendations for developing an architecture development management plan to define how the department intends to extend and evolve its business enterprise architecture. In addition, we compared the strategy with our prior findings and the need to ensure that shared systems and applications (i.e., services) are, among other things, defined, developed, exposed, and subscribed to via well-defined and standardized interfaces. Furthermore, we reviewed available information on activities, capabilities, products, and services associated with the federation strategy, such as the Investment Management Framework and the Architecture Compliance and Requirements Traceability User’s Guide. 
In addition, we interviewed key program officials, including the director of the Business Transformation Agency’s Investment Management Directorate and the chief technical officer and representatives from the Office of the Assistant Secretary of Defense (Networks and Information Integration)/Chief Information Officer, and the Departments of the Air Force, Army, and Navy; the Defense Logistics Agency and Defense Information Systems Agency; and the United States Transportation Command, to obtain an understanding of the steps taken and required to develop and execute the federation strategy. We conducted our work at DOD headquarters offices in Arlington, Virginia, from August 2006 through March 2007 in accordance with generally accepted government auditing standards. In addition to the contact person named above, key contributors to this report were Neil Doherty, Nancy Glover, Michael Holland, Neelaxi Lakhmani (Assistant Director), Anh Le, Jacqueline Mai, and Jennifer Stavros-Turner.
In 1995, we first designated the Department of Defense's (DOD) business systems modernization program as "high risk," and we continue to designate it as such today. To assist in addressing this high-risk area, Congress passed legislation consistent with prior GAO recommendations for Defense to develop a business enterprise architecture (BEA). In September 2006, DOD released version 4.0 of its BEA, which despite improvements over prior versions, was not aligned with component architectures. Subsequently, Defense issued a strategy for extending its BEA to the component military services and defense agencies. To support GAO's legislative mandate to review DOD's BEA, GAO assessed DOD's progress in defining this strategy by comparing it with prior findings and recommendations relevant to the strategy's content. DOD's Business Mission Area federation strategy for extending its BEA to the military departments and defense agencies provides a foundation on which to build and align the department's parent business architecture (the BEA) with its subordinate architectures (i.e., component- and program-level architectures). In particular, the strategy, which was released in September 2006, states the department's federated architecture goals; describes federation concepts that are to be applied; and explains high-level activities, capabilities, products, and services that are intended to facilitate implementation of the concepts. However, the strategy does not adequately define the tasks needed to achieve the strategy's goals, including those associated with executing high-level activities and providing related capabilities, products, and services. Specifically, it does not adequately address how strategy execution will be governed, including assignment of roles and responsibilities, measurement of progress and results, and provision of resources. Also, the strategy does not address, among other things, how the component architectures will be aligned with the latest version of the BEA and how it will identify and provide for reuse of common applications and systems across the department. According to program officials, the department intends to develop more detailed plans to execute the strategy. This means that much remains to be decided and accomplished before DOD will have the means in place to create a federated BEA that satisfies GAO's prior recommendations and legislative requirements. Without one, the department will remain challenged in its ability to minimize duplication and maximize interoperability among its thousands of business systems.
JWST is envisioned to be a large deployable, infrared-optimized space telescope and the scientific successor to the aging Hubble Space Telescope. JWST is being designed for a 5-year mission to find the first stars and trace the evolution of galaxies from their beginning to their current formation, and is intended to operate in an orbit approximately 1.5 million kilometers—or 1 million miles—from the Earth. With a 6.5-meter primary mirror, JWST is expected to operate at about 100 times the sensitivity of the Hubble Space Telescope. JWST’s science instruments are to observe very faint infrared sources and as such are required to operate at extremely cold temperatures. To help keep these instruments cold, a multi-layered tennis-court-sized sunshield is being developed to protect the mirrors and instruments from the sun’s heat. The JWST project is divided into three major segments: the observatory segment, the ground segment, and the launch segment. When complete, the observatory segment of JWST is to include several elements (Optical Telescope Element (OTE), Integrated Science Instrument Module (ISIM), and spacecraft) and major subsystems (sunshield and cryocooler). The hardware configuration created when the Optical Telescope Element and the Integrated Science Instrument Module are integrated, referred to as OTIS, is not considered an element by NASA, but we categorize it as such for ease of discussion. Additionally, JWST is dependent on software to deploy and control various components of the telescope as well as collect and transmit data back to Earth. The elements, major subsystems, and software are being developed through a mixture of NASA, contractor, and international partner efforts. See figure 1 below for an interactive graphic that depicts the elements and major subsystems of JWST. For the majority of the work remaining, the JWST project will rely on three contractors: Northrop Grumman Corporation, Harris Corporation (formerly Exelis), and the Association of Universities for Research in Astronomy’s Space Telescope Science Institute (STScI). Northrop Grumman plays the largest role, developing the sunshield, the OTE, the spacecraft, the cryocooler for the Mid-Infrared Instrument, and integrating and testing the observatory. Northrop Grumman performs most of this work under a contract with NASA, but its work on the Mid-Infrared Instrument (MIRI) cooler is performed under a separate subcontract with the Jet Propulsion Laboratory (JPL). Harris is manufacturing the test equipment, equipping the test chamber, and assisting in the testing of the optics of JWST. Finally, STScI will solicit and evaluate research proposals from the scientific community and will receive and store the scientific data collected, both of which are services that they currently provide for the Hubble Space Telescope. Additionally, STScI is developing the ground system that manages and controls the telescope’s observations and will operate the observatory on behalf of NASA. JWST depends on 22 deployment events—more than a typical science mission—to prepare the observatory for normal operations on orbit. For example, the sunshield and primary mirror are designed to fold and stow for launch to fit within the launch vehicle payload fairing and deploy once in space. 
Due to its large size, it is nearly impossible to perform deployment tests of the fully assembled observatory, so the verification of deployment elements on JWST is accomplished by a combination of lower level component tests in flight-simulated environments; ambient deployment tests at the assembly, element, and observatory levels; and detailed analysis and simulations at various levels of assembly.

Complex development efforts like JWST face myriad risks and unforeseen technical challenges, which often become apparent during integration and testing. To accommodate these risks and unknowns, projects reserve extra time in their schedules—referred to as schedule reserve—and extra money in their budgets—referred to as cost reserve. Schedule reserve is allocated to specific activities, elements, and major subsystems to cover delays and address unforeseen risks. Each JWST element and major subsystem has been allocated schedule reserve. When an element or major subsystem exhausts its schedule reserve, it may begin to affect schedule reserve on other elements or major subsystems whose progress depends on that work being finished before their own activities can proceed. The element or major subsystem with the least amount of schedule reserve determines the critical path for the project. Any delay to an activity on the critical path reduces schedule reserve for the whole project and could ultimately affect the overall project schedule.

Cost reserves are additional funds within the project manager's budget that can be used to address unanticipated issues for any element or major subsystem during the development of a project. For example, cost reserves can be used to buy additional materials to replace a component or, if a project needs to preserve schedule reserve, to accelerate work by adding shifts to expedite manufacturing and save time. NASA's Goddard Space Flight Center (Goddard)—the NASA center with responsibility for managing JWST—has issued procedural requirements that establish the levels of both cost and schedule reserves that projects must hold at various phases of development. In addition to cost reserves held by the project manager, management reserves are funds held by the contractors that allow them to address cost increases throughout development. We have found that management reserves should contain 10 percent or more of the cost to complete a project; these reserves are used to address issues distinct from those covered by the project's cost reserves.
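The interplay between element-level schedule reserve and the project critical path can be illustrated with a short calculation. The sketch below is purely illustrative: the reserve figures are hypothetical, not actual JWST data, and the functions are not a NASA tool; they simply restate the logic described above.

```python
# Illustration of how schedule reserve determines a project's critical path.
# All reserve figures are hypothetical; they are not JWST data.

def critical_path(reserves_weeks):
    """Return the element or major subsystem holding the least schedule reserve."""
    return min(reserves_weeks, key=reserves_weeks.get)

def absorb_delay(reserves_weeks, element, delay_weeks):
    """Consume an element's schedule reserve to absorb a delay."""
    updated = dict(reserves_weeks)
    updated[element] -= delay_weeks
    return updated

# Hypothetical schedule reserve, in weeks, for three parts of an observatory.
reserves = {"element A": 8, "element B": 10, "subsystem C": 9}

print(critical_path(reserves))              # element A holds the least reserve
reserves = absorb_delay(reserves, "element A", 4)
print(reserves["element A"])                # 4 weeks of reserve remain
# Because element A defines the critical path, the 4-week delay reduces the
# reserve protecting the overall schedule, not just element A's own plan.
```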
JWST has experienced significant cost increases and schedule delays. Prior to being approved for development, cost estimates of the project ranged from $1 billion to $3.5 billion, with expected launch dates ranging from 2007 to 2011. Before 2011, early technical and management challenges, contractor performance issues, low levels of cost reserves, and poorly phased funding levels caused JWST to delay work after confirmation, which contributed to significant cost and schedule overruns, including launch delays. The Chair of the Senate Subcommittee on Commerce, Justice, Science, and Related Agencies requested an independent review of JWST from NASA in June 2010.

In response, NASA commissioned the Independent Comprehensive Review Panel, which issued its report in October 2010 and concluded that JWST was executing well from a technical standpoint, but that the baseline funding did not reflect the most probable cost with adequate reserves in each year of project execution, resulting in an unexecutable project. Following this review, the JWST program underwent a replan in September 2011, and Congress in November 2011 placed an $8 billion cap on the formulation and development costs for the project. On the basis of the replan, NASA rebaselined JWST with a life-cycle cost estimate of $8.835 billion that included additional money for operations and a planned launch in October 2018. The revised life-cycle cost estimate included a total of 13 months of funded schedule reserve.

We have previously found that since the project's replan in 2011, the JWST project has met its cost and schedule commitments. Most recently, in December 2015, we found that the JWST project was meeting the schedule commitment established at the replan but would soon face some of its most challenging integration and testing. All of the project's elements and major subsystems were within weeks of becoming the critical path—the schedule with the least amount of reserve—for the overall project. The project had yet to begin 3 of 5 integration and test events, where problems are most often identified and schedules begin to slip, while working to address over 100 identified technical risks and to ensure that potential causes of mission failure were fully tested and understood. We further found that JWST continued to meet its cost commitments, but that larger than planned workforce levels, particularly with the observatory contractor, posed a threat to meeting cost commitments. Additionally, unreliable contractor data could pose a risk to project management. We recommended that the JWST project require contractors to identify, explain, and document anomalies in contractor-delivered monthly earned value management reports. NASA concurred with this recommendation and, in February 2016, directed the contractors to implement the actions stated in the recommendation. We have made recommendations in previous reports with regard to improving cost and schedule estimating, updating risk assessments, and strengthening management oversight. NASA has generally agreed and has taken steps to implement a number of our recommendations; however, there are three recommendations that NASA has not fully implemented that could still benefit the JWST project.

The project has completed most of its major hardware deliveries, including the telescope, instrument module, and the majority of the spacecraft. The project has also made significant advances on the sunshield and cryocooler, two major subsystems that have historically posed challenges. Two of five planned integration efforts are complete and two more are currently underway. The project has used 8 months of its schedule reserve to address technical challenges but is maintaining its schedule commitment. The project's schedule reserve, currently 6 months, remains above the Goddard Space Flight Center requirement, as determined by project officials, and is on track with the project's more conservative internal plan. Integrating the Optical Telescope Element (OTE) and Integrated Science Instrument Module (ISIM) into the combined OTE+ISIM (OTIS) element has taken longer than initially planned and is currently the critical path for the project.
As a result, the reserve allocated to the remaining OTIS integration and test work, including a cryovacuum test that takes 93 days to complete, has been reduced from 3 to 2 months. However, risk reduction tests on pathfinder hardware have mitigated issues that would likely have consumed additional schedule reserves during OTIS testing. As we also found in 2015, other JWST elements and major subsystems are within weeks of becoming the project's critical path, which could further reduce schedule reserves. As we have previously reported, integration and testing is the phase in which problems are most likely to be found and schedules tend to slip. Thus, going forward, technical issues encountered are more likely to require critical path schedule reserves to address.

The project and its contractors have delivered the majority of the observatory's hardware, including the telescope, instrument module, and the majority of the spacecraft components. These deliveries also include significant advances on two subsystems that have historically been sources of numerous technical challenges and delays. For example, Northrop Grumman received the sunshield's final membrane layer from its subcontractor in September 2016, following a delay of over 3 months. This delay in delivery of the final sunshield membrane layer capped a series of challenges Northrop Grumman and its subcontractor have experienced with the membranes. According to Northrop Grumman officials, the subcontractor struggled to deliver the membranes on time due to a variety of technical factors, including additional time needed to seam the last layers with a new lot of material, additional testing requirements, and facility limitations. Technical challenges with the membranes were further complicated by Northrop Grumman's difficulty in addressing the resulting schedule issues with its subcontractor.

After an 18-month delay and numerous technical challenges, the cryocooler compressor assembly was delivered by the subcontractor to the Jet Propulsion Laboratory in July 2015, where it met its acceptance and end-to-end testing requirements. The Jet Propulsion Laboratory then delivered the compressor and electronics assemblies to Northrop Grumman for spacecraft integration and test in May 2016, about 9 weeks ahead of the need date that was based on a revised schedule. Over the last several years, the project has accommodated a series of cryocooler schedule slips by reordering and compressing the Jet Propulsion Laboratory's test schedule and resequencing the spacecraft integration schedule. Some additional work on the cryocooler will be carried forward into spacecraft integration and test, including completing vibration verification of refrigerant lines and related hardware (a verification that was expanded after cryocooler vibration testing had been completed) and confirming the bonding resistance of cryocooler hardware once it is integrated with the spacecraft. Jet Propulsion Laboratory officials characterized the residual risks as minor and the additional work required at the higher level of integration as typical. JWST project officials further stated that the cryocooler's performance in testing was excellent and met all of its requirements with healthy margin, and that they are comfortable with the work that will be carried forward into spacecraft integration and test.

With most major hardware deliveries complete, the project is primarily focused on integrating and testing the individual elements and major subsystems that compose the observatory.
Specifically, the project and Northrop Grumman completed two of five planned integration efforts—for the instrument module and the telescope element—by March 2016. Two additional integration efforts—integrating the OTE and ISIM into the OTIS element and integrating the spacecraft—are underway, as illustrated in figure 2.

The project has consumed 8 of 14 total months of its overall schedule reserve to address technical challenges across elements and major subsystems. Almost 3 months of this reserve have been consumed within the past year. The remaining 6 months of reserve exceed the Goddard Space Flight Center requirement, as determined by project officials, and are on track with the project's more conservative internal plan, which was set above the Goddard standard at the replan in 2011. The OTIS integration and test work is currently the project's critical path. However, as we also found in 2015, all of JWST's elements and major subsystems are within weeks of moving onto the critical path, increasing the likelihood of further use of schedule reserve. While the project plans to use all its available schedule reserve to mitigate issues as it approaches launch, the proximity of each element to the critical path means that the project must prioritize the mitigations when problems occur. Figure 3 below compares the schedule reserve held by each element and major subsystem in 2016 with the reserve held last year.

Each of JWST's elements and major subsystems has experienced technical issues that, while reducing its individual schedule reserve as shown in figure 3, also consumed overall project critical path schedule reserve in the past year.

OTIS: In August 2016, the project designated one month of critical path schedule reserve to the OTIS element integration effort. According to the project, a portion of the additional time was needed due to the complexity inherent in integrating the telescope and instruments. For example, project officials explained that work progressed slower than planned because of the manual nature of the work and physical reach and access limitations, which created a more serial work flow, particularly with installing over 900 thermal blankets. Additionally, the project addressed concerns regarding contamination control in the clean room. The project took steps to optimize the OTIS integration and test flow to minimize critical path schedule impact. For example, the project conducted some tasks in parallel and added more work shifts to minimize the length of time to complete a task. In addition to addressing integration challenges, a portion of the one month was designated for the chamber operations and preparation work to be conducted at Johnson Space Center in advance of the OTIS cryovacuum test—the final event in the OTIS integration and test effort—based on lessons learned from the integration and test work that has occurred thus far. As a result, the reserve allocated for the OTIS integration and test effort has been reduced from 3 to 2 months. If an issue occurred that required stopping and repeating the cryovacuum test, which is planned to take 93 days, the 2 months of OTIS reserve remaining before the observatory integration and test effort begins could easily be exhausted, consuming reserve allocated for later integration and test work. Figure 4 shows the OTIS element.
In an effort to allow OTIS testing to proceed more smoothly and prevent the use of additional schedule reserve, the project and its contractor for OTIS testing, Harris Corporation, have undertaken a series of three risk reduction tests on pathfinder hardware at Johnson Space Center. Optical ground support equipment tests 1 and 2 were completed in June and October 2015, respectively, and the third and final risk reduction test, Thermal Pathfinder, was completed in October 2016, after a 3-month delay to update the processes for cooling down the chamber. The pathfinder work was conducted in parallel with instrument, telescope, and OTIS integration and test activities and was scheduled to conclude in time to begin the OTIS cryovacuum test in early 2017. See figure 5 below.

The pathfinder tests have allowed the project and Harris to practice processes and procedures that will be used for the eventual OTIS cryovacuum testing and to validate the performance of ground support equipment. This is intended to create a more efficient test flow and proactively address issues before the test on flight hardware commences. For example, the second pathfinder test showed that vibration levels inside the test chamber were too high, and adjustments to the ground support equipment were implemented to address this issue. Additionally, after the second pathfinder test, the project discovered that the adhesive on the back of the tape used throughout the observatory can flake and release particles at cryogenic temperatures, which raised concerns about contamination of sensitive hardware, particularly in the instrument module. Because these issues were discovered during the pathfinder tests, the project was able to address them before OTIS testing, when flight hardware could have been affected, and thereby prevent the use of additional schedule reserve.

Sunshield: The sunshield experienced several issues, which in total reduced schedule reserves by 7 weeks and required adjustments to the integration and test flow at Northrop Grumman to minimize further schedule impacts. For example, in October 2015, the project reported that a piece of flight hardware for the sunshield's mid-boom assembly was irreparably damaged during vacuum sealing in preparation for shipping. The damaged piece had to be remanufactured, which consumed 3 weeks of schedule reserve. In January 2016, subcontractor manufacturing delays with the individual sunshield membrane layers consumed 2 additional weeks of schedule reserves. Most recently, in June 2016, Northrop Grumman redesigned the membrane tensioning system, which allows the sunshield to unfold and maintain its shape when deployed. According to contractor officials, during previous mass reduction efforts, the pulley walls in the system were thinned out; however, when tested under higher loads, the weaker walls allowed a cable to become pinched during a test. Because the observatory now has sufficient mass margin, the system was redesigned to thicken the walls. The redesign of the system consumed an additional 2 weeks of schedule reserves, and the project is tracking further schedule threats related to conducting an anti-corrosion chemical treatment of the system's parts and investigating a deployment test anomaly. To accommodate the 2-week slip and minimize use of additional schedule reserves, Northrop Grumman adjusted its planned sunshield and spacecraft integration and test flow. For example, delivery of the structures that support the sunshield will now be delayed from September to December 2016.
Sunshield integration needs to be completed by September 2017 to avoid delaying integration and testing of the completed observatory. Figure 6 shows the full-scale sunshield templates used for testing the deployment of the sunshield.

Spacecraft: The spacecraft consumed 4 weeks of schedule reserves due to a variety of technical challenges, particularly with the electronics and propulsion components. For example, 2 weeks of reserves were consumed in January 2016 due to a deficient test cable that caused a vibration test anomaly, following the late delivery of spacecraft electronics components from the supplier. According to program officials, a series of assembly issues on the propulsion system consumed another 2 weeks of reserves. For example, installation and welding of the spacecraft propellant lines was more complicated and took more time than expected. Additionally, Northrop Grumman discovered during spacecraft checkout testing that components in the propulsion system that are used to measure fuel levels had been damaged due to operator error. The damaged parts will require replacement, and the project and Northrop Grumman continue to track this issue as a schedule threat. Due to the technical issues experienced, the reserves allocated for spacecraft integration and test have been reduced from 3 to 2 months. However, significant integration and test work remains. Specifically, Northrop Grumman will complete integration of the cryocooler electronics and compressor assemblies and spacecraft electronics panels into the spacecraft bus structure, conduct the first comprehensive system test in 2016, and begin integration testing in early 2017. Figure 7 shows the spacecraft.

In an effort to provide additional schedule margin, the JWST project has been working with the launch vehicle provider on the possibility of expanding the potential launch window. According to program officials, at its former expected mass and due to its planned trajectory and its relationship to the moon, JWST could not launch for a period before and after the solstices. This means that if it misses the planned October 2018 launch date, the project would have to wait until February 2019 for another opportunity. However, prior mass reduction efforts have made the observatory lighter and resulted in more flexibility in launch dates near the winter solstice.

JWST is one of the most technologically complex science projects NASA has undertaken. In addition to the previously noted challenges that have reduced schedule reserves, significant and technologically challenging work remains to be completed in the 2 years before launch, which could further erode schedule reserves if problems occur. As integration and testing moves forward, the project will need to reduce a significant amount of risk and address technical challenges in a timely manner to stay on schedule. The project maintains a list of risks—currently with 73 items—that need to be tested and mitigated to an acceptable level in the remaining 2 years before launch. According to the project, approximately 25 of these risks are not likely to be closed until the conclusion of the observatory integration and test effort—just prior to launch. In some cases, the project will determine that no further mitigations are feasible and will decide whether to accept any residual risk. Many of these risks relate to the project's numerous deployments or single point failures.
According to project officials, in comparison with other NASA unmanned spaceflight missions, JWST has a greater number of and more complex deployments. The extent of these deployments—which are necessary because the telescope and sunshield must be stowed for launch to fit within the launch vehicle payload fairing—means the telescope could fail to operate as planned in an extensive number of ways. For example, the four release mechanisms that hold the spacecraft and OTE together for launch are key deployments, as well as potential single point failures, for the project. Once in space, these mechanisms are to activate and release to allow the OTE to separate from the spacecraft. If the mechanisms fail to deploy, or release prematurely, mission failure could occur. The project has redesigned the mechanisms due to excessive shock when performing the release function, and efforts to qualify the new design and mitigate as much of the risk of failure or premature release as possible are ongoing. According to project officials, there are over 100 different single point failure modes across hundreds of individual items in the observatory, nearly half of which involve the deployment of the sunshield.

To ensure that all deployment mechanisms are ready for flight, Northrop Grumman—with participation from the project—is conducting a series of deployment reviews using standards developed by the contractor and employed on a variety of systems with large, complex, or high-risk mission deployments. These reviews are tailored to the more rigorous requirements of JWST and provide a phased series of assessments throughout the mission's development. The project is also seeking a waiver from NASA's Office of Safety and Mission Assurance for its numerous single point failures throughout the observatory, including those related to the sunshield. The approval of critical single point failures requires justification from the project, including sound engineering judgment, supporting risk analysis, and implementation of measures to mitigate the risk to acceptable levels. According to project officials, this approach is consistent with other high-priority NASA missions, which require the most stringent design and development approach that NASA takes to ensure the highest level of reliability and longevity on orbit. Additionally, program officials noted that NASA leadership has been well informed of JWST's potential single point failures, and that the items covered in the waiver are well understood and expected.

JWST also faces a number of risks related to software integration. According to NASA's Independent Verification and Validation office, which independently examines mission-critical software development for most NASA programs and projects, the project is unique among spaceflight projects in the amount and complexity of the software required to operate it and the number of developers contributing software. For example, while most science programs or projects have two to four software developers, JWST has eight. This creates inherent cost and schedule risk for the project. The project is tracking a number of software-related risks throughout the observatory. However, NASA's Independent Verification and Validation officials stated that they believe that JWST is on track to continue meeting its software milestones, but that the testing that lies ahead—when the different components are integrated—will be a challenge.
Going forward, NASA's Independent Verification and Validation office will be focusing its efforts on the software related to the ongoing OTIS integration and test work. Our prior work has shown that integration and testing is the phase in which problems are most likely to be found and schedules tend to slip. For a complex project such as JWST, this risk is magnified. Now that the project is well into its complex integration and test efforts, events are more sequential in nature and there are few opportunities to mitigate issues in parallel. According to contractor officials, opportunities for schedule work-arounds and recovery options, which have preserved some schedule reserves in the past, are diminishing. Thus, going forward, technical issues encountered during integration and test are more likely to require critical path schedule reserves to address, as has recently been observed in the OTIS integration and test effort.

Program and Project Reserves: Project reserves are those costs that are expected to be incurred but have not yet been allocated to a specific project cost element. A project's reserves may be held at the project level, program level, and mission directorate level; they are divided into portions controlled by the project manager, the program, and the mission directorate.

Though the project spent $42.8 million more than planned for fiscal year 2016, project officials managed JWST within its allocated budget for the fifth consecutive year since the 2011 replan. The project estimates that it will carry over into fiscal year 2017 about half of the amount it projected. NASA officials attribute the reduced carryover to their emphasis on maintaining schedule, which has required additional dollars to meet technical challenges. As in past years, the project used a portion of its cost reserves to address technical challenges, such as completing OTIS integration. The project also received additional program-level cost reserves in fiscal year 2016. For example, program-held reserves were used to offset cryocooler costs in fiscal year 2016 for the work remaining. Our analysis indicates that these additional costs will not result in exceeding the project's overall cost commitment. However, NASA has already committed the majority of its fiscal year 2017 program-held reserves to address increased costs on the Northrop Grumman contract. As a result, NASA will have diminished project and program reserves to address technical and other challenges that may occur.

Though 89 percent of the work on the Northrop Grumman contract has been completed, the primary threat to JWST continues to be the ability of Northrop Grumman, the observatory contractor, to control its costs and decrease its workforce. For the past 32 months, Northrop Grumman's actual workforce has exceeded its projections, and it is not expected to fall below 300 full-time equivalents until the spring of 2017. Measured against the projections it made at the beginning of the fiscal year, Northrop Grumman exceeded its total fiscal year 2016 monthly workforce projections by about 37 percent. Figure 8 below illustrates the difference between the workforce levels that Northrop Grumman projected at the beginning of fiscal year 2016 and its actual workforce levels for that period. Northrop Grumman's workforce declined slightly in fiscal year 2016 when compared to fiscal year 2015, but, on average, Northrop Grumman was above its projections by 165 full-time equivalents each month in fiscal year 2016.
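The workforce figures above (the roughly 37 percent total exceedance and the 165 full-time-equivalent average monthly overage) reflect a straightforward month-by-month comparison of projected and actual staffing, elaborated in the next paragraph. The sketch below illustrates that comparison with hypothetical numbers; it does not use Northrop Grumman's actual staffing data.

```python
# Comparing projected and actual monthly workforce levels, in full-time
# equivalents (FTE). All figures are hypothetical, not Northrop Grumman data.

projected = [600, 560, 520, 480, 440, 400]   # plan made at the start of the year
actual    = [610, 600, 590, 585, 593, 580]   # levels actually charged each month

monthly_overage = [a - p for a, p in zip(actual, projected)]
average_overage = sum(monthly_overage) / len(monthly_overage)
percent_over = 100 * (sum(actual) - sum(projected)) / sum(projected)

print(f"Average monthly overage: {average_overage:.0f} FTEs")
print(f"Total FTEs charged exceeded the projection by {percent_over:.0f} percent")
```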
At the beginning of the fiscal year, Northrop Grumman's workforce was close to projected levels. However, in the latter half of the year, the contractor increased its workforce instead of decreasing it as projected. For example, in July 2016, Northrop Grumman's workforce was 593 instead of 284 as projected. Almost half of the July increase was due to a need for more workers on observatory integration and test activities. According to project officials, Northrop Grumman continues to maintain overall higher workforce levels than planned because the project has asked the contractor to prioritize schedule when addressing technical issues that arise, to minimize impacts to the project's schedule. The project, however, has also communicated the need to reduce the workforce size, including holding frequent discussions with the contractor on workforce planning. As Northrop Grumman hardware schedule milestones are completed, NASA expects the contractor to reduce its workforce accordingly.

While Northrop Grumman did consume fiscal year 2016 reserves to address technical issues and challenges, it was able to operate within budget for fiscal year 2016. However, in July 2016, Northrop Grumman submitted its first cost overrun proposal to NASA since the replan in 2011. The costs associated with Northrop Grumman's higher workforce levels are the primary reason for its overrun proposal. The project had independently forecast that Northrop Grumman costs would be higher than anticipated as the contractor dealt with technical issues and cost increases for critical hardware deliveries such as the sunshield and spacecraft. Currently, the project is evaluating Northrop Grumman's proposal, including the impact on program cost reserves, and does not expect to conclude negotiations before early 2017.

While Northrop Grumman is developing and manufacturing large portions of the observatory, as well as integrating and testing the observatory, NASA relies on other entities for other support, components, and observatory operations. For example, Harris Corporation is manufacturing the test equipment used to test the OTIS flight hardware. Because most of the work performed by these entities is complete or is performed under contracts with significantly lower values than the observatory contract, it is unlikely to cause JWST to exceed its cost commitments. For example:

Harris Corporation: Projected costs will likely overrun the contract when Harris completes the work performed under this contract in December 2016, but our analysis shows that this will not cause JWST to exceed its cost commitment. In 2017, Harris will perform additional work on JWST, but that work will be performed through a Goddard Space Flight Center support contract rather than a contract specifically for JWST.

Jet Propulsion Laboratory: In fiscal year 2016, the laboratory overran its cost for work related to JWST, and the project used budget reserves to cover the additional costs. Overall, in developing and testing the cryocooler system, the Jet Propulsion Laboratory's costs grew about 258 percent and consumed a disproportionate amount of JWST reserves. Because most of JPL's work is complete (only testing of the spare cryocooler remains), it is unlikely that cryocooler costs will have any significant impact on JWST cost reserves in the future.

Space Telescope Science Institute: The STScI has generally performed work within planned costs.
To gain further insight into its costs, NASA has an ongoing effort to require the institute to provide earned value management data on its JWST contract. STScI has submitted its proposal on how it will meet this new requirement, and a contract modification is expected to be executed in January 2017.

We requested comments from NASA, but agency officials determined that no formal comments were necessary. NASA provided technical comments, which were incorporated as appropriate. We are sending copies of the report to NASA's Administrator and interested congressional committees. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Our objectives were to assess the extent to which the James Webb Space Telescope (JWST) project is (1) managing technological issues and development challenges to maintain its committed schedule and (2) meeting its committed cost levels and managing its workforce plans.

To assess the extent to which the JWST project is managing technological issues and development challenges to maintain its committed schedule, we reviewed project and contractor schedule documentation and held interviews with program, project, and contractor officials on the progress made and challenges faced in building and integrating the different components of the observatory. We examined and analyzed monthly project status reports to management to monitor schedule reserve levels and usage and potential risks and technical challenges that may affect the project's schedule, and to gain insights on the project's progress since our last report in December 2015. Further, we attended flight program reviews at the National Aeronautics and Space Administration (NASA) headquarters on a quarterly basis, where the current status of the program was briefed to NASA headquarters officials outside of the project. We examined selected individual risks for elements and major subsystems from monthly risk registers prepared by the project to understand their likelihood of occurrence and impacts to the schedule, based on steps the project is taking to mitigate the risks. We examined test schedules and plans to understand the extent to which risks will be mitigated. Furthermore, we interviewed project officials at Goddard and contractor officials from the Northrop Grumman Corporation, the Harris Corporation, the Jet Propulsion Laboratory, and the Association of Universities for Research in Astronomy's Space Telescope Science Institute concerning technological challenges that have had an impact on schedule and the project's and contractors' plans to address these challenges.

To assess the extent to which the JWST project is meeting its committed cost levels and managing its workforce plans, we reviewed and analyzed program, project, and contractor data and documentation and held interviews with officials from these organizations. We reviewed JWST project status reports on cost issues to determine the risks that could affect cost. We analyzed contractor workforce plans against workforce actuals to determine whether contractors are meeting their workforce plans.
We monitored and analyzed the status of program and project cost reserves in current and future fiscal years to determine the project's financial posture. We examined and analyzed earned value management data from two of the project's contractors to identify trends in performance, determine whether tasks were completed as planned, and assess likely estimates at completion. Our work was performed primarily at NASA headquarters in Washington, D.C.; Goddard Space Flight Center in Greenbelt, Maryland; Northrop Grumman Corporation in Redondo Beach, California; and the Space Telescope Science Institute in Baltimore, Maryland. We also conducted interviews at the Independent Verification and Validation facility in Fairmont, West Virginia; the Harris Corporation in Chester, Maryland; and the Jet Propulsion Laboratory in Pasadena, California.

We conducted this performance audit from February 2016 to December 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Cristina Chaplain, (202) 512-4841 or [email protected]. In addition to the contact named above, Richard Cederholm, Assistant Director; Karen Richey, Assistant Director; Jay Tallon, Assistant Director; Molly Traci, Assistant Director; Marie P. Ahearn; Brian Bothwell; Laura Greifner; Katherine Lenane; Jose Ramos; Carrie Rogers; and Roxanna Sun made key contributions to this report.
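To give a sense of the earned value management analysis described in the methodology above, the sketch below applies the standard earned value relationships: cost and schedule variances, performance indices, and a commonly used independent estimate at completion. The input figures are hypothetical and are not JWST, NASA, or contractor data.

```python
# Standard earned value management (EVM) relationships, with hypothetical data.
# BCWS = budgeted cost of work scheduled (planned value)
# BCWP = budgeted cost of work performed (earned value)
# ACWP = actual cost of work performed
# BAC  = budget at completion

bcws, bcwp, acwp, bac = 120.0, 110.0, 125.0, 800.0  # hypothetical, in millions

cost_variance = bcwp - acwp        # negative indicates a cost overrun
schedule_variance = bcwp - bcws    # negative indicates work behind schedule
cpi = bcwp / acwp                  # cost performance index
spi = bcwp / bcws                  # schedule performance index
eac = bac / cpi                    # one common independent estimate at completion

print(f"CV = {cost_variance:.1f} million, SV = {schedule_variance:.1f} million")
print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}, EAC = {eac:.1f} million")
```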
JWST is one of NASA's most complex and expensive projects, at an anticipated cost of $8.8 billion. Now in the midst of significant integration and testing that will last the 2 remaining years until the planned October 2018 launch date, the JWST project will need to continue to address many challenges and identify problems, some likely to be revealed during its rigorous testing. The continued success of JWST hinges on NASA's ability to anticipate, identify, and respond to these challenges in a timely and cost-effective manner to meet its commitments. Conference Report No. 112-284, accompanying the Consolidated and Further Continuing Appropriations Act, 2012, included a provision for GAO to assess the project annually and report on its progress. This is the fifth such report. This report assesses the extent to which JWST is (1) managing technological and developmental challenges to meet its schedule commitments, and (2) meeting its committed cost levels and managing its workforce plans. To conduct this work, GAO reviewed monthly JWST reports, reviewed relevant policies, conducted independent analysis of NASA and contractor data, and interviewed NASA and contractor officials. The National Aeronautics and Space Administration's (NASA) James Webb Space Telescope (JWST) project is still operating within its committed schedule while in its riskiest phase of development, integration and test. Most hardware deliveries and two of five major integration and test efforts have been completed. Two other integration and test efforts are underway, with the final effort to begin in fall 2017. JWST used about 3 months of schedule reserve since GAO's last report in December 2015. For example, the project used one month of schedule reserve to address delays in integrating the Optical Telescope Element and the Integrated Science Instrument module, due to the complexity of this effort. The project's remaining 6 months of reserve is more than required by Goddard Space Flight Center requirements, as determined by project officials. The figure below shows JWST's elements and major subsystems, the schedule reserve remaining for each, and the critical path—the schedule with the least amount of reserve. JWST is one of NASA's most technologically complex science projects and has numerous risks and single points of failure, which need to be tested and understood before launch. The project also faces a number of risks related to the observatory software. Looking ahead, the project will likely need to consume more reserves for its complex integration and test efforts. JWST is meeting its cost commitments despite technical and workforce challenges. Although the project used $42.8 million more than planned for fiscal year 2016, it is maintaining spending within the levels dictated by the 2011 replan. NASA continues to emphasize that maintaining schedule is the priority, which resulted in the use of the fiscal year 2016 cost reserves to meet technical challenges. Also, as GAO previously found in December 2015, the observatory contractor has continued to maintain a larger workforce for longer than planned in order to address technical issues. For example, in 2016, the observatory contractor averaged 165 full-time equivalents more than projected to address technical issues while minimizing the impact on schedule. The contractor submitted a proposal to NASA this summer to cover cost overruns, which was the first such proposal since the replan in 2011. GAO is not making recommendations in this report. 
GAO has made recommendations in previous reports, to which NASA has generally agreed and taken steps to implement. There are three recommendations that NASA has not fully implemented that could still benefit the JWST project.
The tax administration system that collects about $2 trillion in revenues each year is critically dependent on a collection of obsolete computer systems developed by the IRS over the last 40 years. IRS envisions a future in which its tax processing environment will be virtually paper-free, and up-to-date taxpayer information will be readily available to IRS employees to respond to taxpayer inquiries. To accomplish this, IRS embarked on its ambitious BSM program. BSM involves the development and delivery of a number of modernized business, data, and core infrastructure projects that are intended to provide improved and expanded service to taxpayers as well as IRS internal business efficiencies. Recognizing the long-term commitment needed to solve the problem of obsolete computer systems, Congress set up a special BSM account in fiscal year 1998 to fund IRS’s systems modernization efforts. IRS initiated CADE as part of BSM, to modernize the agency’s outdated and inefficient data management system. IRS also sees this project as the corporate data source enabling future customer service and financial management applications. CADE is therefore IRS’s linchpin modernization project. In light of the projects that depend on CADE, as well as the many interrelationships that are to exist among CADE and IRS’s modernized applications and among CADE and current IRS applications, the agency must manage this critical project effectively. Without CADE, the business systems modernization program cannot succeed. IRS’s attempts to modernize its aging computer systems span several decades. This long history of continuing delays and design difficulties led to our designating IRS’s Tax Systems Modernization program, BSM’s predecessor, as a high-risk area in 1995. During the mid-1990s we reported on several technical and management weaknesses associated with Tax Systems Modernization, a program that began in the 1980s. These weaknesses related to incomplete or inadequate strategic information management practices; immature software development capability; incomplete systems architecture, integration planning, system testing, and test planning practices; and the lack of an effective organizational structure to consistently manage and control systems modernization organizationwide. We made a series of recommendations for correcting these weaknesses and limiting modernization activities until they were corrected. IRS subsequently discontinued the program after the agency had spent about $4 billion without receiving expected benefits. In fiscal year 1999, IRS launched the BSM program. IRS contracted with CSC as its prime systems integration services contractor for systems modernization, helping it design new systems and identify other contractors to develop software and perform other tasks. In our reviews of IRS’s BSM expenditure plans, we have identified numerous deficiencies in the BSM program, including a continuation of the weaknesses noted above. Also, a consistent challenge for IRS has been to make sure that the pace of systems acquisition projects does not exceed the agency’s ability to manage them. In May and November 2000, we reported that projects were in fact getting ahead of the modernization management capacity that needed to be in place to manage them effectively. In February 2002 we reported that such an imbalance was due to IRS’s first priority and emphasis being on getting the newer, more modern systems—with their anticipated benefits to taxpayers—up and running. 
In so doing, however, IRS had not given management controls equal attention, and thus the controls had not kept pace. This emphasis on new systems added significant cost, schedule, and performance risks that escalate as a program advances. Moreover, these risks increased as IRS moved forward because of interdependencies among projects, and the complexity of associated workload activities to be performed increased dramatically as more systems projects were built and deployed. In addition, we identified other deficiencies in the BSM program, including the need to establish processes that meet the level 2 requirements of the SEI's Software Acquisition Capability Maturity Model and to improve modernization management controls and capabilities, such as those related to configuration management, risk management, enterprise architecture implementation, human capital strategic management, integrated program scheduling, and cost and schedule estimating.

In response to our recommendations, IRS has made important progress. First, significant progress has been made in establishing the modernization management controls needed to effectively acquire and implement information technology systems. For example, IRS has invested incrementally in its modernization projects; defined a systems life cycle management methodology, which IRS refers to as the Enterprise Life Cycle; developed and is using a modernization blueprint, commonly called an enterprise architecture, to guide and constrain its modernization projects; and established processes that meet the level 2 requirements of the SEI's Software Acquisition Capability Maturity Model.

Second, IRS has made progress in establishing the infrastructure systems on which future business applications will run. For example, IRS has delivered elements of the Security and Technology Infrastructure Release to provide the hardware, software, and security solutions for modernization projects. IRS has also built an enterprise integration and test environment that provides the environment and tools for multiple vendors associated with a release to perform integration and testing activities.

Third, it has delivered certain business applications that are producing benefits today. These applications include Customer Communications 2001, to improve telephone call management, call routing, and customer self-service applications; Customer Relationship Management Examination, to provide off-the-shelf software to IRS revenue agents to allow them to accurately compute complex corporate transactions; and Internet Refund/Fact of Filing, to improve customer self-service by providing taxpayers, via the Internet, instant refund status information and instructions for resolving refund problems.

Fourth, IRS took steps to align the pace of the program with the maturity of IRS's controls and management capacity, including reassessing its portfolio of planned projects. Nevertheless, IRS continued to face challenges to fully develop and implement its modernization management capacity. Last June we reported that IRS had not yet fully implemented a strategic approach to ensuring that it has sufficient human capital resources for implementing BSM, nor had it fully implemented management controls in such areas as configuration management, estimating costs and schedules, and employing performance-based contracting methods. We made several recommendations to address those issues.
Our analysis has shown that weak management controls contributed directly to the cost, schedule, and/or performance shortfalls experienced by most projects. Given that the tasks associated with those projects that are moving beyond design and into development are by their nature more complex and risky, and that IRS's fiscal year 2004 BSM expenditure plan supports progress toward the later phases of key projects and continued development of other projects, systems modernization projects likely will encounter additional cost and schedule shortfalls. IRS will need to continue to assess the balance between the pace of the program and the agency's ability to manage it.

Based on IRS's expenditure plans, BSM projects have consistently cost more and taken longer to complete than originally estimated. Table 1 shows the life cycle variance in cost and schedule estimates for completed and ongoing BSM projects. These variances are based on a comparison of IRS's initial and revised cost and schedule estimates to complete initial operation or full deployment of the projects. As the table indicates, the cost and schedule estimates for full deployment of the e-Services project have increased by just over $86 million and 18 months, respectively. In addition, the estimated cost for the full deployment of CADE release 1 has increased by almost $37 million, and project completion has been delayed by 30 months.

In addition to the modernization management control deficiencies discussed above, our work has shown that the increases and delays were caused, in part, by the following:

Inadequate definitions of systems requirements. As a result, additional requirements have been incorporated into ongoing projects.

Increases in project scope. For example, the e-Services project has changed significantly since the original design. The scope was broadened by IRS to provide additional benefits to internal and external customers.

Cost and schedule estimating deficiencies. IRS has lacked the capability to effectively develop reliable cost and schedule estimates.

Underestimating project complexity. This factor has contributed directly to the significant delays in the CADE release 1 schedule.

Competing demands of projects for test facilities. Testing infrastructure capacity is insufficient to accommodate multiple projects when testing schedules overlap.

Project interdependencies. Delays with one project have had a cascading effect and have caused delays in related projects.

These schedule delays and cost overruns impair IRS's ability to make appropriate decisions about investing in new projects, delay delivery of benefits to taxpayers, and postpone resolution of material weaknesses affecting other program areas. Producing reliable estimates of expected costs and schedules is essential to determining a project's cost-effectiveness. In addition, it is critical for budgeting, management, and oversight. Without this information, the likelihood of poor investment decisions is increased. Schedule slippages delay the provision of modernized systems' direct benefits to the public. For example, slippages in CADE will delay IRS's ability to provide faster refunds and respond to taxpayer inquiries on a timely basis. Delays in the delivery of modernized systems also affect the remediation of material internal management weaknesses. For example, IRS has reported a material weakness associated with the design of the master files. CADE is to build the modernized database foundation that will replace the master files.
Continuing schedule delays will place resolution of this material weakness further out into the future. In addition, the Custodial Accounting Project is intended to address a financial material weakness and permit the tracking, from submission to disbursement, of all revenues received from individual taxpayers. This release has yet to be implemented, and a revised schedule has not yet been determined. Finally, the Integrated Financial System is intended to address financial management weaknesses. When IRS submitted its fiscal year 2003 BSM expenditure plan, release 1 of the Integrated Financial System was scheduled for delivery on October 1, 2003. However, it has yet to be implemented, and additional cost increases are expected.

Given the continued cost overruns and schedule delays experienced by these BSM projects, IRS and CSC launched internal and independent assessments during 2003 of the health of BSM as a whole, as well as of CADE. Table 2 describes these assessments. The IRS root cause analysis, PRIME review, and the Office of Procurement assessment revealed several significant weaknesses that have driven project cost overruns and schedule delays, and they also provided a number of actionable recommendations for IRS and CSC to address the identified weaknesses and reduce the risk to BSM. The deficiencies identified are consistent with our prior findings and include low program productivity levels, ineffective integration across IRS, and insufficient applications and technology engineering.

As noted, CADE release 1 has experienced significant reported cost overruns and schedule delays throughout its life cycle, and it has yet to be delivered. SEI's independent technical assessment of CADE pointed to four primary factors that have caused the project to get off track and resulted in such severe cost and schedule impairments: (1) the complexity of CADE release 1 was not fully understood; (2) the initial business rules engine effort stalled; (3) both IRS and PRIME technical and program management were ineffective in key areas, including significant breakdowns in developing and managing CADE requirements; and (4) the initially contentious relationship between IRS and PRIME hindered communications. SEI also warned that CADE runs the risk of further trouble with later releases due to unexplored or unknown requirements; security and privacy issues that have not been properly evaluated (e.g., online transactions are different from the way IRS does business today); dependence on an unproven business rules engine software product; and the critical, expensive, and lengthy business rules harvesting effort that has not yet been started. SEI offered several recommendations to address current CADE issues and reduce project risk in the future.

Based on these assessments, IRS identified a total of 46 specific issues for resolution in the following six areas and developed a BSM action plan comprising individual action plans to address each issue:

Organization and Roles. Immediate steps are needed to clarify IRS/PRIME roles and responsibilities and clearly define decision-making authorities.

Key Skills & Strengthening the Team. Strengthened skills and capabilities are needed in such key areas as project management and systems engineering.

Technology–Architecture & Engineering. More focus is needed to improve current systems architecture integration.

Technology–Software Development Productivity & Quality. Improvements in product quality and productivity are essential to strengthening software delivery performance.
Acquisition. Contracting and procurement practices require major streamlining to improve overall contract management. CADE. Delivery of CADE release 1 will require aggressive focus and attention, and a business rules engine solution requires additional evaluation. These 46 issue action plans were assigned completion dates and an IRS or PRIME owner was assigned to take the lead in implementing each plan. IRS and PRIME each also assigned a senior-level executive to drive the execution of the issue action plans, identify and help mitigate implementation hindrances or roadblocks, and ensure successful completion of all planned actions. To assess the efficacy of the BSM action plan, MITRE was tasked with conducting an independent analysis and provided feedback to IRS on the effectiveness of the specific issue action plans to address the associated findings/recommendations and correct any problems found. IRS has reported making steady progress with implementing the BSM action plan. According to the IRS BSM program office, as of late January 2004, 27 of the 46 issue action plans have been completed. Examples of completed actions include (1) making business owners and program directors accountable for project success; (2) assigning teams to investigate and resolve problem areas on key projects such as CADE, the Integrated Financial System, and e-Services; (3) aligning critical engineering talent to the most critical projects; (4) increasing the frequency of CADE program reviews; and (5) issuing a firm fixed-price contracting policy. Significant further work remains to complete implementation of the remaining 19 open issue action tasks. Bain & Company—which conducted the independent review of PRIME—has been hired to facilitate the implementation of various issue action plans within the Organization and Roles challenge area, while IRS has also contracted with SEI to conduct further periodic reviews of the CADE project. Additionally, the IRS Oversight Board recently issued a report on its own independent analysis of the BSM program, which made several observations and recommendations that are consistent with those discussed here. IRS has conducted an analysis of this report to reconcile the board’s recommendations with those that are currently being addressed in the BSM action plan. As a result, IRS plans to open two additional issues and action plans to address (1) rationalizing and streamlining oversight of the BSM program, and (2) determining and maintaining a manageable portfolio of projects. IRS expects to complete the majority of the BSM action plan by end of April of this year, and fully implement any remaining open actions by the end of the calendar year. Further, during 2003, the Treasury Inspector General for Tax Administration performed several reviews related to management of the BSM program and for specific BSM projects. These reviews identified several issues, including those related to compliance with the defined management and project development processes, full implementation of disciplined project testing processes and procedures, IRS’s cost and schedule estimation process, and contract management. IRS management reaffirmed their commitment to fully implement key management and project development processes. IRS’s multibillion-dollar BSM program is critical to agency’s successful transformation of its manual, paper-intensive business operations and fulfilling its restructuring activities. 
The agency has made important progress in establishing long-overdue modernization management capabilities and in acquiring foundational system infrastructure and some applications that have benefited the agency and the public. However, our reviews, those of the Treasury inspector general, and the recently completed internal and independent assessments of the BSM program clearly demonstrate that significant challenges and serious risks remain. IRS acknowledges this and is acting to address them. To successfully address these challenges and risks and to modernize its systems, IRS needs to continue to strengthen BSM program management by continuing efforts to balance the scope and pace of the program with the agency’s capacity to handle the workload, and institutionalize the management processes and controls necessary to resolve the deficiencies identified by the reviews and assessments. Commitment of appropriate resources and top management attention are critical to resolving the identified deficiencies. In addition, continuing oversight by the Congress, OMB, and others, as well as ongoing independent assessments of the program, can assist IRS in strengthening the BSM program. Meeting these challenges and improving performance are essential if IRS and the PRIME contractor are to successfully deliver the BSM program and ensure that BSM does not suffer the same fate as previous IRS modernization efforts. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have at this time. For information about this testimony, please contact me at (202) 512-3317 or by e-mail at [email protected]. Individuals making key contributions to this testimony include Bernard R. Anderson, Michael P. Fruitman, Timothy D. Hopkins, and Gregory C. Wilshusen. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Internal Revenue Service (IRS) has been grappling with modernizing its computer systems for many years. IRS's current program, commonly referred to as Business Systems Modernization (BSM), began in fiscal year 1999; about $1.4 billion has been reported spent on it to date. While progress has been made, the program continues to face significant challenges and risks. In recognition of these risks, IRS and a contractor recently completed several comprehensive assessments of BSM, including one of its Customer Account Data Engine (CADE) project, which is to modernize the agency's outdated data management system. At the request of the Subcommittee on Oversight, House Committee on Ways and Means, GAO's testimony will summarize (1) GAO's prior findings and recommendations, along with those of the recent assessments; and (2) actions IRS has taken or plans to take to address these issues. Prior GAO reviews have disclosed numerous modernization management control deficiencies that have contributed to reported cost overruns and schedule delays. Costs and completion dates for ongoing projects have grown from their initial estimates. Reasons for such delays include inadequate definition of systems requirements, increases in project scope, and underestimation of project complexity. These impair IRS's ability to make future systems investment decisions and delay delivery of benefits to taxpayers. GAO has made a series of recommendations focusing on stronger program management--and limiting modernization activities until such management practices were in place. IRS has made important progress in implementing management controls, establishing infrastructure, delivering certain business applications, and balancing the pace of the program with the agency's ability to manage it. Nevertheless, IRS needs to further strengthen BSM program management, including fully implementing modernization management controls in such areas as cost and schedule estimating. The recent BSM assessments identified many weaknesses, consistent with prior GAO findings, that contributed to the cost overruns and schedule delays, and offered recommendations to address them. IRS has responded by identifying 46 discrete issues to be resolved; according to the agency, 27 of these have been completed. Commitment of appropriate resources, top management attention, and continuing oversight by Congress and others are critical to the success of BSM.
Justice is responsible for collecting criminal debt and has delegated operating responsibility to its FLUs within all of Justice’s U.S. Attorneys’ Offices (USAO). Justice’s Executive Office for United States Attorneys (EOUSA) provides administrative and operational support, including support required for debt collection, to the USAOs. The criminal debt collection process typically begins when an offender is convicted and a judge orders the offender to pay a fine or restitution. In addition to Justice, the U.S. Courts and their probation offices may assist in collecting monies owed. AOUSC provides national standards and promulgates administrative and management guidance, including standards and guidance required for debt collection, to the various U.S. judicial districts. The courts typically receive payments of fines and deposit them in the Crime Victims Fund. Both the courts and certain FLUs receive restitution payments, which are disbursed to the applicable victims or entities as directed by the courts. In our 2001 report, we noted that collection of outstanding criminal debt was a long-standing problem, with many of the problems cited similar to problems that we reported on back in 1985. Aside from the question of whether those convicted had earnings or assets with which to pay fines or restitution, a number of other factors make collection difficult. These factors, listed below, remain applicable today: Criminals may not be willing to comply with the law. Forcing compliance is difficult because criminals are already convicted offenders who may be serving time in prison or may have been deported. Imprisoned offenders have limited earning capacity, making potential collections limited. A significant amount of time may pass between offenders’ arrest and sentencing, thus affording opportunities for offenders to hide fraudulently obtained assets in offshore accounts, shell corporations, family members’ names and accounts, or other ways. MVRA requires that assessment of restitution be based on actual loss and not on an offender’s ability to pay. Therefore, depending on the nature of the crime, collection of the total restitution assessed may be unrealistic from the outset. According to 18 U.S.C. section 3613 (2000), most criminal debts must remain on the books for 20 years plus the period of the offender’s incarceration and cannot be “written off” prior to the expiration of those periods unless the debtor is deceased or the court approves a petition of remission filed by USAO. Even if Justice determines that certain criminal debts, or a large percentage of them, are not collectible, these debts must remain on the books. To provide detailed information on the amount and growth of criminal debt, including specific amounts related to white-collar financial fraud, we obtained information from Justice on the amount of (1) outstanding criminal debt as of the end of fiscal years 2000, 2001, and 2002 and (2) related collections for each of these 3 fiscal years. This information has not been audited. However, we reviewed the trends in the amounts and growth of overall criminal debt for these fiscal years. Specifically, we analyzed trends in major components of the debt and reasons for the changes and compared them to similar trends that we had assessed and discussed in our 2001 report. We also discussed the trends with appropriate Justice officials and compared overall criminal debt information provided by those officials to information in existing published Justice reports, when available. 
We worked with Justice officials to identify the criminal debt categories in Justice’s information systems that Justice considers to be white-collar financial fraud. We obtained an understanding of the key automated information systems Justice uses to track criminal debt amounts and related collections through discussions with Justice officials and review of pertinent documents that describe the systems. We also discussed with Justice officials and obtained appropriate documentation supporting reliability testing performed by Justice on these systems. To evaluate actions Justice has taken to implement our previous recommendations, we obtained and reviewed pertinent Justice documents, including correspondence to certain congressional committees related to our 2001 report, relevant memorandums, summaries of work performed, proposed actions, revised policy and procedures manuals, and other related materials and correspondence. We discussed the documents provided by Justice and the status of implementation of each of the recommendations with an appropriate Justice official. We conducted our review from March 2003 through mid-December 2003 in accordance with U.S. generally accepted government auditing standards. We requested written comments on a draft of this report from the Attorney General or his designated representative. Justice’s letter is reprinted in appendix I. Justice reported an unaudited amount of total outstanding criminal debt of about $25 billion as of September 30, 2002, almost double when compared to Justice’s unaudited amount from 3 years earlier. This marked increase over the 3-year period continued a significant upward trend that started in fiscal year 1996, the year MVRA was enacted. Given MVRA’s requirement that restitution be assessed regardless of the criminal’s ability to pay, the significant increase in the balance of reported uncollected criminal debt was not unexpected. According to Justice’s unaudited records, collections relative to outstanding criminal debt averaged about 7 percent for fiscal years 1995 through 1999 and decreased to an average of about 4 percent for fiscal years 2000, 2001, and 2002. For each of these latter 3 fiscal years, according to Justice’s unaudited records, about two-thirds or more of the criminal debt was related to white-collar financial fraud. As shown in figure 1, Justice’s reported criminal debt outstanding totaled approximately $16 billion, $20 billion, and $25 billion as of September 30, 2000, 2001, and 2002, respectively. Criminal debt owed consists primarily of fines and federal and nonfederal restitution related to a wide range of criminal activities, including domestic and international terrorism, organized drug trafficking, firearms crimes, and white-collar financial fraud. According to Justice officials, nonfederal restitution stemming from MVRA’s mandatory restitution requirements was the major component of criminal debt outstanding as of September 30, 2000, 2001, and 2002. Justice’s unaudited records showed that nonfederal restitution accounted for about 70 percent of total reported criminal debt as of September 30, 2002. This proportion is generally consistent with what we found for fiscal year 1999, which we reported in our 2001 report. At that time, about 66 percent of outstanding criminal debt as of September 30, 1999, was nonfederal restitution debt. According to Justice’s unaudited records, collections of outstanding debt did not increase, and in fact fell slightly, over this 3-year period. 
As shown in figure 1, collections for fiscal years 2000, 2001, and 2002, totaled about $1 billion, $800 million, and $800 million, respectively, or an average of about 4 percent of outstanding debt for the 3 years. In our 2001 report, we reported that criminal debt collection averaged about 7 percent for fiscal years 1995 through 1999. As shown in figure 2, a major component of criminal debt was debt related to white-collar financial fraud, which, according to Justice’s unaudited records, totaled about $11 billion, $13 billion, and $17 billion as of September 30, 2000, 2001, and 2002, respectively, or about two-thirds or more of overall outstanding criminal debt at the end of each of these years. White-collar financial fraud debts included fines and restitution related to fraud against business institutions, antitrust violations, bank fraud and embezzlement, bankruptcy fraud, computer fraud, consumer fraud, federal procurement fraud, federal program fraud, health care fraud, insurance fraud, and tax fraud. Also included were debts related to corporate financial fraud, which, as of the date of completion of our fieldwork, consisted of fines and restitution related to advance fee schemes, commodities fraud, securities fraud, and other investment fraud. According to Justice’s unaudited records, as was the case for criminal debt overall, the major component of white-collar financial fraud debt for each of the 3 fiscal years was nonfederal restitution, which accounted for about 80 percent of the white-collar financial fraud debt as of September 30, 2002. As shown in figure 3, according to Justice’s unaudited records, collections of debt related to white-collar financial fraud, while increasing, have remained low when compared to total white-collar financial fraud debt outstanding. Such collections totaled about $300 million, $400 million, and $600 million for fiscal years 2000, 2001, and 2002, respectively. Justice has not taken timely action to address all of the recommendations we made to it in July 2001, which were designed to improve the effectiveness and efficiency of Justice’s criminal debt collection processes. Specifically, Justice has not taken action along with certain other agencies to develop a strategic plan for criminal debt collection, which was a key recommendation. In addition, since July 2001, Justice has completed action on only 7 of the 13 interim recommendations that were made to stem the growth of reported uncollected criminal debt while Justice and certain other agencies worked to develop the strategic plan. Actions to address 4 of these 7 recommendations were completed about 2 years after we made the recommendations, and actions to address the remaining 6 interim recommendations are still in process. One indication of Justice’s level of resolve to expeditiously improve collection success is the timeliness of a required response to the Congress. Heads of federal agencies are required to submit a written statement within an established time frame to certain congressional committees on actions taken in response to recommendations we make in a report. Justice did not submit its statement until 2 years after the date of our report, after we had made inquiries about the status of the statement and Justice’s progress in implementing our recommendations. In our 2001 report, we emphasized that addressing the long-standing problems in the collection of outstanding criminal debt required a united strategy among the entities involved with the collection process. 
In addition to identifying a need to work closely with the U.S. Courts to coordinate criminal debt collection efforts, we stated that leveraging OMB’s and Treasury’s current central agency roles could result in effective oversight of the collection of criminal debt. For example, a primary function of OMB as a central agency is to evaluate the performance of executive branch programs and serve as a catalyst for improving interagency cooperation and coordination. In its central role, OMB is also responsible for reviewing debt collection policies and activities. We also noted that Treasury has a central agency role in implementing certain provisions of the Debt Collection Improvement Act of 1996, which would allow it to help Justice identify the types of delinquent criminal debt that would be eligible for reporting and referral to Treasury’s offset program (TOP). To promote a united approach to collecting outstanding criminal debt, we recommended that the Attorney General work with the Director of AOUSC, the Director of OMB, and the Secretary of the Treasury in the form of a joint task force to develop a strategic plan to improve criminal debt collection processes and establish an effective coordination mechanism among all entities involved in these processes. We stated that the strategy should address managing, accounting for, and reporting criminal debt. We also stated that the strategy should include (1) determining an approach for assessing the collectibility of outstanding amounts so that a meaningful allowance for uncollectible criminal debts can be reported and used for measuring debt collection performance and (2) having OMB work with Justice and certain other executive branch agencies to ensure that these entities report and/or disclose relevant criminal debt information in their financial statements and subject such information to audit. It is important to reemphasize the need for assessing the collectibility of outstanding criminal debt amounts and establishing and reporting a meaningful allowance for uncollectible debts. According to Justice, about 74 percent or more of reported criminal debt amounts in its records for fiscal years 2000, 2001, and 2002 were in suspense, meaning that no collection action was being taken on the debt because it had been determined that reasonable efforts to collect were unlikely to be effective. However, we emphasized in our 2001 report that Justice had not performed an analysis of its criminal debt to estimate how much of the outstanding amounts was uncollectible and had not established an allowance for uncollectible debt for amounts that were due to the federal government. We specifically noted that since the collectibility of outstanding criminal debt had not been assessed, the amount in suspense did not represent a reliable estimate of the amount that was expected to be uncollected. We also discussed the importance of subjecting criminal debt amounts to independent audit, which would include assessments of internal controls and compliance with applicable laws and regulations related to the criminal debt process. Further, we noted that proper accounting for, reporting, and managing of criminal debt would heighten management awareness and ultimately result in a more effective collection process. As of the completion date of our fieldwork, Justice had not begun to develop, in conjunction with AOUSC, OMB, and Treasury, a written strategic plan for collection of outstanding criminal debt. 
In December 2001, Justice’s EOUSA sent letters to AOUSC, OMB, and Treasury citing our 2001 report on criminal debt collection and our recommendation to form a joint task force to develop a strategic plan to improve criminal debt collection and establish effective coordination between each of the involved entities. According to a Justice official, the purpose of the letters was to solicit representatives from each of the agencies to assist in this effort. However, this initial attempt to form the joint task force was unsuccessful. The official stated that on account of our recent inquiries about this recommendation, EOUSA plans to make another attempt to contact appropriate officials at the other agencies. The Justice official also stated that both EOUSA and AOUSC have to address certain internal deficiencies, including systems problems, before they can effectively develop a strategic plan. As previously mentioned and discussed in more detail in our 2001 report, addressing the long-standing problems in the collection of outstanding criminal debt—including fragmented processes and lack of coordination— will require a united strategy among the entities involved with the collection process. The participation and cooperation of each of these entities, including AOUSC, OMB, and Treasury, are critical to the formation of the joint task force and development of a strategic plan, as recommended. Justice cannot require these agencies to participate in the joint task force and development of the strategic plan. However, Justice is a key federal agency responsible for the collection of criminal debt and, as such, is accountable for enlisting all affected agencies’ support in a sustained effort to develop a strategic plan and cohesive approach for managing, accounting for, reporting, and improving the collection of such debt. It is important to note that Justice has begun to get criminal debts into TOP. According to a Justice official, during the first part of fiscal year 2003, Justice piloted the TOP process for criminal debts in four districts, resulting in inclusion of about $700,000 of criminal debts in TOP by the end of fiscal year 2003. This official told us that with the progress of the pilot program, the debt referral program was expanded in August 2003 to all eligible FLUs. According to the official, as of December 5, 2003, 20 of the 43 districts eligible to submit criminal debts to TOP had either added criminal debts to TOP or were in the process of identifying criminal debts and sending out 60-day notices to debtors demanding payment, which is necessary before a debt can be sent to TOP. As of December 3, 2003, FLUs had submitted 549 criminal debts, with a total outstanding balance of approximately $1.4 million, to TOP, and Justice anticipates many more debts will be included in TOP in the next few months. Given that TOP has resulted in over $1 billion in nontax debt collections from payment offsets governmentwide during each of fiscal years 2000, 2001, and 2002, it will be important for Justice to continue to emphasize submitting debts to TOP as an integral part of its criminal debt collection efforts, as such action could increase potential collections. We recognized at the time of our 2001 report that the development of a strategic debt collection plan with other agencies that have a key role to play in criminal debt collection would take time. 
Therefore, to help improve collections and stem the growth in reported uncollected criminal debt while Justice worked with other agencies to establish the task force and develop the strategic plan for criminal debt collection, we made 13 recommendations for interim action to the Attorney General. As shown in table 1, Justice has completed action on 7 of these recommendations. Four of the 7 recommendations, however, were not completed until about 2 years after we made them. Actions to address the 6 remaining recommendations are still in process. Since the interim recommendations largely focused on policies and procedures, it will be important that they be effectively implemented once they are established. The status of each of our 13 interim recommendations is discussed below. Recommendations for which corrective actions have been completed are discussed first. In May 2003, Justice’s EOUSA took action to address recommendation 2 by issuing the Prosecutor’s Guide to Criminal Monetary Penalties. The guide contains information on the obligations and responsibilities of criminal prosecutors and others involved in the criminal debt collection process to increase the likelihood that victims of crime are compensated for their losses. EOUSA has provided the guide to all entities involved in the collection of criminal debt at Justice, including prosecuting attorneys, investigating case agents, and FLU staff. The guide is also available on Justice’s intranet. This guide requires prosecutors to ensure that the responsible FLU receives all available information on a defendant’s financial resources by (1) forwarding a copy of the presentence report to the FLU; (2) providing the FLU with any information or pleading in the government’s file on a defendant’s financial resources not obtained through the grand jury investigation; (3) filing a motion asking the court to order disclosure to the FLU of any information gathered by the grand jury, and to make the disclosure as soon as it is ordered; and (4) ordering the transcript of any hearing in which a defendant’s financial resources were discussed, such as a bond hearing, and forwarding the transcript to the FLU. According to a Justice official, case agents work directly with the prosecuting attorneys and share any information, including financial information, with the prosecutors before a judgment on a case is issued. The Justice official noted that once a judgment in a criminal case is issued, it generally is sent from the courts to the criminal prosecutor within 1 week, and once the prosecutor receives the judgment, the financial information is shared with the responsible FLU. In September 2003, EOUSA completed actions to address recommendation 4 by issuing a memorandum to all Financial Litigation Supervisors and FLUs requiring that each FLU establish policies and procedures to ensure that all FLU cases are effectively prioritized and enforced pursuant to a priority system. The memorandum contained guidance, including factors to consider in assigning priority codes (e.g., the debtor’s assets and income, type of debtor, type of debt, type of victim, complexity of the case); default priority codes based on the amount of the debt; information on setting review dates; and implementation procedures, including a list of fields and codes to be used in Justice’s new system for tracking debts, and milestone dates for completion of the review and prioritization of all existing cases. 
According to a Justice official, the guidance for establishing a priority system is fairly general to allow each district to set its own priorities based on the type of debt typically collected at that district. According to the memorandum, effective October 1, 2003, all new judgments should be prioritized using the priority system; by December 31, 2003, FLUs should review all pre-existing judgments with an original debt balance of $1 million or more; by March 31, 2004, FLUs should review all pre-existing judgments with an original debt balance of $100,000 to $999,999; and by December 31, 2004, to the extent resources permit, FLUs should review all remaining pre-existing judgments. Although priority-setting is currently a manual process, once Justice’s new system has been updated, which according to the Justice official is scheduled for May 2004, the priority codes will be incorporated into the new automated priority process. In January 2002, EOUSA completed actions to address recommendation 5 by sending a memorandum to all U.S. Attorneys, all First Assistant U.S. Attorneys, and all Civil Chiefs, concerning our 2001 report. The memorandum generally noted the findings in the report and encouraged each district to review its policies and procedures for collecting and enforcing criminal debt in light of the report. The memorandum also offered the assistance of the districts’ Financial Litigation Program Manager in implementing or improving criminal debt collection policies and procedures. EOUSA has also worked to reinforce current policies and procedures by developing and providing training materials to its staff involved in debt collection. Moreover, EOUSA’s periodical DebtBeat, which is available to all USAOs, private counsel, and client agencies, regularly provides updates on debt collection issues, including any modifications to debt collection policies and procedures. EOUSA used the May 2003 prosecutor’s guide to respond to recommendation 6. Specifically, the guide requires FLUs to issue a demand letter for payment of a debt for each case opened within 30 days of the judgment. To facilitate collection, the guide further specifies that the demand letter should inquire whether the defense attorney will continue to represent the defendant for collection purposes. EOUSA also used the May 2003 guide to address recommendation 7. As stated in our 2001 report, FLUs lacked procedures for performing certain debt collection actions in a timely manner, including (1) entering cases into their tracking systems; (2) filing liens; (3) sending demand, delinquent, or default letters; and (4) performing asset discovery work. The prosecutor’s guide provides a specific time frame for performing each of these actions. It requires that for each case opened for collection, the responsible FLU should, at a minimum, take the following steps within 30 days of the judgment: open and record the case; initiate the filing of a lien where possible; issue a demand letter; and conduct an initial assessment of the prioritization and collectibility of the case, which would include performing asset discovery work. The guide also states that the responsible FLU should provide notice to the defendant of any fine or restitution payment that is found to be delinquent or in default within 10 working days after the delinquency or default occurs. To address recommendation 11, according to a Justice official, Justice annually assesses each district based on established collection goals for that district. 
The official stated that because of the differences in size of caseloads and types of cases worked, it does not make sense for EOUSA to establish nationwide goals. Instead, each district establishes and is measured against its own collection goals. To assess debt collection performance and compliance with applicable guidance and regulations at each district, EOUSA uses (1) a goals-setting package, which includes instructions for completing goals based on each district’s workload and collections; (2) a state-of-the-district report, which provides 3 years of detailed district-specific collection statistics to allow each USAO to evaluate its own collection activities based on historical experience; and (3) a compliance checklist, which provides FLUs with an opportunity to review their current policies and procedures to ensure compliance with EOUSA requirements. According to the Justice official, EOUSA works with each district to prepare these tools annually, and each district uses them to determine needed actions to improve criminal debt collection. Justice has also assessed its FLUs’ human capital resources and training to respond to recommendation 13. According to a Justice official, although EOUSA did not prepare a formal written assessment of FLUs’ human capital resources, EOUSA has assessed FLU human capital resources and determined that FLUs are understaffed and need more staff or contractors to perform debt collection activities. However, to date, EOUSA has not been successful in requesting additional staff for debt collection. Nevertheless, the Justice official noted that EOUSA did receive funding, beginning in fiscal year 2002, through the Office for Victims of Crime to support asset investigations in criminal debt collection cases. The Office for Victims of Crime provides 50 percent of the funding for asset investigators, with the remaining 50 percent to be funded through the Three Percent Fund. Therefore, half the asset investigators’ time may be spent on postsentencing criminal fine and restitution debt collection cases. The asset investigators’ services are available through the Financial Litigation Investigator Program. Prior to fiscal year 2002, these investigators were limited to working solely on civil debts because funding for their time was exclusively through the Three Percent Fund. Justice is in the process of taking corrective actions to address the remaining 6 recommendations. Specifically, actions taken to address parts 1 and 2 of recommendation 1 are still in process. In July 2003, EOUSA rolled out to all USAOs a new version of its collections case tracking system. The new system allows for the tracking of all debt components in a single record for each debtor, thus eliminating the need to open multiple records to track collections for a single debtor. Also, many of the required fields, such as collection types and agency program codes, have been coded to eliminate duplicative data entry by the user. However, additional upgrades, such as automatic payment posting to debtor accounts, are still under development and are scheduled to be completed during fiscal year 2004. According to a Justice official, complete implementation of this recommendation depends on AOUSC upgrading its automated criminal debt tracking systems. 
The Justice official stated that full reconciliation of payment information between FLUs and the courts will not be possible until AOUSC fully implements its new Civil/Criminal Accounting Module system, which, according to the official, is not expected to be completed until 2005. Actions to address recommendations 3, 8, 9, and 10 are also in process at Justice. We emphasized in our 2001 report the importance of documenting key steps in the criminal debt collection process to help ensure that all opportunities for collection were being pursued. We also noted that because FLUs do not consistently assess interest and penalties, the reported amounts do not accurately represent how much total principal, interest, and penalties are due. We stressed that failure to assess interest and penalties reduces the amount that can be recovered and passed along to victims or the federal government and eliminates a tool designed to give debtors an incentive to make prompt payments. According to a Justice official, the Financial Litigation Working Group, which Justice established in February 2002 in part to address our recommendations, will continue to work toward fully implementing these open recommendations. Finally, Justice is in the process of taking corrective actions to address recommendation 12. According to a Justice official, EOUSA’s system programmers are currently developing automated tracking of debtor status from incarceration through probation. EOUSA plans to have such automated tracking available during fiscal year 2004. In addition, according to the official, EOUSA is working to determine how to allocate outstanding criminal debt amounts between amounts likely to be collected and amounts not likely to be collected, which is critical for effective use of debt collection resources. The long-standing problems in the collection of outstanding criminal debt—including fragmented processes and lack of coordination—continue, as there is no united strategy among key entities involved with the collection process. According to Justice’s unaudited records, during fiscal years 2000, 2001, and 2002, criminal debt increased significantly, but collections decreased slightly. Until Justice takes actions to fully implement our previous recommendations to it to improve criminal debt collection efforts, including forming a joint task force with AOUSC, OMB, and Treasury and developing a strategic plan to improve the criminal debt collection processes, the effectiveness of criminal fines and restitution as a punitive tool may be diminished, and Justice’s management processes and procedures will not provide adequate assurance that offenders are not afforded their ill-gotten gains and that innocent victims are compensated for their losses to the fullest extent possible. Therefore, we reaffirm those recommendations made to Justice from our 2001 report on which Justice has not completed action. In written comments on a draft of this report, which are reprinted in appendix I, Justice’s EOUSA said that the draft report did not fully reflect EOUSA efforts to improve the criminal debt collection process by implementing the recommendations from our 2001 report and by taking additional actions that go beyond the specific recommendations made in that report. We disagree. As stated in this report, Justice has not taken timely action to address all of the July 2001 recommendations, which were designed to improve the effectiveness and efficiency of Justice’s criminal debt collection processes. 
Most important, from the standpoint of resolving key jurisdictional issues and functional responsibilities, Justice has not taken action along with certain other agencies to develop a strategic plan for criminal debt collection. Of the 13 interim recommendations made to stem the growth of reported uncollected criminal debt while Justice and the other agencies worked to develop the strategic plan, Justice completed action on only 7. Actions to address 4 of these 7 recommendations were completed about 2 years after we made them, and actions to address the remaining 6 interim recommendations are still in process. In support of its view that it has taken extensive implementation action, EOUSA referred to a June 16, 2003, letter and stated that excerpts from this letter were included with its comments. We are not aware of a June 16, 2003, letter; however, all of the excerpts contained in EOUSA’s comments are included verbatim in Justice’s July 15, 2003, letter to the Congress regarding actions EOUSA had taken in response to recommendations we made in our 2001 report. Justice submitted this letter 2 years after the date of our 2001 report, and after we had made inquiries about the status of Justice’s response to the Congress regarding Justice’s implementation of our recommendations. In accordance with 31 U.S.C. 720, the head of a federal agency is required to submit a written statement of the actions taken on our recommendations to the Senate Committee on Governmental Affairs and to the House Committee on Government Reform not later than 60 calendar days from the date of the report and to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 calendar days after that date. Moreover, as stated in this report, to evaluate actions Justice has taken to implement our previous recommendations, we obtained and reviewed pertinent Justice documents, including correspondence to certain congressional committees related to our 2001 report. As such, in drafting our report, we fully considered each of EOUSA’s assertions that were contained in the previously mentioned excerpts from its letter. Our responses to specific parts of these excerpts appear in appendix I. EOUSA also stated that our draft report failed to address its comments on our 2001 report that responsibility for accounting for and reporting criminal debt does not rest with Justice. In our 2001 report, we stated that Justice’s comments related to accounting for and reporting of criminal debt, plus the lack of response from AOUSC regarding its position on this issue, illustrated the need for cooperation and coordination in the criminal debt collection area. Thus, we emphasized the need for the development of the previously mentioned strategic plan to improve the criminal debt collection processes and establishment of an effective mechanism to coordinate efforts among all entities involved in these processes. We noted that the strategic plan should address managing, accounting for, and reporting of criminal debt. It is important to note that, as stated in our 2001 report, both Treasury and OMB agreed that criminal debt should be reported on either Justice’s or the U.S. Court’s financial statements. Finally, EOUSA stated that our 2001 report focused on asset investigation resources and that EOUSA has put particular emphasis in this area. 
EOUSA also stated that even though it has fully implemented more than half of our recommendations, with the remaining ones nearing completion, collections have decreased slightly since our 2001 report. As previously stated, actions to address 4 of the 7 fully implemented recommendations were completed about 2 years after our 2001 report, and actions to address the 6 remaining recommendations are still in process. Since these interim recommendations largely focused on policies and procedures, it is important that they be effectively implemented once they are established, and it will likely take some time for collection results to be realized from full implementation. Moreover, as stated in our report, the debt collection strategy to be developed by the task force should include determining the collectibility of outstanding criminal debt amounts so that a meaningful allowance for uncollectible debt can be reported and used for measuring debt collection performance. We also stated that proper accounting for, reporting of, and managing of criminal debt would heighten management awareness and ultimately result in a more effective collection process. Identifying debts with the best prospects for collection will allow more efficient targeting of limited collection resources in order to maximize collections. As agreed with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of the Subcommittee on Financial Management, the Budget and International Security, Senate Committee on Governmental Affairs, and the Subcommittee on Government Efficiency and Financial Management, House Committee on Government Reform. We will also provide copies to the Attorney General, the Director of the Administrative Office of the U.S. Courts, the Director of the Office of Management and Budget, and the Secretary of the Treasury. Copies will then be made available to others upon request. The report will also be available at no charge on GAO’s Web site, at http://www.gao.gov. If you have any questions about this report, please contact me on (202) 512- 3406 or Kenneth R. Rupar, Assistant Director, on (214) 777-5714. Other key contributors to this report are Linda K. Sanders and Michael D. Hansen. The following are GAO’s comments on the Department of Justice’s (Justice) Executive Office for United States Attorneys’ (EOUSA) letter dated January 23, 2004. 1. See “Agency Comments and Our Evaluation” section. 2. Our 2001 report responded to a request that we review the federal government’s collection of criminal debt, primarily fines and restitutions. As such, our review resulted in numerous recommendations to Justice and the Administrative Office of the U.S. Courts (AOUSC) aimed at addressing the fragmented processes and lack of coordination among those entities involved in debt collection and at helping to improve collections and stem the growth in reported uncollected criminal debt. For this report, we were requested to examine the extent to which Justice has acted on the recommendations we made in our 2001 report to improve criminal debt collection. We acknowledge in our report Justice’s use of the Prosecutor’s Guide to Criminal Monetary Penalties to address recommendations 2, 6, and 7. 
Our follow-up work did not focus on certain areas covered by the guide, including charging defendants and negotiating plea agreements, because such issues were not part of the scope of our 2001 report or of this report. 3. We acknowledge in this report that the Financial Litigation Working Group was established in part to address the recommendations we made in our 2001 report and will continue to work toward fully implementing certain open recommendations. 4. Writing legislative proposals that will remove barriers to enforcement of criminal debts, such as clarifying that payment schedules set forth in court orders are minimum payments due and do not prohibit enforcement of the total amount of the obligation imposed, is consistent with our 2001 recommendation to AOUSC to revise the language in the Judgment in a Criminal Case forms to clarify that payment terms established by judges are minimum payments and should not prohibit or delay collection efforts. Although we did not recommend such action to Justice, its initiative to address this concern makes sense. 5. We acknowledge in our report that Justice provided the prosecutor’s guide to all entities involved in criminal debt collection at Justice, and we credit the guide with addressing recommendation 2 by requiring prosecutors to ensure that responsible Financial Litigation Units (FLU) receive all available information on a defendant’s financial resources. 6. We acknowledged and explained in our report EOUSA’s State of the District Report and Compliance Checklist in relation to actions taken to address recommendation 11. 7. We are aware of EOUSA’s hiring of an independent contractor to perform a requirements analysis for a new debt collection system. However, as of the completion date of our fieldwork, according to an EOUSA official, Justice was in the process of reviewing the contractor’s work, and we could not obtain a copy of the contractor’s report until the review was complete. Therefore, we are unable to comment on the results of the contractor’s review. However, we acknowledge in our report EOUSA’s new version of its collections case tracking system, including its recent and planned upgrades designed to reduce the data entry responsibilities of FLUs. 8. We provide in our report detailed information on Justice’s efforts to add criminal debts to the Treasury Offset Program. 9. Our July 2001 report addressed many factors that have had an impact on the effectiveness of the criminal debt collection process. That report resulted in numerous recommendations to Justice and AOUSC to improve debt collection. Justice has taken action to enhance its asset investigations resources. In our discussion of Justice’s efforts to address recommendation 13, we acknowledge EOUSA’s receipt of funding, beginning in fiscal year 2002, through the Office for Victims of Crime to support asset investigations in criminal debt collection cases. 10. Assets identified by outside investigators, combined with fervent debt collection efforts, could result in potential collections on outstanding criminal debts. If investigators found assets for approximately $50 million of the $150 million of criminal debts referred to them, the potential collection rate for such assets might well exceed the average collection rates being experienced by Justice. 11. Although we are aware of EOUSA’s contract for credit bureau report services, the issue of credit bureau report services did not directly relate to any particular recommendation made in our 2001 report. 
Therefore, the contract was not addressed in this report. However, we agree that credit bureau report services, if properly applied, can enhance FLUs’ ability to assess a debtor’s ability to pay. 12. We acknowledge in our discussion of EOUSA’s actions to address recommendation 5, that EOUSA has worked to reinforce policies and procedures by developing and providing training materials to its staff involved in debt collection. 13. In our 2001 report, we recommended that Justice perform an analysis to assess whether FLUs’ human capital resources are adequate to effectively perform their collection activities. In our discussion of Justice’s actions to address recommendation 13 in this report, we acknowledge that EOUSA has assessed FLU human capital resources and determined that FLUs need more staff or contractors to perform debt collection activities. We further state that, to date, EOUSA has not been successful in requesting additional staff for debt collection. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
In July 2001, GAO reported that outstanding criminal debt, as reported in Department of Justice (Justice) statistical reports, had increased from about $6 billion as of September 30, 1995, to more than $13 billion as of September 30, 1999. Although some of the key factors that contributed to this increase were beyond Justice's control, GAO concluded--after accounting for such factors--that Justice's criminal debt collection processes were inadequate. Accordingly, in the 2001 report, GAO made 14 recommendations to Justice to improve the effectiveness and efficiency of its criminal debt collection processes. To follow up on the 2001 report, GAO was asked to (1) provide information on the amount and growth of criminal debt for fiscal years 2000 through 2002, (2) examine the extent to which Justice has acted on GAO's previous recommendations, and (3) review Justice's collection efforts for selected criminal debt cases related to white-collar financial fraud. This report addresses the first two objectives; GAO will report separately on its ongoing work to address the third. Justice reported an unaudited amount of total outstanding criminal debt of about $25 billion as of September 30, 2002, almost double when compared to Justice's unaudited amount from 3 years earlier. This increase, which was not unexpected, continued a trend that began in fiscal year 1996. A primary factor contributing to the increase is a mandate that requires restitution to be assessed regardless of the ability of the offender to pay. As we reported in 2001, collections as a percentage of outstanding criminal debt averaged about 7 percent for fiscal years 1995 through 1999. As indicated in Justice's unaudited records, because collections decreased slightly while debt increased, collections as a percentage of outstanding debt declined to an average of about 4 percent for fiscal years 2000, 2001, and 2002. For each of these 3 fiscal years, according to Justice's unaudited records, about two-thirds or more of criminal debt was related to white-collar financial fraud. Justice has made progress responding to GAO's 2001 recommendations related to criminal debt collection, but not to the degree that had been expected. A key recommendation in 2001 was for Justice, the Administrative Office of the U.S. Courts, the Office of Management and Budget, and the Department of the Treasury to work as a joint task force to develop a strategic plan that addresses managing, accounting for, and reporting criminal debt. As of mid-December 2003, Justice had not yet worked with these other agencies to develop this plan. We also made 13 interim recommendations to Justice to help improve the efficiency and effectiveness of criminal debt collection while the strategic plan was being developed. Since July 2001, Justice has completed action on 7 of these recommendations; actions to address 4 of the 7 were completed about 2 years after GAO made them. Actions to address the remaining 6 interim recommendations are in process. According to Justice, GAO did not fully recognize its progress in improving the criminal debt collection process. GAO said that it had given Justice full credit for its efforts to implement the 2001 recommendations, as well as for some related efforts outside the scope of those recommendations. GAO noted, however, that Justice had not yet led efforts to resolve key jurisdictional issues and functional responsibilities. 
While acknowledging that Justice was laying the foundation for improved collections by establishing policies and procedures in response to certain of the interim recommendations, GAO noted that it is important that the new policies and procedures be effectively implemented and that it will likely take some time for collection results to be realized from full implementation. Until Justice takes action to fully implement these recommendations, Justice's management processes and procedures will not provide adequate assurance that offenders are not afforded their ill-gotten gains and that innocent victims are compensated for their losses to the fullest extent possible.
In 1993, in response to calls from local communities and professional associations representing them—such as the U.S. Conference of Mayors—to provide more and better coordinated federal support for brownfields, EPA began providing grants to select communities to conduct assessments of the potential contamination at brownfield sites. EPA got involved because lenders’ and developers’ fear that contamination would lead to long and costly cleanups was often one of the first barriers to redeveloping these sites. But communities wanted more help than EPA could provide with its assessment grants. Therefore, in July 1996, EPA created the Interagency Working Group on Brownfields, with staff from more than 20 federal agencies, and the Interagency Steering Committee, with senior management representatives from these same agencies. According to EPA, both groups were created to provide a forum for federal agencies to exchange information and develop a coordinated national strategy for brownfields that focused on both environmental and redevelopment issues. Subsequently, while developing this strategy, EPA asked the members of the Working Group to identify specific actions that the federal agencies would take to support brownfield redevelopment and the funding they would obligate for these activities for fiscal years 1997 and 1998. EPA collected this information from the agencies, which totaled more than 100 action items and plans to invest about $469 million—$304 million predominantly in grants and another $165 million in loan guarantees. On May 13, 1997, the administration publicly announced the planned financial assistance and more than 100 action items as part of its Brownfield National Partnership Action Agenda initiative, along with the goals of improving agencies’ coordination of their brownfield activities and achieving specific economic benefits for communities. One of the major actions under the initiative is the brownfield Showcase Communities project. In fiscal year 1998, the Working Group selected 16 pilot communities from eligible applicants. The agencies intended for these communities to demonstrate to other communities how they can use federal support to successfully clean up and redevelop brownfields. According to EPA’s Director of Outreach and Special Projects, the agencies hope to develop models of how 16 very different types of communities, such as those in rural, urban, and coastal areas, successfully worked with federal agencies to redevelop their unique brownfield properties. The 10 agencies in our review reported that they provided about $413 million —$272 million primarily through grants and $141 million in HUD loan guarantees in financial assistance for brownfield activities in fiscal years 1997 and 1998. EPA, HUD, and EDA within the Department of Commerce were responsible for $409 million, or 99 percent of this assistance. Table 1 below outlines the planned federal investment for brownfields as stated in the Partnership Agenda, and actual obligations and loan guarantees for fiscal years 1997 and 1998 that agencies reported went for brownfield-related activities. In making this dollar comparison, it is important to understand the basis of the planned assistance stated for the three primary agencies in the Partnership Agenda, illustrated in the middle column of the table above. About one-half of the $304 million planned assistance was funding that the agencies could actually commit to obligate on brownfields during the 2-year period. 
For example, EPA decided to obligate $125 million because it received this amount of new appropriations available for brownfields during this time frame. While HUD’s planned financial assistance through grants as stated in the Partnership Agenda was for $155 million, agency brownfield managers clarified that the agency could only commit to spend $25 million because it received this amount of appropriations for its new Brownfield Economic Development Initiative (BEDI) program. Most of the remaining $130 million was an estimate of the amount of fiscal year 1997 grant funds that communities might choose to use for brownfields under the agency’s Community Development Block Grant program. HUD could not commit to spend a specific amount of grant funds on brownfields because under this program, grant recipients have broad discretion in how they can use the funds. EDA brownfield managers explained that this was also the case for the $17 million presented as the agency’s planned financial assistance in the Partnership. EDA could not commit to spend these funds on brownfields through its economic development grant program because the agency responds to locally identified economic development needs, which may or may not include brownfield redevelopment needs. As a result, EDA cannot estimate the amount of brownfield-related funding assistance that communities will request or the agency will award in any given fiscal year. In comparing these amounts in the Partnership Agenda to the amounts of actual brownfield obligations and loan guarantees the agencies achieved, as illustrated in the right-hand column of the table, we determined that agencies may have obligated more than they reported but not all of these obligations were a result of the Partnership initiative. HUD can document the amount of BEDI funds obligated for brownfields, since this program is dedicated to such activity. However, HUD does not separately track the amount of its Community Development Block Grant funds spent at brownfield sites. Consequently, we could not determine the exact amount of federal funds the Partnership agencies were using on brownfields. More specifically, for the Partnership Agenda, HUD estimated that grant recipients might use up to $100 million during fiscal year 1997 in Community Development Block Grant funds on brownfield-related activities. On the basis of some survey and anecdotal information from grant recipients, HUD brownfield managers estimated that recipients probably spent more than $100 million on brownfields through this program during that year. But the Department cannot demonstrate the extent to which communities used grant funds for brownfields because communities have wide discretion in using the funds and the Department does not track them by this category. EPA and EDA can track their brownfield obligations—EPA because it received its appropriations for brownfields separately and EDA because recipients identify whether they are using grant funds for brownfield-related activities in either their application or their status report on the use of the funds. While EDA can track that it awarded $114 million in grant funds during the 2 fiscal years to communities that used the funds for brownfield-related activities, compared to the $17 million in planned assistance for the agency in the Partnership Agenda, EDA does not attribute its actual brownfield obligations directly to the Partnership initiative. 
Rather, according to EDA managers, since the beginning of the program in 1965, the agency had been awarding grants for the revitalization and reuse of idle and abandoned industrial facilities, now called brownfields, as a core component of its mission to aid the nation’s most economically distressed communities. For example, historically EDA had funded projects to bring about the reuse of closed military facilities; now the agency is counting these activities as brownfield projects. HUD anticipates an increase in the amount of funds going to brownfields in the future. Some communities were not certain if funds under HUD’s Community Development Block Grant program could be used to address environmental contamination at certain brownfield properties. HUD’s fiscal year 1998 appropriations provided that states and communities may use Community Development Block Grant funds for the cleanup and redevelopment of brownfields, and the agency’s fiscal year 1999 appropriations extended this change to all future fiscal years. According to HUD officials, the agency will update its block grant regulations to add that addressing environmental contamination is an allowable activity under the program. HUD is also considering modifying one of the three primary national objectives under its Community Development Block Grant program—preventing or eliminating slums or blight —to clarify that since environmental contamination and economic disincentives contribute to blight, block grant funds can be used to address these concerns. HUD expects that this change will encourage the use of block grant funds for brownfields. EDA has added brownfield redevelopment as a category for which communities can receive priority consideration for grant funds. Under EDA’s program, once an applicant meets the agency’s basic grant criteria, if the applicant plans to use the funds on brownfields, the agency can give the applicant priority for a grant award. EPA, HUD, and EDA distributed their funds primarily through grants and loan guarantees to communities that used them for activities ranging from assessing a site to conducting some cleanup and on-site construction. In March 1998, we reported on EPA’s use of brownfield funding. We reported that the agency obligated the majority of its fiscal year 1997 and 1998 funds for brownfields through (1) grants to state, local, and tribal governments to assess the nature and extent of contamination at these properties in order to promote their cleanup and redevelopment; (2) seed money to these governments to establish revolving loan funds that help to pay for actual cleanup activities; (3) grants to states to develop voluntary programs that provide incentives for developers to clean up and redevelop brownfields; and (4) grants and funding support for pertinent research, outreach to community groups, job training for performing hazardous waste cleanups, and other related activities. Over those same 2 fiscal years, HUD provided funds primarily through (1) the Brownfield Economic Development Initiative (BEDI) grant program, (2) its Section 108 loan guarantee program, (3) the Community Development Block Grant program, and (4) its programs to abate the risks of lead-based paint. HUD awarded its BEDI grants specifically to communities to use the grants for activities such as site cleanup or purchasing a brownfield property and selling it to a private party at a discount price in exchange for the property’s redevelopment. 
HUD must make economic development grants, including the BEDI grants, in conjunction with loan guarantees for, among other things, the acquisition and rehabilitation of properties. Communities have used their Section 108 loan guarantees to pursue larger-scale redevelopment activities, including public facilities and physical development projects, such as acquiring a failed shopping center for rehabilitation. As for its Community Development Block Grant program, HUD recently surveyed a small number of its grant recipients (80 out of about 1,000), who voluntarily provided information on the use of their grants. On the basis of these data, HUD managers stated that a majority of these recipients are spending some portion of their funds on brownfield-related activities, such as cleaning up contaminated soil and groundwater and removing asbestos and lead from sites. During fiscal year 1998, HUD also awarded one community in Boston a grant under its lead-based paint program, which the community used to clean up lead-contaminated soil at approximately 56 parcels of brownfields that were then converted into housing units.

In these same fiscal years, EDA provided funds for brownfield redevelopment through several of its grant programs. Communities used these funds for a variety of brownfield-related activities, including redevelopment planning; the development of inventories of abandoned, idle, and underutilized properties using geographic information systems; economic assessments of brownfield parcels; building renovation and repair, historical rehabilitation, demolition, and new construction; support for revolving loan funds for cleanup activities; and brownfield research studies. For example, one recipient used an EDA grant at a brownfield site to rehabilitate half of a large building in a former industrial complex. The environmental contamination had already been cleaned up before the recipient received the grant. Another recipient is using its grant to construct a Bioscience Park Center at a former Defense medical facility site that EDA classifies as a brownfield. EDA grant recipients have reported that their communities are using no more than about 10 percent of their funds on actual cleanup.

In May 1997, the administration announced that through its Brownfield National Partnership Action Agenda, it intended to bring together the resources of more than 20 federal agencies to better coordinate federal support so as to empower communities to redevelop their brownfields. The administration reported that the agencies would provide a total of $469 million in financial assistance by implementing more than 100 brownfield action items and that this assistance was expected to result in the (1) leveraging of additional private investments in brownfields, (2) creation of new jobs, and (3) protection of greenfields. The 10 federal agencies in our review have improved both their internal and external coordination of brownfield activities and have accomplished most of their respective Partnership actions, thereby increasing the federal government’s role in brownfield redevelopment. However, the administration cannot tell whether the initiative is meeting the economic goals because most agencies are not tracking these results or collecting data specific to brownfields that would allow them to do so.
Officials of most of the 10 federal agencies in our review stated that they are better coordinating their actions to address brownfields, both within their own agencies and between agencies. Individual communities and the professional associations that represent them also agreed that federal coordination had improved, although they noted that they still face the administrative burden of managing multiple federal grants and that some states and counties were not included in these efforts at improved coordination. More than half of these agencies reported that, to participate in the Partnership, they established informal internal working groups to better identify what programs and funding within their own agencies could be used to address brownfields. Moreover, agencies’ involvement in the Partnership, such as helping to select the showcase communities, has increased their awareness of other agencies’ resources available for brownfields. Consequently, agencies can better direct communities to the right sources, depending on the type of assistance the communities need.

Some agencies have also signed a memorandum of understanding in which they established joint policies and procedures for conducting brownfield projects. For example, EPA and the National Oceanic and Atmospheric Administration signed such a memorandum, agreeing to, among other things, provide to coastal communities information on brownfields and training on conducting assessment, cleanup, and redevelopment activities. These efforts, according to the agency brownfield managers, have resulted in a more efficient federal approach to brownfields. In another example, the General Services Administration (GSA) helped Denver to redevelop a major brownfield property that, according to the agency’s brownfield managers, otherwise probably would have sat in the agency’s inventory. The city wanted to turn the federal property, located in a depressed area, into an industrial park that would provide jobs and commerce. The city had already attracted grants from five different federal agencies for the project but could not get the money unless GSA transferred the property. Once GSA became aware of the other agencies’ support through its involvement in the Partnership and community efforts, the agency was able to expeditiously transfer the property.

Similarly, HUD brownfield managers reported that their coordination with other agencies has made them more sensitive to the agencies’ requirements. For example, these managers explained that HUD invited two EPA staff to participate on its panel to select BEDI grant recipients, and the staff provided valuable insights about how the grant recipients might manage contamination issues at their sites. In another instance, brownfield managers for the U.S. Army Corps of Engineers claimed that agencies were saving a significant amount of money by better coordinating and not duplicating the support they could bring to the showcase communities. Furthermore, several agencies have revised or expect to revise a number of federal regulations as a result of the Partnership. One of the more recent and significant actions that could promote more redevelopment, according to EPA’s brownfield director, was a change in the lending guidelines for Federal Home Loan Banks that encourages lending institutions to provide financial assistance to certain brownfield projects. Perhaps the most evident example of coordination is the Showcase Communities project.
According to the city development managers from two of the longest-running showcase communities, in Salt Lake City, Utah, and Dallas, Texas, they are now better aware of the federal resources available to support brownfield redevelopment and how to access them and are getting more technical and financial help from agencies. They also highlighted that federal agencies are now more willing to participate in joint efforts, such as forums and periodic teleconferences, to help the communities overcome any hurdles. The managers acknowledge that a major reason for this success is that EPA loaned a staff person to each city, under the Intergovernmental Personnel Act, for 2 years. For each city, the managers report, this staff person has been invaluable in identifying available federal resources, such as grant programs; helping the city to apply to each relevant agency for these funds; and providing technical assistance, such as information on the extent of cleanup required at brownfield sites. Staff who are managing brownfield issues for four professional associations representing cities, states, and other community stakeholders indicated that coordination among federal agencies had improved, especially in the 16 showcase communities. While federal coordination has increased, local community officials stated that little has been done to reduce the burdensome administrative processes involved in obtaining federal financial assistance. In fact, according to the city manager in Salt Lake City, the rules and regulations governing one HUD program were so onerous and time-consuming that the city chose not to pursue the funding. The HUD brownfield managers acknowledged that federal requirements to ensure grant and loan guarantee recipients are financially accountable for federal funds can be burdensome. The local managers further pointed out that cities not participating in the showcase pilots may not be able to afford to provide the type of staff resource that they had obtained from EPA to assist them in applying for and managing grants from the various agencies. Two of the associations representing state cleanup agencies and county governments also noted that some states and counties are concerned that the federal agencies are bypassing them by meeting with and providing funding directly to municipalities. The EPA brownfield manager explained that the agency had met with state and local government officials when developing the Partnership Agenda and that better coordination with states was beginning to happen at the regional level in some areas. While the manager noted that EPA has been awarding some of its brownfield assessment grants to counties, the Partnership was late in inviting counties to participate. Officials of the 10 federal agencies in our review stated that their agencies had accomplished 63 of their 71 nonfinancial action items in the agenda, or about 89 percent. In our meetings with them, the officials reported that they conducted most of these activities within their ongoing programs and had not established a formal system to separately track their progress in accomplishing the action items in the Partnership Agenda. In general, the action items included implementing changes to existing policies that had presented barriers to brownfield redevelopment, providing technical support to communities, providing information to agencies and communities on federal avenues to support brownfield redevelopment, and conducting brownfield research. 
For example, HUD issued a joint study with EPA on the redevelopment of brownfields, specifically spotlighting the effects of environmental hazards and regulation on urban redevelopment. The Department of Transportation (DOT) issued a new policy repealing its past policy of avoiding all contaminated properties when undertaking new transportation projects. The new policy encourages state departments of transportation, local planning organizations, and local communities to address their brownfield redevelopment in their transportation plans and projects. Agencies had not yet achieved eight of the action items. Officials reported that agencies did not complete two items and did not realize they had made the commitments for two other items. For example, the Partnership Agenda stated that HUD would fund a job training demonstration project in a low-income community, but brownfield managers stated that they did not meet the commitment because they were unaware that it had been made. Agencies dropped the remaining four action items because they were not feasible or the agencies lacked adequate funding. For example, Agriculture did not complete its studies of the economic impacts of revitalizing brownfields because it did not receive funding for this activity. Also, EPA did not issue guidance to its regions on the process to enter into memorandums of agreement with states regarding their voluntary cleanup programs because of negative comments regarding the guidance from key stakeholders, including the states. EPA recently hired a contractor to take an accounting of the more than 100 action items in the Partnership Agenda for all 20 agencies involved. EPA is asking the agencies to report which action items they achieved, which they did not achieve, why they did not achieve them, how the actions enhanced support for redeveloping brownfields, and what specific examples of this they could provide. EPA expects a final report this summer. The Partnership’s expected economic outcomes of new jobs, more private investment, and protected greenfields were estimates of potential long-term benefits, generated from economic models, that might result from the federal support for redeveloping brownfields. They were not goals that the agencies could measure and achieve within the 2-year period of the Partnership initiative. For example, HUD brownfield managers noted that it would take 3 to 5 years after construction is complete at a site before all anticipated jobs are created. Similarly, EDA brownfield managers stated that it may take up to 10 years beyond the completion of a project funded under its grant programs for a community to realize the full economic benefits from the project. While EPA brownfield managers stated that the strategy of the Partnership Agenda was to achieve these long-term outcomes through the action items, there was no documented strategy that showed how all of these individual action items, such as distributing information or providing technical support, were linked in a way that would result in these economic benefits. Also, most federal agencies generally do not have the comprehensive data necessary to determine the extent to which the economic benefits will be achieved, according to the EPA managers. 
For example, communities applying to EPA for grants to assess the contamination at a site may include an estimate of the number of jobs they expect to generate if they subsequently clean up and redevelop it or the amount of private sector funds they will leverage, and EPA has been compiling these voluntary estimates; however, EPA does not require recipients to submit such data and cannot verify the accuracy of these estimates. HUD will be able to track the number of jobs created at those brownfield sites addressed through its BEDI grants. HUD also tracks the number of jobs created under its Community Development Block Grant program. Recipients provide these data in their annual reports to HUD on their use of the grant funds. HUD can determine which of these jobs were created in communities with low and moderate income but cannot determine which of these jobs were specifically created at brownfields or as a result of the Partnership initiative. EDA grant recipients, beginning with fiscal year 1997 grants, are required to report on the number of permanent jobs created or retained and the private sector dollars invested as a result of brownfield projects that EDA funded. The EPA brownfield managers said that they planned to ask agencies to provide whatever data they have available on the economic benefits achieved through their grant programs and compile this information as an indicator of the success of the Partnership initiative. We provided copies of a draft of this report to EPA, HUD, and EDA for review and comment, since they are the primary agencies involved in federal brownfield efforts. We also provided portions of the draft that pertained to the remaining Partnership agencies in our review to them for their comment. The agencies generally agreed that the report accurately describes their brownfield activities. Representatives from EPA, including the Director of the Outreach and Special Projects Staff, the organizational unit that manages all of EPA’s brownfield activities, clarified the goals of the Partnership Agenda and the extent to which agencies can track economic benefits, specific to brownfields, that were generated as a result of the Partnership. We revised the report to more clearly lay out the goals of the Partnership and clarified that the three agencies’ ability to track jobs and other economic benefits generated specifically from their brownfield funding varies. HUD brownfield managers, including the Director of the Community Development Block Grant program, the primary program HUD uses to fund brownfield-related activities, pointed out that the agency was not expected to be able to report an exact amount of block grant funds obligated specifically for brownfields because this program does not have such a separate tracking category for this purpose; we revised the report to include this point. In addition, the agency clarified the extent to which it tracks the number of jobs created as a result of its grant programs. We revised the report to explain that HUD can track jobs created at brownfield redevelopment sites under its BEDI grant program but not under its block grant program. Finally, EDA brownfield managers, including the Assistant Secretary for Economic Development, provided a more detailed description of the types of brownfield-related activities the agency funds and reported that communities may not fully realize the economic benefits from funded activities for up to 10 years after construction of a project is complete. 
We included this expanded description of brownfield activities in the report and also revised the report to clarify that economic benefits from EDA grants would accrue over the long term.

As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days from the date of this letter. At that time, we will send copies of this report to Senator Max Baucus, Senator Christopher S. Bond, Senator John H. Chafee, Senator Frank R. Lautenberg, Senator Barbara A. Mikulski, Senator Robert C. Smith, Representative Sherwood Boehlert, Representative Robert Borski, Representative John D. Dingell, Representative Alan B. Mollohan, Representative James L. Oberstar, Representative Michael G. Oxley, Representative Bud Shuster, Representative Edolphus Towns, and Representative James T. Walsh in their capacities as Chair or Ranking Minority Member of Senate and House Committees and Subcommittees. We will also send copies of this report to the Honorable William M. Daley, Secretary of Commerce; the Honorable Carol Browner, Administrator of EPA; the Honorable Andrew Cuomo, Secretary of HUD; and the Honorable Jacob Lew, Director of the Office of Management and Budget. Copies will also be made available to others on request. If you would like additional information on this report, please call me at (202) 512-6111.

To respond to our first and second objectives—to compare federal agencies’ planned financial investment for brownfields, as stated in the Partnership Agenda, to their actual obligations for brownfields in fiscal years 1997 and 1998, and to describe the purposes of these obligations—we used a structured data collection instrument to request and then review the fiscal year 1997 and 1998 brownfield-related obligations and activities of the following agencies: (1) the departments of Energy, Health and Human Services, Housing and Urban Development, and Transportation, (2) the Economic Development, the National Oceanic and Atmospheric, and the General Services administrations, and (3) the Environmental Protection Agency. The stated financial commitments pledged by the administration to the Partnership for these eight agencies made up the total $300 million federal investment as well as $165 million in loan guarantees. We also reviewed the fiscal years 1997 and 1998 obligations for the Department of Agriculture and the U.S. Army Corps of Engineers. While these two agencies did not have an amount of planned financial investment included in the Partnership Agenda, they did have a number of nonfinancial action items to accomplish. We determined that in implementing these actions, both agencies could have obligated funds for brownfields, so we included them in our review. We interviewed those managers responsible for brownfield-related activities in these 10 agencies to confirm that they agreed with the proposed financial assistance as stated in the Partnership Agenda for them. We discussed the extent to which the agencies achieved the planned spending, and we obtained corroborating documentation where available. We also discussed with them the primary reasons why they were not able to obligate funds equal to the planned amounts in the agenda.

To respond to our third objective—to determine the extent to which agencies met the Partnership’s goals and objectives—we used a structured survey to obtain the brownfield managers’ perspectives on these issues.
We confirmed with the managers their agency’s interpretation of the Partnership’s goals and objectives as stated in the May 1997 announcement and determined the extent to which the agencies adopted these or other goals. We next asked them to demonstrate the extent to which they met these goals and to provide documentation where possible. Furthermore, we discussed the primary reasons for any unmet goals.

We also selected 2 of the 16 brownfield showcase communities to review, one in Salt Lake City, Utah, and one in Dallas, Texas. We chose these two because they were among the first communities selected for this pilot and therefore had the longest experience with it. We used a structured survey to obtain community officials’ views on the benefits and limitations of the federal agencies’ approach to providing them brownfield assistance under the pilot and on any ways in which the federal government could improve this support.

Finally, we also met with representatives of several professional associations that have responsibility for brownfield issues. We selected the following associations because they represent community interests and have been most active in the area of brownfields: the U.S. Conference of Mayors, the National Association of Counties, the Association of State and Territorial Solid Waste Management Officials, and the National Association of Local Government Environmental Professionals. We discussed with them their understanding of the Partnership initiative and overall federal involvement in brownfields, the benefits and limitations they observed from this involvement, and ways in which the federal government could improve its support for redeveloping brownfields. We also obtained and reviewed the results of any studies they had done on the issue of brownfields.

We conducted our work from June 1998 through March 1999 in accordance with generally accepted government auditing standards.

Eileen Larence, Assistant Director
DeAndrea Michelle Leach, Evaluator-in-Charge
John Johnson, Senior Evaluator
Pursuant to a congressional request, GAO provided information on the status of 10 federal agencies' efforts to implement the Brownfield National Partnership Action Agenda, focusing on: (1) comparing federal agencies' planned financial assistance to brownfields, which are abandoned, idle, or underused industrial facilities, to their actual spending for brownfields in fiscal years (FY) 1997 and 1998; (2) describing the purposes of these obligations; and (3) determining the extent to which agencies met the Partnership's goals and objectives. GAO noted that: (1) during FY 1997 and FY 1998, the 10 federal agencies GAO examined reported that they provided about $413 million in assistance to brownfields, as compared to the Partnership's planned financial assistance of $469 million; (2) brownfield managers at the Department of Housing and Urban Development (HUD) also told GAO that the agency may have provided more financial assistance for brownfields than it reported because it provided most of its financial assistance through its Community Development Block Grant program; (3) about one-half of the total assistance that agencies provided for grant programs was from new funds made available for brownfields; (4) the remainder represented funds that the agencies had traditionally been providing to low-income and depressed communities under their community and economic development grant programs, not new or reprogrammed funds for brownfields; (5) HUD, the Environmental Protection Agency, and the Economic Development Administration were responsible for $409 million, or 99 percent of the assistance provided; (6) the three agencies used most of the funds to make grants and loan guarantees to communities; (7) the 10 federal agencies in GAO's review reported achieving better coordination and accomplishing their brownfield action items but do not have comprehensive data to determine the extent to which this will result in the expected economic benefits of jobs and private investment in brownfields; (8) the agencies reported that they increased their ongoing coordination as a result of the Partnership initiative, most noticeably through their showcase community projects; (9) the agencies also completed about 89 percent of their action items in the Partnership Agenda, such as revising policies that were barriers to brownfield redevelopment and providing communities more information about available assistance, predominantly as part of their ongoing programs; and (10) however, the extent to which the Partnership initiative is meeting the economic goals--creating new jobs, leveraging additional private investments in brownfields, and preserving greenfields--cannot be determined because most agencies are not tracking all of these outcomes or collecting data specific to brownfields that would allow them to do so.
Established in 1965 under title XIX of the Social Security Act (SSA), Medicaid is the nation’s health care financing program for low-income families and certain people who are age 65 or older or disabled. The program accounted for about $244 billion in federal and state expenditures in fiscal year 2002 and covered an estimated 53 million people. The states and the federal government share Medicaid spending according to a formula that provides a more generous federal match for states where per capita income is lower. Medicaid is an open-ended entitlement program, meaning that the federal government is obligated to pay its share of expenditures for all people and services covered under an HHS-approved state Medicaid plan. To qualify for federal matching payments, state Medicaid programs are required by law to cover certain categories of beneficiaries, including pregnant women and children with family incomes below specific limits, as well as individuals with limited income and assets who are age 65 or older or disabled. State programs are also required to cover certain services, including physician and hospital services and nursing home care. As long as states meet federal requirements and obtain HHS approval for their state Medicaid plans, they have considerable flexibility in designing and operating their programs. For example, states may choose to expand coverage to seniors whose incomes are above statutory limits, and all states have opted to provide prescription drug coverage. In addition, section 1115 of SSA permits the Secretary of HHS to waive certain statutory requirements applicable to Medicaid to allow states to provide services or cover individuals not otherwise eligible for Medicaid and to provide federal funding for services and populations not usually eligible for federal matching payments.

The Pharmacy Plus initiative allows states to provide a prescription drug benefit to certain Medicare beneficiaries, specifically seniors and disabled people, with incomes at or below 200 percent of the federal poverty level (FPL). Typically, Medicaid eligibility under an approved state plan provides access to all state Medicaid-covered services, but eligibility under a Pharmacy Plus demonstration covers only a prescription drug benefit. The premise behind the initiative is that expanded access to medically necessary drugs will help keep low-income seniors healthy enough to avoid medical expenses that could cause them to “spend down” their resources to the point of Medicaid eligibility. The initiative assumes that budget neutrality for pharmacy-only coverage can be achieved by savings to Medicaid from fewer seniors’ enrolling for full benefits, as well as from improved access to prescription drugs, improved service delivery or medication management, and better management of drug benefit costs.

Unlike some other section 1115 demonstration waivers, the Pharmacy Plus initiative requires a participating state to accept a fixed spending limit as part of its budget neutrality agreement with HHS. This spending limit—sometimes called an aggregate spending limit or global budget cap—applies not only to services and beneficiaries in the state’s demonstration drug program, but also to all services for all Medicaid seniors in the state. The Pharmacy Plus budget neutrality approach limits the amount the federal government will match for a demonstration according to expected growth in both service costs and enrollment (see app. I).
Once a state has reached its Pharmacy Plus spending limit, it cannot receive additional federal matching dollars for any Medicaid services for seniors in the state, nor can the state restrict enrollment of seniors who qualify for full Medicaid benefits. Under the Pharmacy Plus scenario, a state accepts the financial risks inherent in a fixed budget cap for unanticipated changes in both cost and enrollment growth. For some other section 1115 demonstrations, budget neutrality is based on a projected per capita cost for each demonstration beneficiary. This other scenario sets a limit on spending per person, but because federal matching funds are available for all people who enroll, a state does not have to accept financial risk for unexpected growth in enrollment. As of May 2004, HHS had approved four states’ Pharmacy Plus demonstration proposals, denied two, and considered proposals from nine other states. All four approved demonstrations—Florida, Illinois, South Carolina, and Wisconsin—are to operate for 5 years, during which time they might enroll a total of half a million low-income individuals age 65 or older for the new prescription drug coverage. HHS denied two demonstration proposals, from Delaware and Hawaii, because they were not consistent with Pharmacy Plus guidelines. Of the remaining nine proposals, one was withdrawn by the state and others have been on hold since fall 2003, when Congress was considering Medicare prescription drug legislation. At the time we completed our work, legislation providing a new drug benefit through Medicare had been enacted, but HHS had not determined how the new drug program would affect the Pharmacy Plus initiative. HHS has approved Pharmacy Plus demonstrations for low-income seniors in four states: Florida, Illinois, South Carolina, and Wisconsin. As of May 2004, all four demonstrations had been implemented and under way for at least 17 months: Illinois’, Florida’s, and Wisconsin’s demonstrations were implemented in 2002, South Carolina’s in 2003 (see table 1). Together, the four approved demonstrations are projected to enroll as many as 527,800 individuals for Medicaid prescription drug benefits only; as of April 2004, they reported combined enrollment of nearly 372,200 people. Illinois’ demonstration is the largest, with expected enrollment for the drug benefit of more than 250,000 seniors over 5 years. As of April 2004, more than 192,600 people were enrolled in Illinois’ demonstration, the majority of them moved into the Medicaid program from an existing state-funded pharmacy assistance program. All the demonstrations except Florida’s are approved to enroll seniors with incomes at or below 200 percent of FPL, the maximum eligible income established in HHS’s Pharmacy Plus guidance. As approved, Florida’s demonstration covers seniors with incomes from 88 to 120 percent of FPL, but in September 2003, the state submitted an amendment to expand income eligibility to 200 percent of FPL. Illinois also applied in March 2003 to amend its approved demonstration to expand eligibility, in its case to include seniors with incomes at or below 250 percent of FPL. The terms of Illinois’ demonstration approval specifically permit the state to seek this amendment, as long as the state submits data supporting its ability to cover this expansion population at no additional cost to the federal government. As of March 2004, HHS was reviewing both amendments. Projected 5-year costs vary among the four approved demonstrations. 
For Florida, Illinois, South Carolina, and Wisconsin, total combined federal and state Medicaid spending on the new drug benefit alone is expected to be more than $3.6 billion over 5 years, of which the federal share would be approximately $2.1 billion. The combined federal and state Medicaid spending limits for the four demonstrations—for services to all Medicaid seniors in the four states—would total $44 billion over 5 years, with a federal share of at least $25 billion. The estimated 5-year costs solely for the drug benefit range from $477 million in Florida to $1.4 billion in Illinois, and combined 5-year federal and state spending limits (based on projected costs for services to all Medicaid seniors) range from $5.0 billion in South Carolina to $16.7 billion in Florida. When they applied, three of the four states with approved demonstrations already operated state-funded pharmacy assistance programs for seniors. Most beneficiaries eligible for these programs are also eligible for Pharmacy Plus coverage. HHS allows the states to subsume all or a portion of an existing program under a demonstration, as long as the states’ demonstrations propose to expand either the number of beneficiaries or the scope of drug coverage. In other words, the state may not simply secure federal matching dollars for the costs of an existing state-funded drug program with no expansion. To meet this condition, states with approved demonstrations either raised income eligibility thresholds or expanded the scope of drug coverage beyond that of their existing state programs. For example, Florida doubled its maximum monthly benefit from $80 to $160 per person, and South Carolina expanded eligibility to include seniors with incomes from 175 through 200 percent of FPL. Illinois’ demonstration offered a more comprehensive drug benefit than its state-funded program did. Wisconsin did not previously have a state-funded pharmacy assistance program for seniors. In 2003, HHS denied Pharmacy Plus demonstration proposals from two states, Delaware and Hawaii. (See app. II for descriptions of denied, withdrawn, and pending proposals.) Delaware’s proposal was denied primarily because HHS required that the state expand beyond the existing state-funded program and limit coverage to seniors with incomes at or below 200 percent of FPL. Delaware’s state-funded pharmacy assistance program already covered seniors and disabled adults with incomes up to 200 percent of FPL or whose prescription drug costs exceeded 40 percent of their annual incomes. For this reason, the state could not expand either eligibility or coverage and stay within Pharmacy Plus guidelines. Although Delaware proposed adding a pharmacy benefit management component to monitor appropriate prescription use and to control costs, HHS found this proposed change to the existing program insufficient. Hawaii proposed to make prescription drugs available at the discounted Medicaid rate to state residents of all ages with family incomes at or below 300 percent of FPL. This benefit was to be funded through participant cost sharing, manufacturer rebates, and a fixed state contribution of $1 per prescription. HHS’s denial was based primarily on the request to cover individuals with incomes up to 300 percent instead of 200 percent of FPL. 
Other reasons for the denial included the proposed coverage for all state residents, instead of targeting seniors and people with disabilities, and the minimal state financial participation of $1 per prescription in the first year of the demonstration. From January 2002 through May 2004, HHS considered Pharmacy Plus demonstration proposals from nine other states: Arkansas, Connecticut, Indiana, Maine, Massachusetts, Michigan, New Jersey, North Carolina, and Rhode Island. As of May 2004, eight were still pending; one proposal, from Massachusetts, had been withdrawn. Most proposals would cover seniors with incomes at or below 200 percent of FPL; several would also cover adults with disabilities. The drug benefits would generally be comprehensive and require participant cost sharing, which in some cases would include an annual enrollment fee and 20 percent co-payment for each prescription. All but one of the states with pending proposals have state-funded pharmacy assistance programs that they propose to include in whole or in part in their demonstrations. (App. II describes these demonstration proposals.) As of May 2004, most of the pending proposals were not under active review by HHS primarily because the department had not determined the effect of the Medicare prescription drug legislation on the Pharmacy Plus demonstration proposals. HHS officials told us in October 2003 that Arkansas, Rhode Island, and Indiana officials had asked that review of their states’ proposals be put on hold until after Congress had completed consideration of the Medicare legislation. At that time HHS was still reviewing a proposal from North Carolina but regarded proposals from four other states as inactive because longtime negotiations with those states had reached an impasse. Connecticut and New Jersey, for example, already had broad state-funded drug coverage for seniors with incomes up to 200 percent of FPL. In such cases, HHS has been unwilling to approve federal financing for existing state-funded programs. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) will provide seniors access to a Medicare-covered prescription drug benefit and will likely affect how HHS and the states manage the Medicaid Pharmacy Plus initiative. This law gives Medicare beneficiaries the opportunity to enroll for prescription drug coverage to begin on January 1, 2006, and, as an interim measure, the opportunity to enroll for Medicare-endorsed drug discount cards beginning in June 2004. It also directs HHS to establish effective coordination between Medicare plans and state Medicaid and pharmacy assistance programs and to establish a commission to address these and other transition issues. In 2006, the Medicare drug benefit will replace Medicaid as the primary source of prescription drug coverage for low-income seniors who would have been eligible for both full benefits under Medicaid and drug benefits under Medicare plans. Under MMA, individuals with limited assets and incomes below 150 percent of FPL will be eligible for federal subsidies to assist with the drug benefit’s cost-sharing requirements. But because Pharmacy Plus demonstrations in Illinois, South Carolina, and Wisconsin cover individuals with incomes above 150 percent and at or below 200 percent of FPL regardless of other assets, some current demonstration beneficiaries may not qualify for these subsidies. Pharmacy Plus beneficiaries are likewise ineligible for the Medicare drug discount cards. 
As of May 2004, HHS indicated it was considering how enactment of the new law would affect Pharmacy Plus demonstrations and proposals. Officials from the four states with approved demonstrations told us in December 2003 that they were uncertain how the law would affect their demonstrations, but they had no plans to end the demonstrations early. After the Medicare prescription drug benefit begins in 2006, some demonstrations could be discontinued or modified. Early termination could have an impact on the demonstrations’ budget neutrality, which often depends on savings in later years to offset higher start-up costs. Officials in Illinois and Florida indicated in December 2003 that their pharmacy demonstrations might be converted to state-funded programs in 2006. HHS has not adequately ensured that the spending limits it has approved for Pharmacy Plus demonstrations will be budget neutral—in other words, that the federal government will spend no more under the demonstrations than without them. For all four demonstrations, HHS approved 5-year spending limits based on projections of cost and beneficiary enrollment growth that exceeded benchmarks that department officials told us they considered in assessing the reasonableness of states’ demonstration proposals. These cost and enrollment growth benchmarks incorporate states’ historical experience and expectations for Medicaid program growth nationwide. The discrepancies between the growth benchmarks and the approved growth rates were greatest for Illinois and Wisconsin. Neither HHS’s negotiations with the states nor the department’s rationale for approving higher-than-benchmark growth rates is well documented. Had HHS based the 5-year demonstration spending limits on the benchmark growth rates, the federal share of approved spending would be considerably lower, particularly for Illinois and Wisconsin: specifically, $1 billion lower in Illinois and $416 million lower in Wisconsin. For Florida and South Carolina, the federal share of approved spending would have been $55 million and $42 million lower, respectively. HHS based the Pharmacy Plus demonstration spending limits it approved on a range of estimated future growth rates for cost per beneficiary and for enrollment, which in some cases exceeded benchmarks the department told us it considered in assessing the reasonableness of states’ proposals. A standard Pharmacy Plus application form developed by HHS and a technical guidance document are the chief sources of criteria and formal guidance to states for developing demonstration proposals. But HHS has not established written criteria for how it reviews and approves the growth rates that states propose. These growth rates are key elements in the budget neutrality negotiations between states and the federal government because higher rates result in more generous spending limits, which represent the federal government’s agreed-on maximum spending for all the states’ Medicaid seniors during the demonstrations. An inappropriately high spending level can represent a higher federal liability than warranted. The process used by HHS and the states to determine whether states’ proposed Pharmacy Plus demonstrations will be budget neutral requires comparing two cost estimates: (1) projected 5-year costs of a state’s existing Medicaid program for seniors (“without-demonstration costs”) and (2) projected 5-year costs of the state’s existing program plus the drug benefits and beneficiaries added by the demonstration (“with- demonstration costs”). 
These calculations factor in projected growth in costs and enrollment each year. As long as projected with-demonstration costs do not exceed projected without-demonstration costs, the demonstration can be approved as budget neutral. As a result, the projected costs of a state’s existing, without-waiver Medicaid program for seniors effectively set the spending limit for all services provided to all Medicaid seniors in the state for the 5-year demonstration term. Appendix I outlines the basic steps HHS follows in setting Pharmacy Plus demonstration spending limits.

To determine budget-neutral spending limits for the pharmacy demonstrations, HHS officials told us they consider the following for estimating growth in costs and enrollment through the course of the demonstrations:

For cost growth per beneficiary, similar to guidelines for other types of section 1115 demonstrations, HHS seeks to approve a growth rate equal to the lower of either the state’s historical average annual growth in per-beneficiary cost (that is, the average annual rate for the 5 years before the demonstration proposal) or the nationwide projected growth rate, developed by CMS’s Office of the Actuary, for Medicaid cost per beneficiary age 65 or older.

For enrollment growth, HHS considers the state’s historical average annual growth in enrollment as a starting point and, to a lesser extent, the CMS Actuary’s nationwide rate, but it allows states to present a rationale for a higher rate that anticipates rising future enrollments.
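To make the arithmetic concrete, the sketch below works through the budget neutrality test just described using entirely hypothetical base-year figures; only the logic (compound the growth rates, compare with-demonstration to without-demonstration costs, and treat the without-demonstration projection as the spending limit) and the benchmark rule of taking the lower of the state's historical cost growth or the CMS Actuary's 6.3 percent projection come from this discussion.

```python
# Hypothetical sketch of the Pharmacy Plus budget neutrality arithmetic.
# Base-year enrollment, costs, drug-benefit costs, and assumed savings are
# invented for illustration; they do not describe any actual state.

YEARS = 5

def five_year_costs(base_enrollment, base_cost_per_person,
                    enrollment_growth, cost_growth):
    """Sum projected annual costs over the demonstration, compounding growth."""
    total = 0.0
    enrollment, cost = base_enrollment, base_cost_per_person
    for _ in range(YEARS):
        enrollment *= 1 + enrollment_growth
        cost *= 1 + cost_growth
        total += enrollment * cost
    return total

# Benchmark for per-beneficiary cost growth: the lower of the state's
# historical average or the CMS Actuary's nationwide projection (6.3 percent).
state_historical_cost_growth = 0.070      # hypothetical
actuary_cost_growth = 0.063
benchmark_cost_growth = min(state_historical_cost_growth, actuary_cost_growth)

# Without-demonstration projection: the state's existing Medicaid program for
# seniors. This projection effectively becomes the 5-year spending limit.
without_demo = five_year_costs(base_enrollment=100_000,
                               base_cost_per_person=10_000,
                               enrollment_growth=0.018,   # hypothetical
                               cost_growth=benchmark_cost_growth)

# With-demonstration projection: the existing program plus the drug-only
# benefit, net of the savings the state assumes from diverting seniors.
new_drug_benefit_costs = 400_000_000      # hypothetical 5-year drug costs
assumed_diversion_savings = 450_000_000   # hypothetical savings
with_demo = without_demo + new_drug_benefit_costs - assumed_diversion_savings

print(f"Spending limit (without-demonstration costs): ${without_demo:,.0f}")
print(f"With-demonstration costs:                     ${with_demo:,.0f}")
print("Budget neutral?", with_demo <= without_demo)
```

As the final comparison shows, a state can clear the test only if its assumed savings at least offset the added cost of the new drug benefit, which is why the savings assumptions discussed later in this section matter so much.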
For example: Illinois asserted that its projected annual enrollment growth rate for the demonstration years from 2002 through 2007 should be significantly higher than its 5-year average historical growth rate of 1.6 percent, because income eligibility levels for seniors in its Medicaid program increased from 41 to 100 percent of FPL from July 2000 through July 2002. As support, the state provided HHS with updated Medicaid enrollment data—which were more recent than those included in the original demonstration application and showed increased growth rates for seniors compared with earlier years—but these rates were still lower than the 5 percent HHS approved and did not raise the historical average to 5 percent. The state did not provide documents with actuarial projections of the estimated number of people expected to enroll in Medicaid because of the change in eligibility criteria. Illinois justified applying the 5 percent annual growth rate to all 5 years of the pharmacy demonstration by providing a chart showing that enrollment in a different state program, SCHIP, had grown more than 5 percent per year on average for 3 years after that program’s eligibility criteria were expanded. In our view, however, Illinois’ SCHIP enrollment experience with children does not provide a reasonable basis for predicting enrollment by seniors in the Pharmacy Plus demonstration. Wisconsin asserted that its projected annual enrollment growth rate for the demonstration years should be significantly higher than either its 5-year unadjusted historical growth rate of 0.01 percent or the 0.12 percent rate based on 3 years of historical data reported in its application because of the anticipated effects of a nationwide Social Security Administration mail outreach program to low-income Medicare beneficiaries. This outreach program informed seniors enrolled in Medicare about other benefits, including Medicaid assistance for Medicare cost-sharing requirements, for which they might qualify. Wisconsin officials told us they proposed a 4 percent future annual enrollment growth rate for seniors in the expectation that this outreach program, along with factors including an aging population and the economic downturn, would increase Medicaid enrollment. According to HHS, Wisconsin did not document any projections of how many newly eligible Medicaid individuals could be prompted to enroll after the Social Security Administration outreach mailing. Instead, it submitted information based on a review of a similar outreach effort in Minnesota. According to Wisconsin state officials, during negotiations HHS proposed 1 percent as a more reasonable growth rate, and HHS and state officials agreed to an approved enrollment growth rate of 2 percent per year. Our related work suggests that Wisconsin may be justified in claiming some increase in Medicaid enrollment as a result of the outreach program, but the effect appears to be less than 1 percent. Notably, although the Social Security Administration mail outreach program was nationwide, HHS did not consider its effects when approving enrollment rates for other states. Application of benchmark rates for projected per-beneficiary cost and enrollment growth would have produced lower spending limits for all four approved Pharmacy Plus demonstrations (see table 3). 
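The sketch below, using an invented base-year spending figure, shows how much difference the enrollment growth rate alone makes when compounded over the 5-year term. The 1.6 percent historical and 5 percent approved rates and the 50 percent federal match are the Illinois figures cited in this discussion; the per-beneficiary cost growth rate is held at the CMS Actuary's 6.3 percent so that only the enrollment effect shows.

```python
# Illustrative compounding of Illinois' historical (1.6 percent) versus
# approved (5 percent) enrollment growth rates. The base-year spending figure
# is hypothetical; per-beneficiary cost growth is fixed at the CMS Actuary's
# 6.3 percent so that only the enrollment assumption varies.

BASE_YEAR_SPENDING = 2_000_000_000   # hypothetical spending on all Medicaid seniors
COST_GROWTH = 0.063                  # CMS Actuary's nationwide projection
FEDERAL_MATCH = 0.50                 # Illinois' federal matching rate

def five_year_limit(enrollment_growth):
    """5-year spending limit implied by compounding both growth rates."""
    total, spending = 0.0, BASE_YEAR_SPENDING
    for _ in range(5):
        spending *= (1 + enrollment_growth) * (1 + COST_GROWTH)
        total += spending
    return total

benchmark_limit = five_year_limit(0.016)   # historical enrollment growth
approved_limit = five_year_limit(0.05)     # approved enrollment growth
difference = approved_limit - benchmark_limit

print(f"Limit at 1.6% enrollment growth: ${benchmark_limit:,.0f}")
print(f"Limit at 5.0% enrollment growth: ${approved_limit:,.0f}")
print(f"Added federal exposure at a 50% match: ${FEDERAL_MATCH * difference:,.0f}")
```

Even with an invented base, the compounding makes the point: in this example, the higher enrollment assumption produces a 5-year limit about 11 percent higher, and the federal government is exposed to half of that difference through its matching payments.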
Benchmark-based limits on combined federal and state spending would be approximately $3 billion lower over 5 years than what HHS approved for the four demonstrations, and the federal share alone would come to about $1.6 billion less. The higher-than-benchmark growth rates HHS approved for Illinois and Wisconsin accounted for most of these differences. Had the spending limit for Illinois’ demonstration, in particular, been based strictly on the benchmark rates, combined federal and state spending would have been almost $2.2 billion, or 15 percent, lower, and the federal government’s liability under the demonstration (at the state’s 50 percent federal matching rate) would have been lower by more than $1 billion. The difference is less pronounced for Wisconsin, where the approved federal and state spending limit exceeds, by about $713 million, what it would have been had benchmark rates been applied, translating into about $416 million in additional federal spending. The spending limits HHS approved for Illinois and Wisconsin exceed estimates based on consistent application of the benchmark growth rates by 15.4 percent and 8.5 percent, respectively. The limits approved for Florida and South Carolina, while not budget neutral compared with the benchmark spending estimates, reflect relatively small differences. Florida’s approved spending limit exceeds the benchmark estimate by less than 1 percent—$94 million of a 5-year approved federal and state spending limit of nearly $16.7 billion—and South Carolina’s approved spending limit exceeds the benchmark by 1.2 percent, or $60 million.

CBO has similarly reported that Pharmacy Plus demonstrations are likely to increase federal Medicaid spending. Before passage of MMA, CBO estimated that the Pharmacy Plus demonstrations would add about $18 billion to federal Medicaid spending over the 10 years from 2004 through 2013. According to CBO officials, the agency considered a range of scenarios for how the initiative might grow with new demonstration approvals and estimated the initiative’s overall effect on Medicaid spending. The officials told us that CBO did not include any of the demonstrations’ projected savings in its analysis because it did not find the argument that savings would occur convincing.

Neither data from state experience nor other research supports the savings assumptions necessary for budget neutrality in the Pharmacy Plus demonstrations. In developing their demonstration proposals, states assumed that keeping low-income seniors healthy—thus preventing them from spending down their financial resources on health services and “diverting” them from Medicaid eligibility—would generate savings to help offset the increased costs of providing a new drug benefit. Without state-specific evidence, HHS approved savings assumptions negotiated with the states, including significant projected reductions in Medicaid senior enrollment. But the limited research available suggests that potential health care savings due to improved access to prescription drugs are likely to be much less than the levels the states assumed and HHS approved. Had more conservative savings assumptions been used to estimate the demonstrations’ costs, the proposals likely could not have been approved as budget neutral. Moreover, concerns have arisen about what actions states might take to control spending on behalf of seniors if estimated savings do not accrue and states reach or exceed their spending limits under the demonstrations.
The approved Pharmacy Plus demonstrations count on expected savings based on reductions in the projected number of seniors who will enroll in states’ Medicaid programs—ranging from a 3 percent reduction in Florida to a 25 percent reduction in South Carolina over the demonstrations’ 5 years. The dollar amounts of combined federal and state savings projected under these assumptions in the demonstrations’ budget neutrality calculations range from $480 million in Florida to $2 billion in Illinois (see table 4). To project the extent to which Pharmacy Plus would reduce the state’s new enrollment of Medicaid seniors, and thus its total senior enrollment, Florida made the relatively conservative assumption that the drug benefit would enable seniors to avoid Medicaid eligibility for 1 year; after 5 years, the state’s total projected number of Medicaid seniors would be 5,900 (3 percent) lower with the demonstration than without it. The other states, in contrast, assumed that everyone diverted in each year of their demonstrations would remain out of Medicaid throughout the full demonstration period and would not, for example, enter a nursing home, which often results in Medicaid eligibility. As a result, Illinois, South Carolina, and Wisconsin projected reductions of nearly 20 percent or more in Medicaid senior enrollments at the end of 5 years. Had these states made more conservative assumptions—assuming, for example, as Florida chose to, that providing access to prescription drugs would delay seniors’ entry into Medicaid by only 1 year rather than for the full 5-year demonstration period—their projected with-demonstration costs would have exceeded projected without-demonstration costs and would not have been budget neutral.

Although states’ demonstration proposals aim to achieve savings by expanding seniors’ access to prescription drugs and improving their health, in practice it appears that some states’ estimates of expected savings may have been derived in part by determining how much in savings was needed to demonstrate budget neutrality. In their proposals, none of the three states that previously had state-funded pharmacy assistance programs (Florida, Illinois, and South Carolina) provided data from those programs that specifically supported such high projected savings. Based on conversations with Wisconsin health care financing officials and a review of documents, we found that the state’s demonstration savings estimates were a residual of the budget-negotiating process, derived from determining how much was needed in savings to demonstrate budget neutrality, rather than from research or data about what was realistic.

The premise that Pharmacy Plus demonstrations will generate savings by keeping low-income seniors from becoming Medicaid-eligible is not supported by research. In a previous report, we reviewed the research studies cited in Illinois’ demonstration proposal and found that they did not sufficiently support the state’s theory that a full drug benefit for low-income seniors would yield the projected level of savings. Although these studies indicated that access to prescription drugs benefited people in poor health, they all focused on people who already had specific diagnosed conditions, such as diabetes, heart disease, or HIV, rather than on a general population of seniors. An extensive 2003 review of research examining drug coverage for low-income seniors found relatively few studies about the effect on Medicaid spending of expanded access to a broad prescription drug benefit.
The one study this review considered most relevant, conducted in the mid-1980s, assessed Pennsylvania’s state-funded program, Pharmaceutical Assistance Contract for the Elderly (PACE), and found that despite high enrollment, Medicaid entry among PACE participants was neither prevented nor delayed enough to have a discernible effect on the state’s overall Medicaid budget. Other studies of broad prescription drug benefits for low-income seniors, including one of New York’s program, found some reductions in participants’ health care costs but mainly for inpatient hospital care, which, for people age 65 or older, is covered by Medicare rather than Medicaid. Still other studies in this review examined the more limited question of how access to appropriate drugs affects people already suffering from specific illnesses. Such research sheds little light on the cost-effectiveness of offering comprehensive drug benefits to a broad population of low-income seniors.

Some states that have not submitted Pharmacy Plus proposals examined the diversion and savings assumptions behind the demonstrations and found that they would not likely be realized. For example, in considering whether to apply for a demonstration, Minnesota studied the issue and found a substantial risk that seniors receiving only a drug benefit would eventually become Medicaid-eligible over a 5-year follow-up period. In its optimal model, the study estimated that to generate enough savings to offset the new drug costs, the risk of Medicaid entry would have to be reduced by 50 percent for non-nursing-home enrollees and by 30 percent for those who become eligible after entering a nursing home. Minnesota Medicaid officials concluded that this scenario was not realistic and dropped the state’s Pharmacy Plus demonstration proposal. Pennsylvania also conducted a Pharmacy Plus demonstration feasibility study for PACE and the related Pharmaceutical Assistance Contract for the Elderly Needs Enhancement Tier (PACENET) programs, which together enrolled about 270,000 seniors in 2002. The study found that to offset drug benefit costs, the programs would need aggressive cost containment, through such approaches as increased co-payments, reduced provider reimbursements, and a preferred drug list. In addition, the study noted that in states with generous drug benefits, savings from expansion to more seniors are particularly difficult to realize because most beneficiaries who would have avoided expensive nursing home care have already done so. As of March 2004, Pennsylvania had not submitted a Pharmacy Plus demonstration proposal.

Although it is early in demonstration implementation, we and others have raised concerns about how states may be affected if savings under Pharmacy Plus do not accrue and the states’ spending reaches or exceeds HHS’s approved spending limits. We noted in our July 2002 report that the Illinois Pharmacy Plus demonstration, as approved, makes several risky assumptions with regard to the extent of the expected savings. If savings do not accrue and spending exceeds the approved limits, the federal government would not be at financial risk, but the states would be, because the spending limits cover services for all the states’ Medicaid seniors. Any expenditures for Medicaid seniors beyond the demonstration’s federally matched spending limit would be entirely the state’s responsibility.
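How much the projected savings depend on the diversion assumption can be illustrated with a small calculation. The sketch below, a minimal Python example using entirely hypothetical figures, compares the 5-year savings credited when every diverted senior is assumed to stay off Medicaid for the remainder of the demonstration, as Illinois, South Carolina, and Wisconsin assumed, with the savings credited under a Florida-style assumption that Medicaid entry is delayed by only 1 year; the number of seniors diverted per year and the avoided cost per senior are illustrative and not drawn from any state’s proposal.

```python
def diversion_savings(diverted_per_year, avoided_cost_per_senior,
                      years=5, stay_off_years=None):
    """Estimate 5-year savings credited for seniors diverted from full
    Medicaid eligibility (illustrative only).

    stay_off_years=None models the permanent-diversion assumption: anyone
    diverted in a given year is counted as off Medicaid for every remaining
    demonstration year. stay_off_years=1 models the more conservative
    assumption that the drug benefit delays Medicaid entry by one year.
    """
    savings = 0.0
    for year in range(1, years + 1):
        remaining = years - year + 1
        if stay_off_years is not None:
            remaining = min(stay_off_years, remaining)
        savings += diverted_per_year * avoided_cost_per_senior * remaining
    return savings

# Hypothetical inputs: 2,000 seniors diverted each year, $12,000 in avoided
# annual Medicaid costs for each senior kept off the program.
permanent = diversion_savings(2_000, 12_000)                   # off Medicaid for all remaining years
one_year = diversion_savings(2_000, 12_000, stay_off_years=1)  # 1-year delay only

print(f"Savings, permanent-diversion assumption: ${permanent:,.0f}")
print(f"Savings, 1-year-delay assumption:        ${one_year:,.0f}")
```

In this example the more conservative assumption cuts the credited savings by roughly two-thirds, which illustrates why projected with-demonstration costs in Illinois, South Carolina, and Wisconsin would have exceeded projected without-demonstration costs had those states used a 1-year-delay assumption.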
Officials in Florida and Wisconsin expressed concerns that their demonstration spending limits, based on fixed rates of growth projected over 5 years, could not be adjusted to reflect unpredictable changes in costs and enrollment growth. One study has raised concerns about the potential effects on Medicaid seniors, noting that as state spending approaches the limit of what the federal government will match, states may feel pressed to reduce optional expansions of eligibility or optional benefits. States could also try to control spending without reducing eligibility or services by lowering provider reimbursements—a step already taken in Illinois, although not in response to pharmacy demonstration enrollment or spending—or by implementing preferred drug lists. As of February 2004, efforts by the states and HHS to evaluate and monitor the four approved demonstrations, and to address some of the research questions the Pharmacy Plus initiative raises, were in their early stages. The four states with approved demonstrations had taken few steps toward implementing the evaluation plans required as a condition of approval, and an independent evaluation of two of the demonstrations, contracted by HHS and started in October 2002, was not scheduled to report until September 2005. In the interim, HHS has not ensured that the states’ required progress reports contain sufficient information for monitoring whether the demonstrations are functioning as intended or that these reports are submitted in a timely manner. As a condition of Pharmacy Plus approval, HHS requires states to design and carry out an evaluation and to report their results after the demonstration ends. States are required to submit a plan for this evaluation in their proposals and in the operational protocols that HHS approves before states begin the demonstrations. Although the four states with approved Pharmacy Plus demonstrations submitted the required evaluation plans—containing research hypotheses, possible outcome measures, and data needs—as of February 2004, they had taken few steps to put their evaluation plans into practice. As HHS requires, the four states’ initial proposals and operational protocols included plans for how they would evaluate whether their demonstrations were working as intended. With some variations, all the plans proposed to address the overall research question of how providing a pharmacy benefit to non-Medicaid-covered seniors would affect Medicaid costs, service use, and future eligibility trends, including whether savings achieved by diverting individuals from Medicaid eligibility would offset the benefit’s cost. The first demonstration proposal, from Illinois, initially contained an extensive plan to assess demonstration outcomes; the plan later changed significantly. The initial plan proposed that the state collect data from sources such as Medicaid and Medicare claims systems, surveys of participants or case-study interviews, and demonstration- specific claims. In terms of outcome measures, Illinois’ plan proposed comparing seniors who do have the drug benefit with seniors who do not on such measures as hospitalization rates, health care service costs, use of emergency room services, and rates and length of nursing home stays. A later version of Illinois’ plan (as described in the state’s operational protocol), however, calls for using existing Medicaid claims data for only one outcome measure, Medicaid spending for seniors. 
Both South Carolina and Wisconsin adopted Illinois’ relatively extensive initial evaluation plan in their demonstration proposals, and as of February 2004, neither South Carolina nor Wisconsin had changed its proposed plan. Florida, which did not submit an evaluation plan in its demonstration proposal, provided a two-paragraph discussion in its operational protocol. This discussion listed several hypotheses and indicators to be monitored, noted that data would be collected using the state’s current Medicaid system, and gave no details about how or when the plan would be implemented. As of February 2004, the states had taken few steps to implement their demonstration evaluation plans or to determine how they would collect or analyze data to support their evaluations. States’ evaluation activities were generally limited to collecting and reporting to HHS data from their existing Medicaid data systems. Although plans for Illinois, South Carolina, and Wisconsin call for starting their evaluations at the start of their demonstrations to draw on data about services used before and throughout beneficiaries’ enrollment, these states and Florida indicated they were just beginning to collect and report data to implement their evaluation plans: Florida and South Carolina officials told us that they had not decided whether their evaluations would be designed and conducted by the state Medicaid agency or by an outside entity such as a university. Neither state had developed an evaluation implementation schedule. Illinois and Wisconsin reported providing extensive state data for HHS’s independent evaluation of their demonstrations but, at the time of our review, had not begun their own evaluations. State officials told us they understood that participating in the independent evaluation would exempt them from conducting their own evaluations. But HHS officials told us that state evaluations were still required. HHS has contracted with independent university researchers for an extensive evaluation of the Pharmacy Plus demonstrations in Illinois and Wisconsin. The evaluation’s goal is to document achievements and difficulties in implementing a Pharmacy Plus demonstration, as well as to identify impacts on entry into Medicaid and on costs to Medicare. According to HHS, the evaluation aims to address whether providing prescription drug benefits to non-Medicaid seniors will keep individuals relatively healthy, divert them from full Medicaid eligibility, and thus lower Medicaid and Medicare costs. To address these issues, the evaluation contract calls for four components of work, including (1) site visits to Illinois and Wisconsin to describe the demonstrations and their implementation; (2) telephone surveys of demonstration beneficiaries in those states about their health status, access to health care, and prior drug coverage; (3) analysis of Medicaid, Medicare, and demonstration claims data to assess patterns of drug use and effects on Medicaid and Medicare costs; and (4) an analysis of enrollment trends in each state’s Medicaid program to determine if diversion assumptions are met. In addition, the evaluation aims to compare the experiences of demonstration beneficiaries with a similar population in another state that does not offer a prescription drug benefit. Final results for all components of this planned 3-year evaluation, which began in October 2002, are scheduled to be reported to HHS by September 2005. 
Specifically, a final report to HHS on the patterns of drug use is due in September 2004; final reports on the demonstrations’ cost effects on Medicaid and Medicare are due in September 2005. The evaluation contract does not indicate when results from the work may be available to other researchers or the public. According to the HHS evaluation project officer, the independent evaluators completed state site visits to Illinois and Wisconsin in July 2003 for the descriptive work component and submitted draft reports to HHS in December. These reports were in review as of March 2004, and the project officer expected them to be approved and posted on HHS’s Web site, although he did not know when posting would occur. A report containing results from the second evaluation component, the telephone surveys of beneficiaries, was expected later in 2004. HHS’s monitoring and reporting requirements, which the states agree to carry out under HHS oversight, are set forth in the terms and conditions attached to each demonstration’s approval letter. Although HHS and the states participated in required telephone conference calls to monitor the demonstrations’ start-up, HHS has not ensured that all states submit the required quarterly and annual progress reports. The lack of sufficient and timely information from progress reports may impair the department’s ability to monitor demonstration operations and accomplishments. Monitoring and reporting requirements are not as clearly established for the Pharmacy Plus initiative as for the Health Insurance Flexibility and Accountability (HIFA) initiative: The HIFA and Pharmacy Plus initiatives both require states to participate with HHS in monthly telephone monitoring calls. For pharmacy demonstrations, however, monthly calls are required for 6 months after implementation and only as needed thereafter; for most approved HIFA demonstrations, monthly calls are unlimited. States with approved HIFA demonstrations are required to submit quarterly progress reports in a format agreed upon with HHS, and demonstration terms and conditions describe the required content of these reports. The terms and conditions for Pharmacy Plus demonstrations are less specific regarding progress report format and content. HIFA demonstrations are expected to submit separate annual reports that discuss progress in evaluating the demonstrations, including results of data collection and analysis to test research hypotheses. Pharmacy Plus annual reports, in contrast, may be combined with or include the fourth quarterly progress report, may follow the same broad content guidelines as quarterly reports, and are not required to report progress in evaluation. As of March 2004, HHS and the four Pharmacy Plus states had participated in the initial monitoring phone calls and begun to gather data on how their demonstrations were working. HHS and the states confirmed participating in monthly telephone calls for the first 6 months and then agreeing to maintain contact as needed. An HHS official told us the department did not set agendas or document these informal contacts, which focused on demonstration operations as states tracked enrollment and began to gather information about drug use and expenditures for new beneficiaries. States reported taking some steps to develop the capacity to report on their demonstrations. 
Florida, Illinois, and Wisconsin, for example, reported having or developing data management systems containing state Medicaid and other data that are capable of generating demonstration- specific reports. South Carolina expected to rely on existing Medicaid data systems. None of the states, however, were tracking the number of demonstration enrollees who had become eligible for Medicaid, although officials in three states reported the ability to do so. Further, the states had not provided information to HHS to assess whether diversion savings were occurring. The information that HHS requires states to report has been insufficient for determining whether the demonstrations are operating as intended. According to one HHS official, HHS has not prescribed a standard format for, or specific information to be provided in, either the quarterly or annual progress reports; rather, the department works with the states to obtain needed information. The Pharmacy Plus terms and conditions stipulate that written quarterly and annual progress reports contain, at minimum, (1) a discussion of events during the quarter, including “enrollment numbers, lessons learned, and a summary of expenditures”; (2) notable accomplishments; and (3) problems and questions that arose and how they were resolved. The same HHS official told us that in response to these general requirements, states’ progress reports did not always include all information considered useful for monitoring purposes. For example, HHS reported that officials were working with Illinois to obtain additional information to complete its draft annual progress report. Illinois’ six-page annual report, submitted in September 2003, reported only on new demonstration beneficiaries and did not include first-year starting or ending enrollment or cost information for the state’s Medicaid senior program as a whole—the services and population affected by the Pharmacy Plus spending limit. One HHS official told us that after review of Illinois’ report, these cost and enrollment data were specifically requested to assess whether the new drug benefit was keeping seniors from becoming eligible for full Medicaid benefits. As of February 2004, Illinois had not provided this information. Finally, HHS has not insisted on timely submission of the required quarterly and annual reports. Although Pharmacy Plus terms and conditions specify that quarterly reports are due 60 days after the end of the quarter, and annual reports are due 60 days after the end of the fourth quarter, HHS has not ensured that states submit the reports on time. Again, the department’s policy is to work with the states toward compliance. As of January 2004, Florida and Wisconsin had submitted all required written quarterly reports, mostly on time, while South Carolina had submitted only one of three required progress reports. Illinois, whose demonstration was the first to be implemented, did not submit any of the three required quarterly reports before submitting its combined fourth quarterly and first annual report early in September 2003. HHS’s approval and monitoring of state demonstrations under the Pharmacy Plus initiative raise cost and oversight concerns and, ultimately, program concerns. The department’s approval of four states’ demonstrations raises questions about HHS’s basis for its decisions. 
Because HHS based the spending limits it approved on higher-than-justified growth rates, these spending limits do not, in our view, represent reasonable estimates of demonstration costs over the 5-year trial periods and are not budget neutral. It was difficult to assess the reasonableness of the spending limits themselves, given that they were decided upon through an undocumented negotiation process, and neither public nor HHS internal documents stated the rationale for approving higher growth rates. We found that if HHS’s benchmarks had been used to establish the spending limits, the federal government’s liability for the four demonstrations could have been $1.6 billion lower over 5 years. Moreover, the approved demonstrations rely on highly questionable assumptions about the extent to which savings would accrue to Medicaid from improved health of people receiving the new pharmacy benefit, particularly since many of them already had pharmacy benefits through existing state-funded programs.

In addition, the Pharmacy Plus initiative raises important evaluation questions about how improved access to prescription drugs may affect seniors’ health and Medicaid and Medicare costs. Although some of these questions will likely be addressed by the independent evaluation of two states’ demonstrations, in the interim HHS does not appear to be ensuring that states provide sufficient, consistent, and timely information for demonstration monitoring or that states begin implementing their own evaluation plans. The limited available information on how these demonstrations are operating makes it difficult to assess whether they are operating as intended.

The concerns about HHS’s approved Pharmacy Plus demonstrations parallel those we have raised about other section 1115 waiver demonstration approvals over the past decade. These include the extent to which the department is protecting the Medicaid program’s fiscal integrity and the need for clear criteria and a public process when HHS reviews and approves demonstrations. Along with the authority to waive Medicaid requirements, and the flexibility given states to test new approaches for delivering services more efficiently and effectively, comes the responsibility for making decisions based on clear criteria and for monitoring the demonstrations and learning from them. More can and should be done to fulfill this responsibility.

In light of our findings that the four HHS-approved Pharmacy Plus demonstrations are likely to substantially increase federal Medicaid spending, as previously approved Medicaid section 1115 demonstrations have done; that HHS’s review process and basis for these approvals have not been clearly set forth; and that approved demonstrations are not all meeting evaluation and monitoring requirements, we are making seven recommendations to the Secretary of HHS related to the section 1115 demonstration process.

To improve HHS’s process for reviewing and approving states’ budget neutrality proposals for Pharmacy Plus and other Medicaid section 1115 demonstrations, we recommend that the Secretary take three actions:

For future demonstrations, clarify criteria for reviewing and approving states’ proposed spending limits.

Consider applying these criteria to the four approved Pharmacy Plus demonstrations and reconsider the approval decisions, as appropriate.

Document and make public the basis for any section 1115 demonstration approvals, including the basis for the cost and enrollment growth rates used to arrive at the spending limits.
To ensure that approved Pharmacy Plus and other Medicaid section 1115 demonstrations fulfill the objectives stated in their evaluation plans, we recommend that the Secretary take two actions:

Ensure that states are taking appropriate steps to develop evaluation designs and to implement them by collecting and reporting the specific information needed for a full evaluation of the demonstration objectives.

On acceptance, make public the interim and final results of HHS’s independent Pharmacy Plus evaluation.

To ensure that the Secretary and other stakeholders have the information needed to monitor approved Pharmacy Plus and other Medicaid section 1115 demonstrations to determine if they are functioning as intended, we recommend that the Secretary take two actions:

Ensure that states provide sufficient information in their demonstration progress reports, in a consistent format, to facilitate the department’s monitoring.

Ensure that states submit required demonstration progress reports in a timely manner.

We provided a draft of this report for comment to HHS and the states of Florida, Illinois, South Carolina, and Wisconsin. HHS and Florida, Illinois, and Wisconsin responded with written comments, which are reproduced in appendixes III through VI, respectively. South Carolina provided technical comments, which we incorporated in our report as appropriate.

HHS concurred with five of our recommendations to strengthen the processes for approving and overseeing Pharmacy Plus and other Medicaid section 1115 waivers and disagreed with two. It concurred with our recommendations to make public the basis for section 1115 demonstration approvals and to ensure that Pharmacy Plus and other Medicaid section 1115 demonstrations fulfill the objectives of their evaluation plans by working with the states toward useful program evaluations and making results of the independent Pharmacy Plus evaluation publicly available. HHS also concurred with our recommendation to ensure that adequate information is available to monitor the demonstrations to determine if they are functioning as intended. In this regard, HHS stated that it has provided each state that has implemented a Pharmacy Plus demonstration with an example of an outline and content to be used as a guide for progress reports and that it will make concerted efforts to ensure that states submit the reports in a timely manner.

HHS did not concur with our recommendation that the Secretary of HHS clarify criteria for reviewing and approving states’ proposed demonstration spending limits, indicating that although the department recognizes the importance of using criteria for reviewing budget neutrality, strict criteria cannot be determined in advance because states’ circumstances differ. HHS also strongly disagreed with our recommendation that the Secretary consider applying clarified criteria to the four approved Pharmacy Plus demonstrations and reconsider the approval decisions as appropriate. HHS stated that it used criteria to review each of the approved, disapproved, and pending demonstration proposals; believes the four approved demonstrations were based on well-supported budget estimates of future state spending; and does not believe it appropriate to reconsider approved demonstrations before the end of the approval periods. We agree with HHS that some flexibility is appropriate in considering the unique Medicaid section 1115 demonstrations proposed by different states.
Consistent with our analyses of other section 1115 demonstration waivers over the past decade, however, we believe HHS’s review process and decision criteria should be clear, and the results—particularly when approved spending limits deviate significantly from limits developed using benchmarks that HHS said it uses as a starting point—should be publicly explained and documented in the demonstrations’ approval letters and terms and conditions. Even though HHS has developed a standard application form for Pharmacy Plus demonstrations, that form and other guidance do not provide written criteria for how HHS reviews and approves the growth rates that states propose. HHS’s rationale for significantly deviating from benchmarks for projecting future program growth in establishing different states’ spending limits has not been documented or made clear to us or to others, including other states that may be seeking approval of demonstration proposals. Without such clarity, questions arise as to how consistently states have been or will be treated in applying for demonstrations. Further, in our view, Pharmacy Plus demonstration approvals were based on questionable savings assumptions. We believe that HHS should establish clear criteria on which to base the spending limits and should reconsider its spending limit decisions for the approved Pharmacy Plus demonstrations in light of such criteria.

HHS also commented that it was premature to evaluate the Pharmacy Plus demonstrations given the limited experience from 12 to 18 months of operation. HHS said that were the outcome predetermined, a demonstration would serve no purpose. The agency believes the Pharmacy Plus initiative provides states an opportunity to use a Medicaid demonstration to test if providing drug coverage will prevent the aged and disabled low-income population from becoming Medicaid eligible. HHS noted that the four approved demonstrations together are providing drug coverage to 346,000 seniors who would otherwise be without this important benefit.

We agree that it is too early to evaluate the outcomes of the 5-year demonstrations and that section 1115 demonstrations are intended to test new propositions. More needs to be done, however, to ensure that states’ evaluations collect the information needed to determine whether those new propositions are functioning as intended. Four states have Pharmacy Plus demonstrations in place to test such propositions, and substantial federal funding is involved, including costs that were previously paid for by the states themselves. For these reasons, HHS has a responsibility to (1) make fiscally prudent decisions in its approvals, (2) ensure that savings hypotheses have some grounding in experience or research, and (3) ensure that the evaluations are planned and conducted in a way that will produce adequate information regarding the demonstrations’ research hypotheses.

We also agree that the demonstrations can provide a valuable benefit to low-income seniors and disabled individuals who might otherwise be without drug coverage. But three of the four states with approved demonstrations had state-funded drug coverage programs in place before implementing their Pharmacy Plus demonstrations, and these state-funded programs became eligible for federal matching funds when the demonstrations were approved. We therefore find HHS’s statement that the demonstrations are providing drug coverage to seniors who would otherwise be without it to be an overstatement.
HHS also commented on how MMA may affect the operation of approved Pharmacy Plus demonstrations and the review of pending and new demonstration proposals. HHS stated that seniors covered by the four Pharmacy Plus demonstrations will be able to begin receiving drug coverage under the Medicare Part D program in January 2006, and states will be able to use their own funds to “wrap around” the Medicare benefit to assist other Medicare beneficiaries whose incomes exceed the level for low-income subsidies. At that time, HHS believes there will be less need for Pharmacy Plus demonstrations, given expanded Medicare coverage for prescription drugs, and states operating the demonstrations will need to decide if they want to continue doing so and if their demonstrations can continue to be budget neutral. We have reviewed and incorporated this new information as appropriate. HHS’s written comments appear in appendix III, along with our response to additional comments that HHS provided on the findings in our draft report. The department also provided technical comments, which we considered and incorporated as appropriate. Illinois and Wisconsin officials commented that our draft report overstated the demonstrations’ financial risk to the federal government and was unnecessarily alarming in light of data showing that the demonstrations are operating well within their spending limits. In its comments, Illinois said that it stood by the growth rates it used to develop the spending limit for its Pharmacy Plus demonstration; it further argued for the soundness of its demonstration’s premise—that providing a drug benefit to seniors will keep them healthier than if they had no drug coverage. In its comments, Wisconsin said that the draft report failed to consider the significant benefits its demonstration offers to the federal government and to seniors. We agree that providing a drug benefit to seniors could keep them healthier, and we do not dispute the benefit to seniors of the states’ drug programs started or expanded through Pharmacy Plus demonstrations. The demonstrations were approved, however, on the presumption that the cost of each state’s prescription drug program would be paid for in savings from keeping seniors with little or no previous drug coverage healthy enough that they would not become eligible for full Medicaid benefits. Illinois’ demonstration was approved on this presumption even though most of the beneficiaries were already receiving some prescription drug coverage through the state’s existing state-funded program. We remain concerned that HHS is not maintaining its policy to ensure demonstrations are budget neutral. Illinois also commented that it had taken all necessary steps to conduct its own evaluation and that it had cooperated fully with federal evaluators and HHS officials. Illinois said that although it officially filed its quarterly reports late, it submitted all the detailed data contained in those reports to CMS monthly. We are principally concerned with the extent to which the information that Illinois provided could be used to monitor whether the demonstration was operating as intended. Its one- to two-page quarterly reports, filed late, tallied the number of beneficiaries enrolled in the demonstration and drug expenditures to date and provided a narrative paragraph on accomplishments, problems, or issues. The information itself, however, furnishes little insight as to whether the demonstration is operating as intended or whether the benefit is reducing Medicaid costs. 
Wisconsin commented that the draft report failed to ascribe any value, for the federal government, to Wisconsin’s agreement to cap its federal Medicaid funding for seniors as a condition of Pharmacy Plus demonstration approval. We believe the draft report accurately captured HHS’s approach to limiting the federal liability for the Pharmacy Plus demonstrations by establishing a “cap,” or spending limit, as a condition of approval. We remain neutral on the “value” of this cap for several reasons. Requiring states to abide by a spending limit is a departure from the open-ended entitlement nature of the Medicaid program. We also recognize that under the Medicaid program, states have considerable discretion to alter spending by increasing—or decreasing—coverage for certain populations and services. In addition, we recognize that HHS’s budget neutrality practices provide for flexibility in approach and that HHS has established such a limit on other section 1115 demonstrations before the Pharmacy Plus initiative.

Wisconsin also commented that the draft report failed to mention that the demonstrations were reviewed, determined reasonable, and approved by OMB. We recognize that OMB is involved in assessing budget neutrality and other aspects of Pharmacy Plus and mentioned that agency’s role in our draft report. Nevertheless, as OMB officials told us, the authority for section 1115 waiver approval rests with the Secretary of HHS, and responsibility for final Pharmacy Plus approval decisions rests with the Secretary and his designees.

Wisconsin further commented that in criticizing CMS for not obtaining better evidence to support projected savings, our report fails to consider that the reason for demonstration projects is precisely to test such propositions. We maintain, however, that when HHS establishes a new initiative to encourage states to apply for Pharmacy Plus demonstrations, it is the agency’s responsibility to ensure that each demonstration’s evaluation objectives are reasonable, each demonstration’s savings assumptions are realistic and grounded in some evidence, and the evaluations are well planned and data monitoring is established early enough to ensure that the questions can be answered.

Florida commented that its demonstration was predicated upon savings to be achieved over the 5-year life of the program and that its proposed spending limit was close to—less than 1 percent above—the conservative benchmark spending level we calculated. We agree that Florida’s spending limit was relatively close to a limit based on the benchmarks and included that information in the draft report. South Carolina provided technical comments that we incorporated as appropriate.

As arranged with your offices, unless you release its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and others who are interested. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7118. Another contact and other major contributors are listed in appendix VII.

Appendix I: Basic Steps HHS Follows in Setting Pharmacy Plus Demonstration Spending Limits

To achieve budget neutrality, a state’s projected 5-year spending with its Pharmacy Plus demonstration cannot exceed 5-year projected costs without the demonstration.
As a result, the projected costs of a state’s existing Medicaid program for seniors effectively set the spending limit while the demonstration is under way. Calculating this without-demonstration limit (steps 1–5 in fig. 1) starts with a base year, generally the most recent full year for which data are available; calculations for each subsequent year are based on numbers from the previous year. The result limits a state’s Medicaid spending for all services provided to all Medicaid seniors in the state. Calculating projected 5-year with-demonstration costs follows the same steps but, in addition, factors in the estimated number of new beneficiaries receiving only the prescription drug benefit; the costs of providing them the benefit; and the expected savings, mainly from keeping these beneficiaries healthy enough to avoid eligibility for full Medicaid.

Appendix II: Denied, Withdrawn, and Pending Pharmacy Plus Demonstration Proposals as of May 2004

Projected enrollment: Individuals with incomes at or below 300 percent of the federal poverty level (FPL).
Coverage and cost sharing: Sought federal assistance only for administrative costs for a demonstration to make prescription drugs available at the discounted Medicaid rate plus a dispensing fee. State was to contribute $1 toward the cost of each prescription in the first year, increasing to $8 by the fifth year.
Reasons for denial: Exceeded the Pharmacy Plus income limit at or below 200 percent of FPL, provided for only minimal state financial contributions to pharmacists, and did not include the necessary budget neutrality analysis.

Projected enrollment: Seniors and adults with disabilities with incomes at or below 200 percent of FPL or, if income is above 200 percent of FPL, with prescription drug expenses exceeding 40 percent of their incomes.
Coverage and cost sharing: All prescriptions covered by the Medicaid state plan, up to an annual benefit limit of $2,500. Participants to pay co-payments of $5 or 25 percent of the cost per prescription, whichever is greater.
Reason for denial: State already provided drug benefits to the people to be covered under the demonstration.

Projected enrollment: Seniors with incomes at or below 188 percent of FPL.
Coverage and cost sharing: Same broad prescription drug coverage as state Medicaid plan. State proposed three levels of co-payments (exact amounts not specified): generic drugs, designated brand-name drugs, and all other brand-name drugs. Full cost of prescriptions to be covered after participants reached annual out-of-pocket spending limits: for example, a single person would pay the lesser of $2,000 or 10 percent of gross annual income.
Reason for withdrawal: State’s existing pharmacy assistance program for seniors already covered the populations to be included in the demonstration, and without an expansion the state and the Department of Health and Human Services (HHS) could not reach agreement on budget neutrality.

Projected enrollment: Seniors and adults with disabilities with incomes up to 300 percent of FPL.
Coverage and cost sharing: All prescription drugs and insulin and syringes with specified exceptions, such as cosmetics and antihistamines. Annual registration fee of $25 and co-payments of $12 for those with incomes up to approximately 233 percent of FPL and $20 for those above.
State program: Covers low-income seniors and people with disabilities with incomes up to approximately 233 percent of FPL. Demonstration would expand eligibility up to 300 percent of FPL.
Projected enrollment: Seniors and adults with disabilities with incomes at or below 200 percent of FPL.
Coverage and cost sharing: All prescription drugs covered by state Medicaid plan, with $5 co-payment for each prescription. For brand-name drug when generic is available, $5 co-payment plus cost difference between the two.
State program: Covers seniors and adults with disabilities with incomes up to 222 percent of FPL if single and 202 percent if married. Demonstration would cover individuals with incomes at or below 200 percent of FPL.

Projected enrollment: Qualified Medicare beneficiaries age 65 or older with incomes at or below 85 percent of FPL.
Coverage and cost sharing: Would cover two prescriptions per beneficiary per month. Annual $25 enrollment fee and co-payments of $10 for each generic prescription and $20 for each brand-name drug.
State program: No state-funded pharmacy assistance program for seniors at the time of demonstration proposal submission.

Projected enrollment: Seniors with incomes at or below 135 percent of FPL.
Coverage and cost sharing: Same prescription drugs as the state’s Medicaid program, plus insulin, up to annual benefit caps set on a sliding scale: $1,000 for people with incomes up to 100 percent of FPL; $750 for those with incomes up to 120 percent of FPL; and $500 for those with incomes at or below 135 percent of FPL. Participants would pay 50 percent of the discounted program price, which is the same as the Medicaid price, for each prescription.
State program: Existing state-funded pharmacy program for low-income seniors to be covered under the demonstration with no change in eligibility or drug coverage. State indicated that increased enrollment was expected in the demonstration following a change from a mail-in rebate system to a point-of-sale system using a discount card.

Projected enrollment: Seniors age 62 or older and adults with disabilities with incomes at or below 185 percent of FPL.
Coverage and cost sharing: Prescription drugs for specified conditions with 20 percent co-payment for each prescription, or 10 percent if from mail-order sources. Broader range of drugs available for coverage with 20 percent co-payment after $1,000 out-of-pocket expenses.
State program: Demonstration would cover state-funded pharmacy program, expand conditions covered, and add voluntary mail-order purchase.

Projected enrollment: Seniors and adults with disabilities or chronic illness, including chronic mental illness, with incomes at or below 200 percent of FPL.
Coverage and cost sharing: All prescription drugs covered by state Medicaid plan. Annual $25 enrollment fee (waived for first year of program) and co-payments that increase after participants have incurred $1,800 of drug expenses per year under the program, from $2 to $4 for generics and from $8 to $12 for brand-name drugs with no generic equivalent; other brand-name drugs have a $25 co-payment.
State program: The demonstration would cover individuals with incomes at or below 200 percent of FPL from three state-funded pharmacy programs, while individuals in those programs with higher incomes would continue to be state funded. The scope of drugs covered by state programs would be expanded under the demonstration.

Projected enrollment: Seniors with incomes at or below 200 percent of FPL.
Coverage and cost sharing: All prescription drugs and insulin. Co-payments of $5 for generic and $15 for brand-name drugs; annual benefit limit of $1,000 per participant.
State program: Demonstration would cover and expand existing state-funded program by broadening prescription drugs covered from drugs for three specific conditions to those for all conditions, reducing cost sharing, and increasing annual benefit limit from $600 to $1,000.

Projected enrollment: Seniors with incomes at or below 200 percent of FPL.
Coverage and cost sharing: Most prescription drugs covered by state Medicaid plan, plus insulin and syringes. Annual $25 enrollment fee and coinsurance of 20 percent of cost of each prescription up to a monthly cap on a sliding scale determined by household income. An additional co-payment would be charged for brand-name drugs with generic equivalents.
State program: Demonstration would cover existing state-funded pharmacy assistance program with the same eligibility and coverage and expand enrollment.

In March 2003, Massachusetts withdrew two separate section 1115 demonstration proposals from review: a Pharmacy Plus demonstration for seniors (the proposal described in this appendix) and a prescription drug benefit for individuals with disabilities as an amendment to the state’s section 1115 Medicaid managed care demonstration. At the same time, Massachusetts submitted a new proposal—not a Pharmacy Plus proposal—to add a drug benefit for certain seniors and disabled individuals as an amendment to its existing managed care demonstration. In August 2003, that proposal was also withdrawn.

In addition to indicating whether it concurred with our seven recommendations, HHS commented on the report draft’s findings in three areas. HHS disagreed with our conclusion that the four approved Pharmacy Plus demonstrations will not prove to be budget neutral to the Medicaid program and will possibly result in increased federal Medicaid spending. HHS stated that the department takes seriously its responsibility to ensure budget neutrality in the Medicaid demonstrations it approves, noting that it approved four Pharmacy Plus demonstrations while denying two and reviewing but not approving nine other proposals whose budget estimates were not well supported. HHS was concerned that we missed the fundamental purpose of budget neutrality, which HHS says is not to hold states to a formula-driven cap but to estimate the amount of future Medicaid spending. HHS believes that the four approved demonstrations’ spending limits were based on well-supported budget estimates of future state spending and said its policy has never been to hold states to benchmark levels of growth. Those benchmarks are, in HHS’s view, a starting point in projecting how the program will grow, because HHS typically permits states to present rationales for higher growth rates. (In conducting our work, we interviewed HHS and state officials and requested all documents that were considered in their budget neutrality negotiations. Those interviews and documents, which we discussed in the draft report, did not fully support the higher growth rates that were approved.)

HHS also stated that the 0.7 percent per year state historical average enrollment growth rate we cite for South Carolina (table 2) is in error, because its records showed that South Carolina’s historical average for enrollment growth was 1.0 percent. In verifying the state’s historical enrollment rate, we noted that the rate had been “rounded up” to the next full percentage from the 0.7 percent actual historical rate. For consistency with other rates in the table, we did not round it.
We note that enrollment growth rates, in particular, can have a significant multiplier effect on future spending estimates. Further, we note that HHS allowed at least one state to argue for a higher growth rate using broad justifications—such as the effect of the Social Security Administration’s nationwide outreach program for low-income Medicare beneficiaries—that other states could also have used but did not, raising questions of clarity and consistency in both the process and the final decisions. Documentation of HHS’s approval decisions and the basis for approved spending limits could provide a rationale for higher cost and enrollment growth rates and offer guidance and assurance of consistent treatment to other states applying for Pharmacy Plus demonstrations. Absent such documentation, neither HHS nor the states have adequately justified the departures from states’ historical growth rates or the CMS Actuary’s growth projections in establishing states’ spending limits. In its comments, HHS stated that the federal review process for Pharmacy Plus demonstration proposals is similar to the review process for other Medicaid section 1115 demonstrations, indicating that the process is necessarily interactive and involves numerous meetings within the federal team and with states. We acknowledge that the review process for Medicaid section 1115 demonstration proposals benefits from being inclusive and interactive, and we are not suggesting that HHS should establish a new or different review process specifically for the Pharmacy Plus demonstrations. Our concern is that the basis for its decisions and any agreed-upon spending limit be clear and justified, not only for Pharmacy Plus demonstrations but for all section 1115 approvals. As noted in the draft report, the concerns raised by HHS’s approved Pharmacy Plus demonstrations parallel those we have raised about other section 1115 waiver demonstration approvals over the past decade, including concerns about the extent to which the department is protecting the Medicaid program’s fiscal integrity and the need for clear criteria and a public process in reviewing and approving demonstrations. HHS commented that the department plans to continue working with states toward developing useful program evaluations based on consistent data collection as well as sufficient, consistent, and timely monitoring information. HHS also plans to make results of the independent Pharmacy Plus evaluation available on the CMS Web site. With regard to states’ own evaluations, HHS emphasized practical limitations, such as constraints on state financial and staff resources, indicating that while states ideally would develop evaluation plans before implementing demonstrations, in practice such plans often change. HHS commented that it obtains sufficient information for monitoring the demonstrations through telephone contacts and progress reports that respond to an example outline the department provided to each demonstration state. We recognize that state resources are limited, demonstration implementation tends to be a higher priority than evaluation, and the independent contractor evaluation of Pharmacy Plus will provide substantial information. Nonetheless, the lack of action to monitor key information—such as whether demonstration enrollees are being diverted from Medicaid—to plan how their evaluations will be conducted, or to collect data needed for such evaluations suggests a low priority for ensuring that evaluations can and will be done. 
HHS needs to ensure that states provide sufficient, consistent, and timely information both for demonstration monitoring and for determining whether the demonstrations are functioning as intended and to ensure that evaluation plans are put into place.

In addition to overall comments on our draft report contained in its letter and discussed in the body of this report, Wisconsin provided 11 specific comments in an attachment to its letter, which is reproduced on pages 67 through 69. Our responses to Wisconsin’s specific comments are numbered below to correspond with each of the state’s numbered comments.

1. Wisconsin commented that our $416 million figure (the estimated federal share of the difference between HHS-approved and benchmark 5-year spending limits in table 3) exaggerates the federal fiscal effect, because the actual costs of the demonstration’s first years have come in under the projected costs. The state currently projects federal costs for the new drug benefit under the demonstration totaling $250 million over 5 years instead of roughly $537 million, which is the federal share of $919 million approved for the new benefit (see table 1). Although we recognize that the actual costs of Wisconsin’s demonstration to date are less than the costs projected at the time the waiver was approved, our analysis examined the extent to which HHS ensured that the demonstrations—in the form they were approved—maintained spending limits that were budget neutral to the federal government. Because Wisconsin’s approved spending limit represents the total amount the state is authorized to spend over the demonstration’s 5-year life-span, the federal government could be liable for as much as $416 million more than what it would have been liable for had HHS held the state to a spending limit based on benchmark rates (see table 3).

2. Wisconsin commented that it is unreasonable to hold HHS to applying the lower of two benchmark growth rates in calculating budget neutrality: state experience or projections by the Centers for Medicare & Medicaid Services’ (CMS) Actuary for Medicaid costs. The state also expressed concern that our analysis did not incorporate factors other than the benchmarks that affect program growth. We believe that it is reasonable to expect HHS to use objective benchmark growth rates in projecting the Medicaid costs on which it bases spending limits and to document its reasons for deviating from those benchmarks—even if the department regards them as starting points. Otherwise, the department’s rationale for setting higher spending limits (based on higher growth rates) for some states than for others is not apparent to other states involved in waiver negotiations and reviews. As noted in the draft report, HHS responded to Wisconsin’s request for higher growth rates but did not, in our view, adequately document the basis for approving higher rates. In our own analysis of the spending limits, we did not include the additional factors that Wisconsin asserted should raise its spending limits because neither the state nor HHS provided adequate support to justify doing so.

3. Wisconsin stated that our interpretation was unreasonably narrow in not accounting for potential savings accruing to Medicare, as well as to Medicaid, from expanding prescription drug coverage for seniors. We considered savings to Medicaid alone because HHS allows states to include savings only to Medicaid, not Medicare, in determining whether their Medicaid demonstrations are budget neutral.
Wisconsin commented that because its historical cost growth rate has been rising, it was appropriate for HHS to calculate the state’s spending limit using a rate higher than its historical 5-year average. We believe that whenever HHS allows growth rate projections that exceed its benchmarks, it should document the basis for this deviation. 5. Wisconsin mentioned two state programs that it believes will, like the Social Security Administration’s outreach program for low-income Medicare beneficiaries, help increase senior enrollment in Wisconsin’s Medicaid program because they are likely to identify individuals who qualify for full Medicaid benefits. But the state did not quantify or provide any data or other evidence to show the potential effects of these programs or of Social Security Administration outreach. We did not include these effects in our benchmark analysis for the same reason we did not include other factors that Wisconsin believed should raise its spending limit (see our response to comment 2). 6. Wisconsin noted that we found no firm evidence to support the idea that expanding drug coverage would produce significant savings in Medicaid by diverting or delaying Medicaid enrollment. The state asserts that this criticism ignores the central purpose of these demonstrations: to determine if an important health care benefit can be delivered cost effectively. We acknowledge the value of demonstrations to test health care alternatives, but we believe that the case for substantial savings to Medicaid due to expanded prescription drug coverage is not well supported. We also believe that HHS has not done enough to ensure that states develop and implement demonstration evaluation designs. Although we do not dispute Wisconsin’s comment that research suggests coverage of prescription drugs benefits seniors, we believe that demonstrating the effects of drug coverage on avoiding Medicaid enrollment is a separate issue. 7. Wisconsin has interpreted our mention of congressional concern about the extent to which HHS has ensured that section 1115 demonstration waivers promote the goals of Medicaid as implying that the state’s demonstration is not providing critical prescription drugs to a vulnerable elderly, low-income, uninsured population. We did not intend to suggest that Wisconsin’s demonstration is not a valuable benefit to these individuals. We were referring to our earlier work on section 1115 Medicaid and State Children’s Health Insurance Program (SCHIP) demonstrations, which, in addition to raising concerns about HHS’s use of section 1115 waiver authority to approve demonstration spending limits that were not budget neutral, also found that HHS was allowing states to use unspent SCHIP funding to cover childless adults, despite SCHIP’s statutory objective of expanding health coverage to low-income children. 8. Wisconsin commented that we mischaracterized the growth rates approved by HHS for the state’s demonstration as too high. See our response to comment 2. 9. Wisconsin objected to our conclusion that the lack of available information on how these demonstrations are operating compromises attempts to assess whether they are operating as intended. This statement does not apply to any one state alone but synthesizes our findings for the four approved demonstrations taken together. We acknowledge that Wisconsin has been responsive to HHS’s requirements for informative and timely progress reports and have revised our report as appropriate. 10.
Wisconsin stated that its data reporting system allows its staff to monitor that the demonstration is operating as intended. In the draft report, we noted that Wisconsin officials reported having the capability for monitoring. We have not assessed Wisconsin’s monitoring system. 11. Wisconsin commented on the importance of, and Wisconsin’s full participation in, CMS’s contracted independent evaluation as an effective approach to reviewing the agency’s assumptions relating to budget neutrality and program effectiveness. We believe the draft report captured the plans for this independent evaluation, as well as the apparent confusion over each state’s responsibility for conducting its own evaluation. We have revised our report to reflect that Wisconsin officials believe the state is not required to conduct an evaluation, whereas HHS officials told us the state would be required to do so. In addition, Tim Bushfield, Ellen W. Chu, Helen Desaulniers, Behn Kelly, Suzanne Rubins, Ellen M. Smith, and Stan Stenersen made key contributions to this report. SCHIP: HHS Continues to Approve Waivers That Are Inconsistent with Program Goals. GAO-04-166R. Washington, D.C.: January 5, 2004. Medicaid and SCHIP: Recent HHS Approvals of Demonstration Waiver Projects Raise Concerns. GAO-02-817. Washington, D.C.: July 12, 2002. Medicare and Medicaid: Implementing State Demonstrations for Dual Eligibles Has Proven Challenging. GAO/HEHS-00-94. Washington, D.C.: August 18, 2000. Medicaid Section 1115 Waivers: Flexible Approach to Approving Demonstrations Could Increase Federal Costs. GAO/HEHS-96-44. Washington, D.C.: November 8, 1995. Medicaid: State Flexibility in Implementing Managed Care Programs Requires Appropriate Oversight. GAO/T-HEHS-95-206. Washington, D.C.: July 12, 1995. Medicaid: Statewide Section 1115 Demonstrations’ Impact on Eligibility, Service Delivery, and Program Cost. GAO/T-HEHS-95-182. Washington, D.C.: June 21, 1995. Medicaid: Spending Pressures Drive States toward Program Reinvention. GAO/T-HEHS-95-129. Washington, D.C.: April 4, 1995. Medicaid: Spending Pressures Drive States toward Program Reinvention. GAO/HEHS-95-122. Washington, D.C.: April 4, 1995. Medicaid: Experience with State Waivers to Promote Cost Control and Access to Care. GAO/T-HEHS-95-115. Washington, D.C.: March 23, 1995.
Under section 1115 of the Social Security Act, the Secretary of Health and Human Services may waive certain Medicaid requirements for states seeking to deliver services through demonstration projects. By policy, these demonstrations must not increase federal spending. GAO has previously reported concerns with HHS's approval process. GAO was asked to provide information on a new Medicaid section 1115 demonstration initiative called Pharmacy Plus, intended to allow states to cover prescription drugs for seniors not otherwise eligible for Medicaid. GAO reviewed the (1) approval status of state proposals, (2) extent to which HHS ensured that demonstrations are budget neutral, (3) basis for savings assumptions, and (4) federal and state steps to evaluate and monitor the demonstrations. From January 2002 through May 2004, HHS reviewed Pharmacy Plus proposals from 15 states and approved four: Florida, Illinois, South Carolina, and Wisconsin. These demonstrations offer prescription drug coverage to low-income seniors not otherwise eligible for Medicaid. HHS denied proposals from Delaware and Hawaii as inconsistent with demonstration guidelines; most of the rest were not under active review because HHS had not determined how new Medicare prescription drug legislation will affect proposed or operating Pharmacy Plus demonstrations. Over 5 years, the four approved demonstrations will provide prescription drug coverage to half a million low-income people age 65 or older, at a projected cost of about $3.6 billion, of which the federal share would be about $2.1 billion. HHS has not adequately ensured that the four approved demonstrations will be budget neutral, that is, that the federal government will not spend more with the demonstrations than without them. HHS approved the demonstrations' 5-year spending limits using projections of cost and beneficiary enrollment growth that exceeded benchmarks that HHS said it considered in assessing budget neutrality, specifically, states' recent average growth rates and projections for Medicaid program growth nationwide. Neither HHS's negotiations with the states nor its rationale for approving higher growth rates is documented. Using the benchmark growth rates, GAO estimates that none of the four demonstrations will be budget neutral and federal spending may increase significantly, for example, by more than $1 billion in Illinois and $416 million in Wisconsin over 5 years. Unrealistic savings assumptions also contribute to demonstration spending limits that are not likely to be budget neutral. States assumed that keeping low-income seniors healthy--thus preventing them from spending down their financial resources on health services and "diverting" them from Medicaid eligibility--would generate sufficient savings to offset the increased costs of providing a new drug benefit. GAO found neither state experience nor other research to support such savings. Without state-specific evidence, HHS approved savings assumptions for the four states ranging from $480 million to $2 billion per state over 5 years. Had more conservative assumptions been used to estimate demonstration savings, the proposals likely could not have been approved as budget neutral. Efforts by the states and HHS to evaluate and monitor the Pharmacy Plus demonstrations are in their early stages. The four states have taken few steps to put their own required evaluation plans into practice, and an independent evaluation contracted by HHS and started in October 2002 is scheduled to report in September 2005. 
In the interim, HHS has not ensured that all states meet requirements for progress reporting on the demonstrations. The information that states have submitted is often insufficient for determining whether the demonstrations are operating as intended, and this shortcoming will limit HHS's oversight capability.
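To illustrate the benchmark comparison described above, the sketch below projects a 5-year demonstration spending limit under two different annual growth-rate assumptions and computes the federal share of the difference. Every value in it is hypothetical and is included only to show the arithmetic; it does not reproduce the figures HHS or the states actually used.

```python
# Illustrative sketch only: base spending, growth rates, and the federal
# matching rate below are hypothetical, not the actual values for any state.

def five_year_limit(base_spending, annual_growth_rate, years=5):
    """Sum projected spending over a demonstration's life, compounding the
    assumed annual growth rate each year (year 0 is the base year)."""
    return sum(base_spending * (1 + annual_growth_rate) ** year
               for year in range(years))

base = 500.0           # hypothetical base-year spending, in millions of dollars
approved_rate = 0.09   # hypothetical growth rate approved for the spending limit
benchmark_rate = 0.06  # hypothetical benchmark (state history or CMS Actuary)
federal_share = 0.58   # hypothetical federal matching rate

approved_limit = five_year_limit(base, approved_rate)
benchmark_limit = five_year_limit(base, benchmark_rate)

# Potential added federal liability if the state spends up to the approved
# limit rather than a limit built on the benchmark growth rate.
federal_difference = federal_share * (approved_limit - benchmark_limit)

print(f"Approved 5-year limit:   ${approved_limit:,.0f} million")
print(f"Benchmark 5-year limit:  ${benchmark_limit:,.0f} million")
print(f"Federal share of excess: ${federal_difference:,.0f} million")
```

Because the rates compound, a difference of even a few percentage points in the assumed growth rate opens a sizable gap over 5 years, which is why documented justification for any departure from the benchmark rates matters.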
The enactment of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 dramatically altered the nation’s system for providing assistance to low-income families with children. The act replaced the existing entitlement program with fixed block grants to the states to provide Temporary Assistance for Needy Families (TANF). TANF provides about $16.5 billion annually for states to use to help families become self-sufficient, imposes work requirements on adults, and establishes time limits on the receipt of federal assistance. Without adequate transportation, however, TANF recipients and other low-income individuals face significant barriers in finding and keeping jobs. Evidence from metropolitan areas, such as Atlanta, Boston, and Cleveland, shows that TANF recipients disproportionately live in inner-city neighborhoods, far from entry-level employment opportunities located in the suburbs. Although poverty has declined in central cities, urban poverty rates were still twice as high as suburban poverty rates in 1999 (approximately 16 percent versus 8 percent). In addition, available jobs may not be located near central cities. For instance, one study in 2001 found that in Atlanta, Chicago, Detroit, and a number of other metropolitan areas, more than 60 percent of regional employment was located more than 10 miles from the city center. Similarly, the TEA-21 legislation noted that even in metropolitan areas with excellent public transportation systems, less than one-half of the jobs were accessible by transit. This spatial mismatch between low-income individuals and the locations of jobs or other employment-related services may hinder those individuals’ ability to both find and keep jobs. These challenges are especially acute for low-income individuals who do not own cars and for those who generally drive long distances in poorly maintained cars. Data from the 2001 National Household Travel Survey indicated that 26.5 percent of households that earn less than $20,000 do not own a car, as compared with 1.2 percent of households with incomes over $75,000. The lack of adequate transportation makes it difficult to make the multiple trips each day needed to accommodate child care, other domestic responsibilities, and employment-related services. As we reported in 2004, many rural TANF recipients also cannot afford to own and operate a reliable private vehicle, and public transportation to get to and from training, services, and work is often not available. In addition, several caseworkers and service providers in rural areas identified the lack of valid driver’s licenses as a problem for many of their clients. In contrast, a study from the Journal of the Transportation Research Board has shown that access to jobs and job-related opportunities increases the employment and earnings of TANF recipients and reduces TANF-use rates. The JARC program was created in 1998 to support the nation’s welfare reform goals by filling gaps in transportation services. JARC funds can be used to expand existing public transit routes or service hours, among other things (see sidebars). However, JARC projects are not limited to mass transit services; some JARC projects include ridesharing activities and the promotion of transit voucher programs.
DOT’s two major goals for the JARC program are to (1) provide transportation and related services to urban, suburban, and rural areas to assist low-income individuals, including welfare recipients, with access to employment and related services, such as child care and training, and (2) increase collaboration among transportation providers, human service agencies, employers, and others in planning, funding, and delivering those services. For example, Citibus uses JARC funds to subsidize its fixed-route bus service and evening service in the city of Lubbock. According to Citibus, the evening service is a demand-response, shared-ride, curb-to-curb service for the general public between 6:40 p.m. and 10:20 p.m., Monday through Saturday. The fare is $4 per trip or $75 for a 25-ride pass. The evening service is designed to meet the needs of passengers who are transit dependent and who would have no other means of transportation in the evening if the evening service were not provided. Citibus also notes that a majority of evening service passengers work at night and use the service for transportation to and from job sites. done for JARC. Also, the statutory matching requirement for JARC was inconsistent with other FTA programs because JARC projects could receive grants for up to 50 percent of the project’s capital expenses, rather than 80 percent. While we have reported that FTA had met its JARC program goal of improving collaboration between grantees and stakeholders, we also have reported that more collaboration is needed at the federal level to enable grantees to obtain federal funding for JARC projects. TEA-21 required FTA to report to Congress on the results of an evaluation of JARC; however, FTA has struggled to develop comprehensive performance measures that assess a national program when individual programs, operations, and features vary. SAFETEA-LU made a number of changes to the JARC program, the most notable of which was the creation of a formula to distribute JARC funds beginning with fiscal year 2006. Whereas in recent years JARC projects were competitively selected by FTA or congressionally designated for funding, SAFETEA-LU created a formula to distribute funds to states and large urbanized areas. This change is significant because some states and large urbanized areas will receive substantially more funds than under the discretionary program, while others will receive substantially less. In addition, the formula program will result in some areas receiving JARC funds that had not received them in the past. Other JARC changes resulting from SAFETEA-LU include (1) the need for states and large urbanized areas to designate a recipient for JARC funds, competitively select projects for funding, and certify that selected projects came from a locally developed coordinated plan and (2) the ability to use a portion of JARC funds for planning activities. Table 1 compares key JARC provisions under SAFETEA-LU and TEA-21. A key SAFETEA-LU change to the JARC program was the creation of a formula to distribute JARC funds. Under TEA-21, JARC was a discretionary grant program for which FTA competitively selected JARC projects and, more recently, awarded funds for congressionally designated projects. Under SAFETEA-LU, states and large urbanized areas have been apportioned funding for JARC projects through a formula that is based on the relative number of low-income individuals and welfare recipients in each area.
Forty percent of JARC funds each year are required to be apportioned among states for projects in small urbanized and other-than-urbanized areas, and the remaining 60 percent are required to be apportioned among urbanized areas with a population of 200,000 or more. For fiscal year 2006, the allocation was as follows: nonurbanized areas, $27.3 million; small urbanized areas, $27.3 million; and large urbanized areas, $82.0 million. The change to a formula program is significant because some states and urbanized areas will receive substantially more funds than they received under the discretionary program, while others will receive substantially less (see fig. 1). In 22 states, the total amount of JARC funding available decreased from fiscal years 2005 to 2006, when the formula-based program began. The percentage decrease in funding for these 22 states ranged from 33 percent to 88 percent. For example, Alaska’s funding decreased approximately 88 percent, from $1.7 million in fiscal year 2005 to $207,503 in fiscal year 2006. Vermont also saw its JARC apportionments decrease more than 80 percent, from $991,182 to $186,885. The total amount of JARC funding available for 2 states (Michigan and West Virginia) remained approximately the same over the 2 fiscal years, while in 13 states, the total funding increased. The percentage increase for these 13 states ranged from 17 percent to 2,931 percent. Florida, for instance, had its JARC funds increased by more than 1,200 percent, from $594,708 in fiscal year 2005 to $8.3 million in fiscal year 2006. Virginia experienced the greatest percentage increase—more than 2,900 percent—from $84,249 in fiscal year 2005 to $2.5 million in fiscal year 2006. Eighteen states that had not received JARC funds for fiscal year 2005 were allocated funds for fiscal year 2006. These states represent approximately 16 percent of the total JARC funding for fiscal year 2006. (App. II lists the dollar amount of the fiscal year 2006 apportionments for all of the states and large urbanized areas.) Large urbanized areas also saw substantial changes to their JARC funding as a result of formularization. Of the 11 large urbanized areas we interviewed that had received prior JARC grants, 1 saw its JARC funding increase 64 percent between fiscal years 2005 and 2006, 5 had their funds decrease by between 3 and 88 percent between fiscal years 2005 and 2006, and 5 had received JARC grants in the past but not in fiscal year 2005. For example, Tampa/St. Petersburg was apportioned $978,029 in fiscal year 2006, a 64 percent increase from its two fiscal year 2005 grants that totaled $594,708. By contrast, Jefferson County in the Birmingham, Alabama, area had received a JARC grant for almost $3 million in fiscal year 2005, whereas the urbanized area was apportioned $356,107 for fiscal year 2006, a decrease of 88 percent. In addition, the formula program will result in some states and areas receiving JARC funds that had not received them in the past. Eighteen states that did not receive JARC funds in fiscal year 2005 received them in fiscal year 2006. For example, Wyoming, which had not received JARC funds before, was apportioned $202,360 for fiscal year 2006 as a result of the formula. An official from the Wyoming Department of Transportation told us that the state will be able to use the funding to provide vanpool and bus services to the new employment opportunities created by the state’s natural gas and mining operations, many of which are located in areas without public transportation.
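A minimal sketch of how the formula apportionment described above works, assuming the statutory 40/60 split and using made-up eligible-population counts; the actual apportionment relies on counts of low-income individuals and welfare recipients in each area and on any additional adjustments FTA applies, neither of which is reflected here.

```python
# Illustrative sketch only: the area names and population counts are
# hypothetical; the fund total approximates the fiscal year 2006 amount.

total_jarc_funds = 136_600_000  # roughly $27.3M + $27.3M + $82.0M

# Per SAFETEA-LU: 60 percent to large urbanized areas, 40 percent to states
# for small urbanized and other-than-urbanized areas.
large_urbanized_pot = 0.60 * total_jarc_funds
state_pot = 0.40 * total_jarc_funds

# Hypothetical counts of low-income individuals and welfare recipients.
large_urbanized_areas = {"Area A": 250_000, "Area B": 100_000, "Area C": 50_000}

total_count = sum(large_urbanized_areas.values())
for area, count in large_urbanized_areas.items():
    apportionment = (count / total_count) * large_urbanized_pot
    print(f"{area}: ${apportionment:,.0f}")

print(f"Reserved for states (small urbanized and rural): ${state_pot:,.0f}")
```

Because each area’s apportionment is proportional to its share of the counted population, changes in those counts can shift funding among areas even when the national total is unchanged.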
Puerto Rico, also new to JARC, was apportioned $6.6 million under the formula. Many large urbanized areas, such as Fresno, California, will also be receiving JARC apportionments for the first time. Officials from the industry associations and the 29 state and local agencies that we interviewed had mixed reactions to this change. Some of these state and local agencies said the change from a discretionary to a formula program would result in a more equitable distribution of funds or that formula funding would provide a more consistent source of funding than congressional designation. Some of the 29 agencies said that they would likely add or expand transportation services in their area, and a few thought that formularization would result in improved coordination among transportation and human service agencies. By contrast, some of the state and local agencies we interviewed said that the change to a formula program and the associated program requirements they would need to fulfill would increase the administrative burden on their agency, with 3 of these agencies noting that the additional burden might outweigh the benefits of the program. Other agencies said that the change to a formula program would result in a loss of funds to their state or area, while 1 agency and 1 industry association said the change would spread an already small amount of money even thinner. Several agencies also said that they might have to reduce or eliminate services as a result. Still other agencies said that the change to a formula program would have little or no impact on transportation services in their area. Some indicated that the impact would vary by location, while a few other agencies and 1 industry association noted that it is too soon to know the impact. In addition to creating a formula for distributing JARC funds, SAFETEA-LU also requires states and large urbanized areas to fulfill the following three key requirements before applying to FTA to receive their apportioned JARC funding: (1) identify a designated recipient for JARC funds, (2) conduct a competitive process to select projects for funding, and (3) certify that JARC projects were derived from a coordinated public transit-human services transportation plan (see fig. 2). Under SAFETEA-LU, the governor of each state must designate a recipient for JARC funds at the state level to competitively select and award funds for projects in small urban and other-than-urbanized areas within the state. In large urbanized areas, the recipient must be jointly designated by the governor, local officials, and publicly owned operators of public transportation. These designated recipients will then solicit applications and develop and conduct a competitive process for selecting projects for funding. SAFETEA-LU also extended a JARC coordinated planning requirement to additional FTA programs. In the past, JARC projects were required to be part of a coordinated public transit-human services transportation plan; a similar requirement is included in SAFETEA-LU. However, this requirement will apply in fiscal year 2007 to two other FTA programs that provide funding for transportation-disadvantaged populations. In addition, recipients in states and urbanized areas that select JARC projects must now certify that their selections were based on this plan. 
SAFETEA-LU made a number of other changes to the JARC program, several of which address issues that we have raised in past reports on JARC and the coordination of transportation services for transportation-disadvantaged populations. One such change is the ability of a recipient to use up to 10 percent of its JARC allocation for administration, planning, and technical assistance. SAFETEA-LU also expanded the definition of eligible activities to include planning as well as capital and operating activities. In 2004, we reported that a majority of the JARC grantees we interviewed supported this proposed change because planning activities could increase coordination with potential partners. We also reported in 2003 that the overall costs of coordination, which can include additional staff members and staff time needed for maintaining and overseeing coordination efforts, can be significant. According to FTA, the 10 percent of JARC funds that will now be available for administration, planning, and technical assistance can be used for coordination activities, which can help state and local agencies improve services and achieve cost savings. SAFETEA-LU also increased the federal government’s share of capital costs and removed a restriction on the amount of funding available for reverse commute projects to help individuals gain access to suburban employment opportunities. In 2004, we reported that the change in the matching fund requirement for JARC would make that program consistent with the matching requirements for other FTA programs. Under TEA-21, projects could receive a grant for up to 50 percent of the project’s capital expenses, which are used to purchase capital equipment such as buses. Grantees will now be able to receive a grant for up to 80 percent of the project’s capital expenses. FTA officials had told us that this change would lessen any confusion about matching requirements among grant recipients who participate in multiple FTA programs. FTA has been developing guidance to help JARC recipients implement changes to the program resulting from the enactment of SAFETEA-LU, but delays in releasing final guidance will reduce the window of availability of fiscal year 2006 funding. To formulate JARC guidance, FTA has been using an extensive public participation process, including notices, commenting periods, listening sessions, and focus groups. This strategy has provided FTA with an abundance of feedback, and the agency has incorporated these comments into its September proposed final guidance. However, an extension of the public comment period and the volume of public input have also contributed to delays in issuing guidance, which meant that FTA was not able to release final program guidance prior to the beginning of fiscal year 2007. Given that FTA allows 3 years to obligate fiscal year 2006 funds, this delay results in 1 less year for states and urbanized areas to obligate JARC funding. As required by SAFETEA-LU, FTA has used an extensive notice and comment process to gain public input to formulate guidance for the JARC program. In November 2005, FTA published a notice of changes to JARC and other relevant programs. This notice provided information on changes to the JARC program and solicited public comment on aspects of the program, such as technical assistance needs and the coordinated planning requirements.
In addition, FTA held five public listening sessions across the country on a number of programs, including JARC, to obtain comments and input on the issues that should be addressed in future guidance. The agency also convened a focus group to discuss possible changes to the implementation of JARC. In March 2006, drawing on information received in comments and the listening sessions, FTA released interim JARC guidance for fiscal year 2006 and requested comments on its proposed implementation strategies. When the interim guidance and proposed strategies were released, they generated many questions and concerns among stakeholders. FTA received more than 200 comments on its March interim guidance and proposed strategies from state and local departments of transportation, metropolitan planning organizations, private transportation service providers, interest groups, and other JARC stakeholders. FTA officials reviewed this feedback and addressed many of the stakeholders’ issues in the proposed final guidance for JARC, which was released in September. We will discuss these comments in more detail later in this report, and appendix III provides a summary of these comments. FTA has been incorporating stakeholder concerns into its formulation of guidance, but the volume of this input has contributed to delays. FTA officials originally stated that they planned to issue proposed final guidance in the early summer of 2006. However, FTA extended the comment period for the March 2006 interim guidance and proposed strategies from April 21 to May 22 to accommodate additional comments, and more than 100 comments were submitted on or after the last day of the comment period. Because of these additional comments, FTA officials later told us that they expected to issue the proposed final guidance in late July or early August. FTA ultimately issued the proposed final guidance on September 6, 2006 (see fig. 3). Public comments were accepted for 60 days following the release of the September proposed final guidance, after which FTA began reviewing the comments to inform its final guidance. Consequently, FTA was not able to release its final guidance prior to the start of the 2007 fiscal year in October. FTA officials said that they currently plan to release final program guidance in March 2007. FTA’s issuance of final guidance for JARC has been delayed, and this may reduce the time available for recipients to access fiscal year 2006 funding. FTA officials noted that although the notice and comment process has affected the timeliness of the program guidance, they feel that it has enriched the development of guidance. However, the delays associated with taking this approach have reduced the time between issuing the final guidance on how to apply for fiscal year 2006 funds and the deadline for obligating these funds by the end of fiscal year 2008. A number of states and large urbanized areas have proceeded to implement JARC’s requirements using the interim guidance and proposed strategies. Nineteen of the 29 state and local agencies we interviewed in the summer of 2006 were proceeding with the implementation of JARC in the absence of proposed final guidance. Many of these agencies are required to comply with local and state planning and budget schedules, which have compelled them to move ahead with JARC implementation.
FTA officials told us that they encouraged states and urbanized areas to begin implementing changes to the JARC program on the basis of the March interim guidance and proposed strategies, and that FTA is accepting applications for funding prior to issuance of final guidance. In addition, FTA’s March 2006 interim guidance and proposed strategies included a “hold harmless” provision stating that the final guidance requirements would not apply retroactively to grants awarded prior to the issuance of the final guidance. FTA later extended this “hold harmless” provision to grant applications submitted in fiscal year 2007 on the basis of coordinated planning or competitive selection processes that were substantially complete before the issuance of final guidance. Even if the delay in issuing the final guidance does not affect the efforts already under way, states and large urbanized areas will need to keep the remaining window of time in mind, or their ability to secure fiscal year 2006 funding allocated to them could be affected. Through the guidance, FTA implemented a 3-year period to obligate JARC funds for a given fiscal year (the fiscal year of apportionment plus an additional 2 years). Under this view, the availability of fiscal year 2006 funding would expire at the end of fiscal year 2008, and those agencies that chose to wait for the final guidance to be released before applying for fiscal year 2006 JARC funds would have only 2 years in which to obligate those funds. A number of state and local agencies we interviewed indicated that they are waiting on FTA’s final program guidance before moving forward to program implementation. While these areas will benefit from having the final guidance before they submit their JARC applications, given that the guidance was not available by the beginning of fiscal year 2007, they will have less time available to obligate fiscal year 2006 funds. States and large urbanized areas that were apportioned JARC funds have generally begun to implement requirements to receive this funding. As they have done so, they have encountered challenges, most of which FTA has taken steps to alleviate. To date, few states and large urbanized areas have fulfilled the necessary SAFETEA-LU requirements to receive fiscal year 2006 JARC funds, but most states and large urbanized areas we contacted reported that they are in the process of fulfilling these requirements. Officials we interviewed as well as other program stakeholders have encountered several challenges in program implementation, such as questions regarding the selection of the designated recipient in large urbanized areas. FTA responded to most of these issues in its September 2006 proposed final guidance. As we previously noted, states and large urbanized areas must fulfill three SAFETEA-LU requirements prior to applying to FTA to receive JARC funds to award for projects: identify a designated recipient for JARC funds, conduct a competitive selection process, and certify that JARC projects were derived from a coordinated public transit-human services transportation plan. To date, few states and urbanized areas have fulfilled these requirements and received fiscal year 2006 JARC funding. Nationwide, 3 states and 9 of the 152 large urbanized areas that were apportioned JARC funding had received fiscal year 2006 funds as of the end of fiscal year 2006. These obligated funds constitute less than 4 percent of the total fiscal year 2006 JARC funding apportioned to states and large urbanized areas. 
While few states and large urbanized areas have fulfilled the requirements to receive JARC funds, officials in most of the 12 states and 12 large urbanized areas we contacted in June, July, and August 2006 reported that they have begun to implement these requirements to receive funding. Specifically: Identifying the JARC designated recipient. Officials in each of the 12 states we contacted reported that the state had determined its designated recipient for JARC. In 7 of these states, officials reported that the governor had signed a letter to formally designate the recipient, as required by SAFETEA-LU, although not all of these states had submitted the letter to FTA. The other 5 states reported that their formal designation was in-progress. Officials in 9 of the 12 large urbanized areas we contacted also reported that the area had determined which agency would serve as the designated recipient for JARC funds, although none had submitted a designation letter to FTA at the time of our interviews. There is some variety in the agencies that will serve as the designated recipient in the large urbanized areas we contacted. A metropolitan planning organization will be the designated recipient in 4 of the areas we contacted, while a transit agency will be the designated recipient in the other 5 areas. The other 3 areas had not yet decided on the likely designated recipient. Developing coordinated plans. Almost all of the states and large urbanized areas we contacted had taken actions related to the establishment of locally developed coordinated public transit-human services transportation plans. SAFETEA-LU requires states and urbanized areas to certify that they derived JARC projects from these plans. In 11 of the 12 large urbanized areas we contacted, officials reported that they either had determined their strategy for meeting the coordinated plan requirement or had initiated a coordinated planning process. In addition, officials in all 12 states we contacted reported that the state will be involved in coordinated planning for the JARC program, although the extent of their participation varied. For example, one state official we interviewed reported that his agency will lead the coordinated planning process for small urbanized and rural areas within the state, while another state official reported that rural areas will be responsible for developing plans while the state provides assistance on a case-by-case basis. In a majority of the states and large urbanized areas we contacted, officials anticipated completing these plans in early- to mid-2007. While FTA has allowed states and large urbanized areas to apply for up to 10 percent of their apportionment for administration, planning, and technical assistance prior to applying for funding for project implementation, only 1 of the states and 1 large urbanized area we contacted had received this funding, and another large urbanized area we contacted was in the process of applying for the funding. Reasons that officials we interviewed cited for not applying for this funding included the intention to wait until fiscal year 2007 to use the funding, and the use of other funding sources for these activities. Conducting a competitive selection process. Few states and large urbanized areas we contacted had conducted a competitive selection process to award fiscal year 2006 JARC funds. Officials in 2 large urbanized areas reported that they had conducted a competitive selection process to award fiscal year 2006 funds. 
In addition, 3 states we contacted had competitively selected JARC projects, but at the time of our interviews, none had yet applied to FTA for the state’s fiscal year 2006 funding to award for project implementation. Officials in a majority of the remaining states and large urbanized areas anticipated competitively selecting projects in early- to mid-2007. More than half of the states and large urbanized areas we contacted reported that they considered or may consider a project’s prior receipt of JARC funding to some extent in selecting projects for funding. For example, officials from 2 metropolitan planning organizations we interviewed noted that they would consider a project’s prior receipt of JARC funds to continue successful projects. Other criteria that officials anticipated they would consider in selecting projects included the capacity of the organization to administer the funds, whether the project had matching funds, and how the project would address the needs of the community. In comments submitted on FTA’s March interim guidance and proposed strategies and in interviews with selected state and local officials, program stakeholders described several implementation challenges they had encountered and concerns they had as the program moves forward. These issues included questions regarding the designated recipient in large urbanized areas, and challenges in ensuring stakeholder participation and adequate resources for the development of coordinated public transit-human services transportation plans. FTA responded to many of these issues in its proposed final guidance, which it issued in September 2006. Table 2 below summarizes stakeholders’ key implementation challenges and concerns and FTA’s actions to respond to these issues. Selected urbanized area officials we interviewed and stakeholder comments on FTA’s interim JARC guidance raised several questions and issues regarding the designated recipient in large urbanized areas. For example, transit agency officials we interviewed in 2 large urbanized areas were under the impression that their agency’s status as the designated recipient for FTA’s Urbanized Area Formula program (Section 5307 program) automatically made the agency the JARC designated recipient. In addition, two comments on FTA’s March 2006 interim guidance and proposed strategies noted the stakeholders’ belief that SAFETEA-LU identified existing Section 5307 designated recipients as the intended JARC designated recipients. FTA officials acknowledged that on the basis of the interim guidance and proposed strategies, there was some confusion about the process to designate the JARC recipient. To clarify this issue, the preamble of the September proposed final guidance notes that in large urbanized areas, a new designation letter shall be issued for the JARC program, regardless of whether the designated recipient is the same as or different from the Section 5307 designated recipient. Officials we interviewed in large urbanized areas and several stakeholder comments on FTA’s interim guidance and proposed strategies also raised the issue of a potential conflict of interest with respect to the designated recipient in large urbanized areas, and noted uncertainty about the ability of designated recipients to allow other organizations to conduct the competitive selection process.
In its March interim guidance and proposed strategies, FTA noted that many comments on its November 2005 Notice of Program Changes expressed concern that a conflict of interest could exist in large urbanized areas when the designated recipient, specifically a provider of transportation services, conducts the competitive selection process and is eligible for funding. In addition, officials at 12 of the 17 agencies we contacted in large urbanized areas believed, and 18 stakeholder comments on FTA’s March 2006 interim guidance and proposed strategies indicated, that there would be a potential conflict of interest or the appearance of a conflict of interest in this arrangement. Eight other stakeholder comments stated that a transparent competitive selection process or the involvement of metropolitan planning organizations in the selection process would ameliorate any conflict-of-interest concerns. While officials we interviewed in 2 large urbanized areas raised the possibility of the designated recipients allowing another organization to conduct the competitive selection process to avoid potential conflict-of-interest issues, officials in 1 of these areas said that prior to the release of FTA’s proposed final guidance, they received inconsistent information from FTA staff regarding this issue. The ability of designated recipients to both conduct the competitive selection process and compete for funds through this process does present potential conflict-of-interest concerns. However, FTA outlined a number of strategies and controls in its JARC guidance that, if adhered to by designated recipients, should address many of these potential conflict-of-interest concerns and minimize perceptions of unfairness in the competitive selection process. These controls relate to GAO’s internal control standards for the federal government, one of which addresses the policies and procedures in place within an agency to ensure proper stewardship and accountability for government resources. These strategies and controls were as follows: Selection of the designated recipient. To address stakeholders’ concerns about a potential conflict of interest, FTA recommended in its March interim guidance and proposed strategies that the designated recipient not be a provider of transportation services. FTA noted that it received a wide range of comments on this proposal, and, in response, the September proposed final guidance stated that the designated recipient may be the same as the area’s existing Section 5307 program designated recipient or that another agency may be a preferred choice based on local circumstances. Strategies for a transparent competitive selection process. FTA’s interim guidance and proposed strategies and proposed final guidance advised that the designated recipient follow a simple and straightforward selection process that is transparent, and provided several potential strategies for areas to consider when implementing a competitive selection process. These strategies include ensuring greater inclusion at the onset of the coordinated planning process to alleviate concerns about a level playing field, and ranking projects using methods such as third-party review, peer review, or review by a panel of planning partners. Allowing other organizations to conduct the competitive selection process.
While officials we interviewed in 1 large urbanized area said that FTA officials had previously provided conflicting information about the ability of designated recipients to allow other organizations to conduct the competitive selection process, FTA’s September proposed final guidance affirms that designated recipients can work with other organizations to conduct the competitive selection process to alleviate conflict-of-interest concerns. FTA’s proposed final guidance also notes that the SAFETEA-LU requirement for designated recipients to conduct the competitive selection process in cooperation with the metropolitan planning organization in large urbanized areas should mitigate this potential conflict-of-interest concern. FTA oversight of the competitive selection process. Once designated recipients select projects and submit applications to FTA for funding for project implementation, FTA officials reported that they will review the applications to ensure that areas used a competitive process to select projects. In addition, at the time of submitting an application for funding, designated recipients are required to certify that they distributed funds on a fair and equitable basis, and FTA has advised that a transparent and inclusive competitive selection process should serve as the basis for this certification. State and local officials we interviewed and stakeholder comments on FTA’s interim guidance and proposed strategies also cited several challenges and concerns related to the development of the coordinated public transit-human services transportation plans. These issues included participation in the planning process and the amount of time needed to develop coordinated plans. For example, officials in 3 large urbanized areas and 5 states we contacted noted challenges in getting other organizations, such as human service agencies, to participate in the planning process. One of these officials noted her concern that organizations that do not want to receive FTA funding will have no reason to participate in the planning process. In addition, five comments on FTA’s interim guidance and proposed strategies suggested that federal agencies that provide other sources of federal funds for transportation services should require their grantees to participate in coordinated planning efforts. FTA officials reported that they have been working with members of the Federal Interagency Coordinating Council on Access and Mobility to encourage federal grantees that receive other sources of human service transportation funding to participate in coordinated transportation planning. Although it will take time to put coordination provisions in place within each agency, FTA officials said they were encouraged by this progress. Program stakeholders also expressed concern about their ability to develop coordinated plans within FTA’s time frames. For JARC, the requirement to derive projects from a coordinated public transit-human services transportation plan was in place for fiscal year 2006 and applied to the New Freedom program and the Elderly Individuals and Individuals with Disabilities program beginning in fiscal year 2007. Seven stakeholder comments on FTA’s interim guidance and proposed strategies noted that it would be difficult to develop a plan within this time frame. Officials in 2 large urbanized areas we contacted shared similar concerns. 
In its proposed final guidance, FTA focuses on a phased-in approach to the development of coordinated plans through fiscal year 2007, with full implementation of the coordinated planning requirements for projects funded in fiscal year 2008. FTA officials also said they are encouraging areas to build on existing planning efforts to fulfill SAFETEA-LU requirements. State and local officials we interviewed cited fewer challenges related to the competitive selection of JARC projects. This may be because few state and local agencies we contacted had completed a competitive selection process, and many did not anticipate selecting projects until early- to mid-2007. However, officials in 1 large urbanized area and 1 state we contacted noted the difficulty in initiating a competitive selection process without additional FTA guidance. One of these officials said that they did not want to have to begin a new process if their actions contradicted any future FTA guidance. Another state official with whom we spoke said that he would like FTA to clarify questions his agency had about the competitive selection process, such as what it means to certify that the state derived projects that were based on a coordinated plan. FTA’s proposed final guidance provided recipients with additional information on how to certify that they selected projects that were based on a coordinated plan. In addition to challenges related to the designated recipient, coordinated planning, and competitive selection of JARC projects, state and local officials we interviewed also cited implementation challenges related to funding and their communication with FTA. Officials in nearly half of the states and large urbanized areas we contacted did not believe that the 10 percent of an area’s JARC apportionment available for administration, planning, and technical assistance would be sufficient for these activities, although the reasons for these beliefs varied. For example, officials in 1 state and in 1 large urbanized area did not believe these funds would be sufficient because they would incur higher initial costs to meet the new program requirements, while officials in 2 other states and 1 large urbanized area did not believe this funding would be sufficient due to the costs of developing coordinated plans. FTA received a number of comments about funding for administration, planning, and technical assistance, and the September proposed final guidance informs recipients of other sources of FTA funding that are available for planning activities. These sources include funding from FTA’s Urbanized and Non-urbanized Area Formula programs as well as its Metropolitan and Statewide Planning programs. The proposed final guidance also provides that recipients may combine the administrative funding available under the Elderly Individuals and Individuals with Disabilities (known as Section 5310), JARC, and New Freedom programs to develop a single coordinated public transit-human services transportation plan. In addition, the proposed final guidance notes that the 10 percent of an apportionment available for these activities is not specific to one year, and that recipients may roll over administrative funding into a subsequent year for the anticipated future costs of projects. Lastly, the proposed final guidance notes that planning activities are an eligible expense for the JARC program, beyond the 10 percent of an apportionment available for administration, planning, and technical assistance.
Several officials we interviewed also cited challenges in meeting the JARC program matching requirements. Under SAFETEA-LU, grantees may use federal JARC funding for 80 percent of capital expenses and 50 percent of operating expenses. Matching funds may come from other federal programs that are not administered by DOT, such as the Temporary Assistance for Needy Families (TANF) block grant, as well as from noncash sources, such as in-kind contributions and volunteer services. One state official we interviewed, whose agency previously received JARC funding, noted that the agency had struggled in the past to secure matching funds and, as a result, has yet to spend all of its past federal JARC funding. A metropolitan planning organization official we interviewed noted that the ability of smaller nonprofit organizations in her area to secure the required matching funds was an issue, because these organizations have limited resources to use for matching funds. In addition, 1 state official and officials in 1 large urbanized area said that their areas anticipated or had seen cutbacks in matching funding they had received in the past from agencies that provided funding from programs such as TANF. As a result, these officials said they will have less state and local matching funding available for projects. Although the JARC matching requirements are set in the SAFETEA-LU legislation, FTA’s proposed final guidance provides information on potential sources of matching funds for JARC projects. While several officials we interviewed had positive comments about FTA’s efforts to solicit public input as it implements changes to JARC and other programs, some officials also noted challenges they had encountered in communicating with FTA regarding JARC implementation. Receiving consistent information from FTA was one challenge cited by officials we interviewed. As we previously noted, officials from one metropolitan planning organization reported that they received inconsistent information from different FTA staff in response to a question about the responsibilities of the designated recipient. In addition, officials we interviewed in 1 state said they received different answers regarding the timeline for completing a coordinated plan. Other officials we interviewed cited challenges in receiving information to answer implementation questions. Officials in 2 states we contacted noted difficulties in getting specific answers to their implementation questions, with 1 state official noting that with new programs, FTA should be prepared to answer specific questions about program implementation instead of providing general information. Although FTA revised its original JARC evaluation and oversight proposals to respond to current and past concerns raised by program stakeholders, gaps in monitoring may limit FTA’s ability to assess whether the program is meeting its goals. In previous work, we and others have reported that FTA could better measure and communicate the outcomes of the JARC program to program stakeholders, including Congress and JARC grantees. To address these issues, FTA sought public comment on four new performance measures—one specifically for JARC and three crosscutting measures—and an existing data collection mechanism to track JARC performance data, the National Transit Database (NTD). 
However, several program stakeholders noted various obstacles to collecting reliable data on FTA’s proposed measures, and some state and local officials we interviewed reported that it would be challenging to use the NTD system. In addition, state and local officials expressed ongoing concerns about the lack of feedback on their performance after submitting their data to FTA. In response to these concerns, FTA clarified the performance measures, introduced a plan to use its existing grant management system for collecting performance data, and proposed to be more explicit with grantees about how reported JARC performance data were being used. FTA officials also reported that they are testing the JARC performance measure and obtaining baseline data for use in the required evaluation of the JARC program, which will be submitted to Congress in August 2008. Even if FTA resolves its performance measurement and reporting issues, gaps in monitoring may continue to limit FTA’s ability to evaluate and oversee the JARC program. FTA plans to use existing oversight processes for monitoring JARC recipients; however, FTA officials also noted that SAFETEA-LU did not specifically provide project management oversight funds for the JARC program. As a result, FTA officials are looking for alternate sources of funding—such as the agency’s administrative funding—to provide program oversight for JARC. The need for agencies to measure performance is based upon the Government Performance and Results Act of 1993 (GPRA), which was intended to improve federal program effectiveness, accountability, and service delivery. GPRA helped create a governmentwide focus on results by establishing a statutory framework for performance management and accountability, with the necessary infrastructure to generate meaningful performance information. This act required federal agencies to develop strategic plans and annual performance plans, link them with outcome-oriented goals, and measure agency performance in achieving these goals. The Office of Management and Budget also plays a role in GPRA implementation and reviews agencies’ strategic plans, annual performance plans, and annual performance reports. Overall, GPRA’s requirements have laid a solid foundation for results-oriented agency planning, measurement, and reporting by providing more objective information on achieving goals and on the relative effectiveness and efficiency of federal programs and spending. Past GAO reports on performance measurement and performance budgeting have noted the importance of using outcome-oriented measures to assess the extent to which a program achieves its objectives on an ongoing basis and the importance of linking resources to results. However, our previous reviews of the JARC program have found that FTA lacked the data needed to evaluate and report on the program as required by Congress. For example, in May 1998, we recommended that FTA establish specific objectives, performance criteria, and measurable goals to assess how the JARC program would improve mobility for low-income workers. In response, FTA instituted an evaluation plan and selected access to employment sites as the sole measure of program success. However, we later found that this measure did not address key aspects of the program, such as increasing collaboration between grantees and stakeholders and establishing transportation-related services that help low-income individuals.
We also reported in August 2004 that grantees found it difficult to obtain the data requested by FTA, such as the number of potential employers reached by JARC services. Furthermore, the grantee reports used to evaluate the JARC program contained self-reported information, which FTA did not verify. As a result, we stated that FTA’s 2003 evaluation of JARC was limited because it lacked consistent, generalizable, and complete information, thereby making it difficult to use these data to draw any definitive conclusions about the program as a whole. In recognition of these concerns, FTA began considering ways to improve its evaluation process, such as revising the JARC performance measures. In previous reports on the JARC program, we and others have also highlighted issues with FTA’s reporting mechanism and lack of communication with grantees about their performance. Performance reporting is a critical element for establishing accountability and evaluating whether and to what extent program managers are meeting the goals contained within agency strategic and performance plans. In 2004, we reported that JARC grantees were required to report quarterly data using a database that many found to be burdensome. We also noted that specific information in FTA’s JARC evaluation may not have been consistent because grantees did not follow a standardized reporting system. Our past work on data quality has highlighted the importance of ensuring that reported performance data are sufficiently credible for decision making. In a 2003 FTA-contracted study of JARC evaluation efforts, some grantees recommended that FTA allow agencies to report performance data using existing systems, such as the NTD, and that the reporting structure be flexible enough to enter qualitative or narrative information to reflect the different types of services provided by JARC programs. Grantees also stated that they would be interested in receiving feedback from FTA on the JARC evaluation process and stressed the importance of communicating program findings to help them assess and improve their performance. We have previously identified, as a critical practice for managing program results, distributing information in a form and time frame that allows managers, staff, and external stakeholders to perform their duties and that provides them with a basis for focusing their efforts and improving performance. FTA’s extensive public participation process helped to inform changes made to the proposed final guidance issued in September 2006, including the introduction of new performance measures to evaluate the JARC program as well as a different reporting mechanism for collecting data. In its March 2006 interim guidance and proposed strategies, FTA had proposed using one JARC-specific measure and three crosscutting measures to assess the JARC program’s outcomes and impacts: Cumulative number of jobs accessed (JARC-specific): Cumulative number of jobs reached through the provision of JARC-related services for low-income individuals and welfare recipients. Efficiency of operations (crosscutting measure): Number of communities and states reporting the use of shared resources between different agencies and organizations so they can provide more rides for people with disabilities, older adults, and individuals with lower incomes at the same or lower cost. 
Program effectiveness (crosscutting measure): Number of communities that have a simple point of entry-coordinated human service transportation system for people with disabilities, older adults, and individuals with lower incomes so they have easier access to transportation services. Customer satisfaction (crosscutting measure): Level of customer satisfaction reported in areas related to the availability, affordability, acceptability, and accessibility of transportation services for people with disabilities, older adults, and individuals with lower incomes. According to FTA, the JARC-specific measure was intended to reduce the numerous JARC data requirements, while the three crosscutting measures reflected SAFETEA-LU’s emphasis on the coordination of human services transportation and would apply to the JARC, New Freedom, and Elderly Individuals and Individuals with Disabilities (Section 5310) programs. In addition, FTA proposed to address past concerns regarding the burden of collecting program data on JARC by using existing mechanisms, including the NTD, which is used to track operational, service, and financial data on other transit formula programs. In both the docket comments and in our interviews, program stakeholders cited potential obstacles to collecting accurate data on the number of jobs accessed measure, such as a lack of guidance from FTA and limited resources (see table 3 for a summary of stakeholder concerns regarding the proposed measures). For example, 7 state and local officials we interviewed reported that FTA’s definition for the number of jobs accessed was unclear, or that they did not know how to determine this measure. Specifically, one metropolitan planning organization official wanted FTA to clarify whether the jobs accessed measure referred to the number of low-income people using a JARC-funded service to travel to their jobs or to the total number of jobs available in the area being served by a JARC-funded service. Three state transportation officials that we contacted were also concerned that they did not have sufficient staff to conduct the required data collection. Program stakeholders expressed similar concerns in their docket comments. For example, 2 stakeholders noted in their written comments to FTA that collecting data on the proposed performance measures may be overly burdensome for small agencies. FTA officials acknowledged that there was confusion among program stakeholders about the JARC-specific measure and how it should be measured, and they subsequently clarified the original proposal on the basis of the comments received. FTA’s proposed final guidance stated that the JARC-specific measure would assess the following: Job access: The increase in access to jobs related to geographic coverage and/or service times that impact the availability of transportation services for low-income individuals as a result of the JARC projects implemented in the current reporting year. Rides provided: The number of rides provided for low-income individuals as a result of the JARC projects implemented in the current reporting year. According to FTA, the jobs accessed measure is a measure of “system coverage,” describing the number of jobs reachable by JARC-funded services. FTA also clarified that the new measure is not a determination of an actual number of riders who are getting and going to jobs, which was a concern raised by some program stakeholders in their docket comments and in our interviews. 
FTA also intends to monitor JARC service use by measuring the number of rides actually provided by the JARC service annually. In addition to clarifying the JARC measure, FTA is also taking steps to test its JARC performance measure and to collect baseline data for its upcoming evaluation of the program. For example, FTA has hired a contractor to examine the feasibility of collecting data for the increase in the jobs accessed measure and is currently analyzing the strategies for capturing this more precise measure and testing its implementation. FTA also is soliciting public comments on the revised JARC performance measures, which will be used to formulate the final JARC guidance. Once the measures are finalized, FTA will test the JARC-specific performance measure and plans to obtain baseline data for fiscal year 2006 and beyond using JARC grants active during fiscal year 2005. FTA officials plan to use these data to conduct the required evaluation of the JARC program, which must be submitted to Congress in August 2008. FTA’s proposed final guidance states that it will conduct independent evaluations of the JARC program focused on specific data elements to better understand the implementation strategies and related outcomes associated with the program. This approach is supported by our recent report on grants management, in which we recommended that performance data should be tested to make sure they are credible, reliable, and valid. An FTA official we spoke with told us that FTA hopes to have a formal reporting methodology targeted to be in place by spring 2007. Program stakeholders also reported potential difficulties with FTA’s proposed crosscutting national coordination measures to assess program performance. (See table 3 for a summary of stakeholder concerns regarding the proposed performance measures.) For example, some program stakeholders stated in their docket comments that the performance measures would be too prescriptive and would stifle local creativity, while 13 stakeholders recommended that performance measures should be developed locally to address local conditions and needs. Specifically, two commenters noted that FTA’s proposed crosscutting performance measures did not necessarily acknowledge the differences in providing JARC services in urbanized areas compared with rural areas, where the number of transit providers may be limited and the routes typically serve fewer people at a higher cost. In addition, two local officials and one state department of transportation official that we interviewed reported that measuring customer satisfaction would likely require administering a survey, which could be expensive or labor-intensive. In recognition of these concerns, FTA did not include the three crosscutting coordination measures in its proposed final guidance, noting instead that individual communities will have the option to include evaluation strategies for their own activities. We have previously observed that designing results-oriented performance measures for intergovernmental programs, such as JARC, is complicated by the broad range of objectives identified for some programs and the discretion states and localities have in achieving those objectives. According to FTA, the crosscutting measures were created in response to recommendations stemming from the Interagency Coordinating Council on Access and Mobility’s United We Ride initiative to develop a national performance measure for coordination. 
After reviewing the comments, however, FTA officials that we interviewed told us that they realized the difficulty of devising national measures and determined that measuring coordination should be done at the local level. FTA also clarified that the intent of the crosscutting measures was to capture a national picture of JARC-funded services, rather than compare individual communities or service systems. However, FTA officials reported that they will encourage grantees to develop additional measures for evaluating whether their programs are meeting their intended state or local goals. This proposal is supported by our past work, in which we reported that performance measures should tell each organizational level how well it is achieving its goals. In addition, the United We Ride initiative is developing a tool and plans to provide technical assistance to assist with these efforts in the future. Program stakeholders expressed mixed opinions about FTA’s proposal to use the NTD to streamline data collection. For example, 5 state and local agencies that we interviewed were generally positive about FTA’s proposal to use the NTD for JARC reporting, in part because they were familiar with using this system to collect and report data on other FTA programs. However, 5 agencies that we interviewed told us that small and rural agencies may find it difficult to use the NTD for collecting and managing data. In addition, two agency officials we interviewed reported that NTD can be cumbersome to use, while two program stakeholders noted in their docket comments that smaller agencies may need staff training to use the NTD. Due in part to the comments received, FTA decided not to use the NTD for JARC reporting. FTA told us that while the NTD is in place, it is currently not set up or designed to collect the qualitative measures that are important for understanding the trends related to human service transportation. FTA proposed that JARC grantees report their data as an attachment to their annual report submissions in the Transportation Electronic Award and Management (TEAM) system, which the agency uses to manage and track its grants. One FTA official told us that TEAM would be better suited for collecting JARC data because it can track qualitative information, and that JARC grantees that receive funding through other FTA programs would be familiar with how to collect and report data using TEAM. Finally, state and local officials that we interviewed also expressed ongoing concerns about the lack of feedback on their JARC performance after they report data to FTA, which may limit their ability to manage program performance. For example, 19 of the 23 states and large urbanized areas that had received JARC grants in the past commented that FTA had not provided them with any feedback on their performance data after it was submitted. Three state and local officials also told us that they would like to know how the performance data they report is being used by FTA. Meanwhile, two state transportation officials and two local officials said that receiving feedback from FTA would be helpful to know how they are performing and to make improvements or corrections. Previous reports by GAO and others have found that providing frequent and effective feedback on performance information can enhance its use for decision making. 
According to FTA, the JARC data collected to date were not intended to evaluate individual projects, but rather were geared toward assessing how the program was achieving goals nationally, as required by GPRA and the Office of Management and Budget. However, in a recent interview, FTA officials said that they would be more explicit with grantees about how they are using JARC performance data, and that they are open to exploring the possibility of posting this information on the FTA Web site in the future. Even if FTA resolves its performance measurement and reporting issues, gaps in its plan for monitoring JARC recipients may continue to limit FTA’s ability to evaluate and oversee the program. While FTA has proposed using existing oversight processes to monitor JARC recipients, these oversight processes do not explicitly include provisions for oversight of the JARC program. Furthermore, FTA’s proposed process for oversight of agencies that do not fall under existing processes could lead to inconsistent oversight of JARC recipients. FTA does not have a complete plan for oversight of the JARC program. Monitoring of policies and procedures to ensure proper stewardship of government resources is an important aspect of internal control. FTA is responsible for ensuring that grantees follow federal mandates along with statutory and administrative requirements. In its March 2006 interim guidance and proposed strategies, FTA stated that it would monitor implementation of JARC and other programs using pre- and post-award review processes used for grant applications and grant management, including self-certifications, progress reports, and site visits. FTA’s proposed final guidance states that FTA will also use existing oversight processes for other FTA programs to conduct JARC oversight. These processes are as follows: State Management Reviews: These reviews assess states’ implementation and management of the Elderly Individuals and Individuals with Disabilities program (Section 5310) and the Nonurbanized Area Formula program (Section 5311). Triennial Reviews: These reviews assess grantees receiving Urbanized Area Formula program (Section 5307) grants. These grantees are primarily transit agencies and some metropolitan planning organizations. FTA has proposed using these processes—which FTA uses for oversight of programs that award funding to states, transit agencies, and metropolitan planning organizations—to oversee the JARC program because they should cover most JARC designated recipients. FTA’s proposed final guidance also notes that JARC designated recipients that are not a state or a Section 5307 recipient may be subject to periodic spot reviews of their administration of the program. However, two issues with FTA’s monitoring proposal may result in gaps in its oversight of the JARC program. First, the use of periodic spot reviews of designated recipients that are not states or Section 5307 recipients may result in inconsistent monitoring of JARC recipients. For example, while some metropolitan planning organizations that serve as JARC designated recipients also receive Section 5307 funding and will be subject to FTA oversight through its triennial review process, other metropolitan planning organizations serving as JARC designated recipients do not receive Section 5307 funding, and will be subject to FTA oversight through its proposed periodic spot reviews. 
It is not clear from FTA’s proposed final guidance if these periodic reviews will be more or less frequent than the 3-year cycle of FTA’s triennial reviews and state management reviews. As a result, JARC designated recipients may be held to different oversight standards on the basis of what other types of FTA funding they receive. Second, FTA’s existing oversight processes currently do not include provisions for JARC program oversight. For example, FTA’s State Management Review guidance, which contains information on the Section 5311 program and the Elderly Individuals and Individuals with Disabilities program, does not include JARC program requirements and information, such as the requirement to distribute funds on a fair and equitable basis. We previously noted that this requirement would be important for recipients to adhere to in order to address potential conflict-of-interest concerns. While FTA officials said that they have begun to work to incorporate JARC into their existing oversight processes, they noted that SAFETEA-LU omitted JARC from the list of programs for which FTA may specifically use appropriated funds to obtain contractual support for project management oversight and review of major capital projects. They are presently researching other sources of funding—such as the agency’s general administrative funding—that can be used to ask detailed programmatic questions of JARC recipients and to conduct site visits and project reviews. FTA officials also said that they currently do not know how much of a problem this will pose, because they do not yet know which entities will be the designated recipients for most of the areas receiving JARC funds. As a result, they are uncertain of how many JARC designated recipients will already be covered by existing oversight processes because they receive funds for other FTA programs, such as Section 5307. Given this issue, FTA officials said that they were still determining the frequency and level of JARC oversight that could be supported with their current resources. Until it develops a complete plan for implementing and funding JARC oversight, FTA’s key oversight processes will not provide assurance that recipients are meeting program requirements. FTA has made progress in implementing changes to the JARC program, gathering extensive public input to develop program guidance for states and large urbanized areas. However, FTA lacks an important element of program accountability and performance measurement for the JARC program, specifically related to monitoring. FTA officials have proposed to use the agency’s oversight mechanisms for other FTA programs for JARC monitoring, but acknowledged that they have not finalized how this will work. Without the inclusion of JARC program requirements—such as the fair and equitable distribution of funding—in these existing oversight processes, FTA will have limited assurances that JARC recipients are administering the program in accordance with FTA’s requirements and are meeting program objectives. In addition, FTA has proposed an alternative oversight process for recipients that are not covered by its existing Triennial Reviews and State Management Reviews, but FTA has not specified how often these recipients will be subject to its oversight, which may result in inconsistent or infrequent oversight of JARC recipients. 
To establish adequate and consistent oversight processes that will enable FTA to evaluate and oversee JARC projects and determine whether they are meeting JARC program goals, we recommend that the Secretary of Transportation direct the Administrator, FTA, to take the following two actions: Develop a plan for including the JARC program in Triennial Reviews and State Management Reviews, and update monitoring guidance and information accordingly. Specify in the JARC final guidance how frequently FTA will perform spot reviews of designated recipients that are not subject to FTA’s Triennial Reviews and State Management Reviews, and make the interval for conducting spot reviews consistent with the 3-year cycles for Triennial Reviews and State Management Reviews, or more frequently if FTA determines it necessary. We provided a draft of this report to the Department of Transportation for review and comment. Officials from the department and FTA generally agreed with the report’s findings and said that they would consider the recommendations as they move forward in implementing the JARC program. Although FTA officials recognized the need for program oversight and indicated that they are already taking steps to incorporate the JARC program into their existing review processes, they reiterated their concerns that SAFETEA-LU did not provide them with a specific source of oversight funding for the JARC program. As a result, they are seeking other sources of funding—such as the agency’s general administrative funds—to carry out this activity. Finally, FTA officials provided technical clarifications, which we incorporated in the report as appropriate. We are sending copies of this report to congressional committees with responsibility for transit issues; the Secretary of Transportation; the Administrator, Federal Transit Administration; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me on (202) 512-2834 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This report addresses the following four objectives: (1) changes that were made to the Job Access and Reverse Commute (JARC) program as a result of the Safe, Accountable, Flexible, Efficient Transportation Equity Act – A Legacy for Users (SAFETEA-LU); (2) progress that the Federal Transit Administration (FTA) has made in implementing these changes; (3) the extent to which states and large urbanized areas have implemented changes to the JARC program, and challenges they have encountered; and (4) whether FTA’s proposed strategy for evaluating and overseeing the JARC program will allow the agency to assess the extent to which the program is meeting its stated goals. To identify the changes that SAFETEA-LU made to the JARC program, we reviewed the provisions of SAFETEA-LU and of its predecessor, the Transportation Equity Act for the 21st Century (TEA-21), dealing with the JARC program. We also reviewed previous GAO reports on JARC and interviewed officials from FTA’s headquarters and one regional office. 
To summarize financial information for JARC for fiscal years 1999 through 2009, we gathered and analyzed data from FTA’s Web site and agency officials on dollar amounts authorized, appropriated, rescinded, obligated, and unobligated. To assess the reliability of these data, we interviewed FTA officials about FTA’s policies and procedures for data collection and verification. Specifically, we asked them whether their policies and procedures had changed significantly since we reviewed them for our 2004 report on JARC. FTA officials told us that there were no significant changes in their data collection and verification procedures for JARC information. We also compared these data with data published in the Federal Register and data on FTA’s Web site for obvious errors in completeness and accuracy. Therefore, we determined that the FTA information presented was sufficiently reliable for the purposes of this report. To describe the progress FTA has made in implementing changes to JARC, we interviewed FTA officials and officials from industry associations, including the American Association of State Highway and Transportation Officials, the American Public Transportation Association, the Association of Metropolitan Planning Organizations, the Community Transportation Association of America, and the National Association of Regional Councils, to obtain their views on FTA’s progress in implementing the program changes. We also reviewed FTA’s JARC interim program guidance for fiscal year 2006 and proposed strategies for fiscal year 2007 (issued in March 2006), and its proposed final guidance for fiscal year 2007 (issued in September 2006). To describe the extent to which states and large urbanized areas have implemented changes to the JARC program and any challenges they have encountered in doing so, we obtained data from FTA officials on the number of states and large urbanized areas that had officially designated a recipient for JARC funds, selected projects and applied for funding, and obligated funds. To determine whether FTA’s proposed strategy for evaluating and overseeing the JARC program will allow the agency to assess whether the program is meeting its stated goals, we interviewed FTA officials about their performance measurement and evaluation plans. We reviewed FTA’s earlier JARC program evaluation, Job Access and Reverse Commute Program: Report to Congress (May 2003). We also reviewed relevant legislation, FTA program guidance, Office of Management and Budget circulars and guidance on performance measurement, prior GAO JARC reports, and GAO reports and guidance on performance measurement and program evaluation. We did not evaluate FTA’s proposed performance measures, because those measures were too preliminary at the time of our review to allow meaningful comparison with our criteria for successful performance measures. In addition, FTA had recently hired a contractor to evaluate the feasibility of collecting data for one of the proposed measures. To address the third and fourth objectives, we also designed and conducted semistructured telephone interviews with officials from 24 of the 209 states and large urbanized areas that were apportioned fiscal year 2006 JARC funds. 
The interviews were designed to gain state and local officials’ perspectives on a number of topics, including the effect of changing from a discretionary program to a formula-based program on JARC services in their area; the process of selecting a designated recipient, developing a coordinated public transit-human services transportation plan, and conducting a competitive selection process for JARC projects; FTA’s proposed performance measures and program oversight mechanisms for JARC; and any challenges they may have encountered in implementing changes to the JARC program. After conducting the interviews with all 24 states and large urbanized areas, we used a content analysis to systematically determine the state and local officials’ views on key interview questions and identify common themes in their responses. Two analysts reached consensus on the coding of the responses, and a third reviewer was consulted in case of disagreements, to ensure that the codes were reliable. The interviews included officials from the departments of transportation of 12 states and from 8 metropolitan planning organizations and 9 transportation agencies from 12 large urbanized areas. We conducted the interviews in June, July, and August 2006. We selected the 12 states to obtain diversity in a range of criteria, as follows: Change in JARC funding: Analyzed the percentage change and selected 4 states that received an increase in their federal JARC funds from fiscal years 2005 to 2006, 5 states whose JARC funds decreased from 2005 to 2006, 1 state that received approximately the same amount of funding in fiscal years 2005 and 2006, and 2 states that did not receive JARC funds in 2005. Comments: Whether a state department of transportation had submitted comments to the Department of Transportation’s (DOT) online docket on FTA’s interim JARC program guidance for fiscal year 2006 and proposed strategies for fiscal year 2007. Statewide program: Whether a state was identified in FTA’s fiscal year 2005 grant apportionment notice as having a statewide JARC program, which meant that the state likely had previous experience in administering JARC funds. Designated recipient: Whether a state had notified FTA of its designated recipient (as of June 2006) for the JARC funds, from which we inferred that a state had taken some action to implement the JARC program. Planning funds: Whether a state had applied to FTA for 10 percent of its apportionment for planning/administration/technical assistance, as allowed by statute, from which we inferred that a state had taken some action to implement the JARC program. Recommendations: Referral by FTA or industry associations. Table 4 lists the 12 states that we selected on the basis of these criteria. To obtain the perspectives of small urbanized areas and rural areas that had previously received JARC grants directly and would now have to apply to the state designated recipient for funding, we supplemented the state interviews with interviews with officials from a transportation agency in Galveston, Texas—a small urbanized area—and from a nonprofit agency in Stigler, Oklahoma, that provides transportation in rural areas of the state. We selected 12 large urbanized areas to obtain diversity in a range of criteria, as follows: Prior receipt of JARC funding: Whether a large urbanized area had received a JARC grant prior to fiscal year 2006. 
Receipt of fiscal year 2006 funding: Whether a large urbanized area had successfully applied to FTA for its fiscal year 2006 JARC funding (as of July 2006). Comments: Whether a metropolitan planning organization or local transportation agency in a large urbanized area had submitted comments to the DOT’s online docket on FTA’s interim JARC program guidance for fiscal year 2006 and proposed strategies for fiscal year 2007. Designated recipient: Whether a large urbanized area had notified FTA of its designated recipient (as of July 2006) for the JARC funds, from which we inferred that the area had taken some action to implement the JARC program. Recommendations: Referral by FTA or industry associations. Population: Whether a large urbanized area had a population over 1 million. Multistate area: Whether the large urbanized area covers multiple states, which we assumed could present unique issues for an area in implementing the JARC program. Location: Whether the large urbanized area was in a state that we had already selected for interviews. Table 5 lists the 12 large urbanized areas we selected on the basis of these criteria, and the agencies that we interviewed. It is important to note that the results of these interviews cannot be generalized to the entire JARC recipient population because the interviewees were selected as a nonprobability sample. We supplemented the information obtained from these semistructured interviews by analyzing the more than 200 public comments submitted to DOT’s online docket regarding FTA’s interim program guidance for fiscal year 2006 and proposed guidance for fiscal year 2007. We used a content analysis to systematically identify common themes in the comments submitted. Two analysts reached consensus on the coding of the responses, and a third reviewer was consulted in case of disagreements, to ensure that the codes were reliable. In summarizing the comments for appendix III, we included only comments that were made by more than one entity. We conducted our work from May through October 2006 in accordance with generally accepted government auditing standards. In its March 15, 2006, interim guidance and proposed strategies, FTA proposed several changes that would affect the operation of the JARC program. FTA allowed for a 30-day comment period, and after a request for an extension, the agency allowed approximately 1 additional month for comments. FTA received over 200 comments, and program stakeholders that commented included the following: state transportation agencies, trade associations, metropolitan planning organizations, public transit providers, private transit providers, individuals, and advocates. Table 11 summarizes FTA’s proposed changes to the coordinated planning process, the designated recipient and competitive selection process, and the performance measurement and reporting requirements. Other key contributors to this report were John Finedore (Assistant Director), Vidhya Ananthakrishnan, Lauren Heft, Foster Kerrison, Jessica Lucas-Judy, Nancy Lueke, Kimanh Nguyen, and Stan Stenersen. Public Transportation: Preliminary Information on FTA’s Implementation of SAFETEA-LU Changes. GAO-06-910T. Washington, D.C.: June 27, 2006. Transportation-Disadvantaged Seniors: Efforts to Enhance Senior Mobility Could Benefit from Additional Guidance and Information. GAO-04-971. Washington, D.C.: August 30, 2004. Job Access and Reverse Commute: Program Status and Potential Effects of Proposed Legislative Changes. GAO-04-934R. Washington, D.C.: August 20, 2004. 
Transportation-Disadvantaged Populations: Federal Agencies Are Taking Steps to Assist States and Local Agencies in Coordinating Transportation Services. GAO-04-420R. Washington, D.C.: February 24, 2004. Transportation-Disadvantaged Populations: Some Coordination Efforts Among Programs Providing Transportation Services, but Obstacles Persist. GAO-03-697. Washington, D.C.: June 30, 2003. Welfare Reform: Job Access Program Improves Local Service Coordination, but Evaluation Should Be Completed. GAO-03-204. Washington, D.C.: December 6, 2002. Welfare Reform: DOT Has Made Progress in Implementing the Job Access Program but Has Not Evaluated the Impact. GAO-02-640T. Washington, D.C.: April 17, 2002. Welfare Reform: Competitive Grant Selection Requirement for DOT’s Job Access Program Was Not Followed. GAO-02-213. Washington, D.C.: December 7, 2001. Welfare Reform: GAO’s Recent and Ongoing Work on DOT’s Access to Jobs Program. GAO-01-996R. Washington, D.C.: August 17, 2001. Welfare Reform: DOT Is Making Progress in Implementing the Job Access Program. GAO-01-133. Washington, D.C.: December 4, 2000. Welfare Reform: Implementing DOT’s Access to Jobs Program in Its First Year. GAO/RCED-00-14. Washington, D.C.: November 26, 1999.
Begun in 1998, the Job Access and Reverse Commute (JARC) program provides grants to states and localities for improving the mobility of low-income persons seeking work. The Federal Transit Administration (FTA) administers this program. In 2005, the Safe, Accountable, Flexible, Efficient Transportation Equity Act--A Legacy for Users (SAFETEA-LU) authorized $727 million for JARC for fiscal years 2005 through 2009, changed how these funds were to be awarded after fiscal year 2005, and required FTA to evaluate the program by August 2008. GAO examined (1) SAFETEA-LU's changes to JARC, (2) FTA's progress in implementing these changes, (3) states' and localities' efforts to respond and challenges they have encountered, and (4) FTA's proposed strategy for evaluation and oversight. GAO's work included analyzing program guidance as well as interviewing officials from FTA, industry groups, and more than 30 state and local agencies. SAFETEA-LU created a formula for distributing JARC funds starting in fiscal year 2006, substantially altering funding allocations provided under earlier grants. Funding in some states increased, with 2 states receiving increases of more than 1,200 percent between fiscal years 2005 and 2006. Funding in other states decreased by as much as 80 percent, while 18 other states that had not received funds in fiscal year 2005 received them. SAFETEA-LU required that, to receive funds, states and localities designate a recipient agency to administer JARC funds, award grants on a competitive basis, and certify that projects were derived from a coordinated public transit-human services transportation plan. In March 2006, FTA issued interim guidance and proposed strategies for implementing these new requirements, but delays in issuing final guidance have reduced the window of opportunity for states and localities to obligate fiscal year 2006 funding. As required by SAFETEA-LU, FTA requested public comment on its interim guidance and proposed strategies, and responding to the more than 200 comments took more time than FTA had initially planned. FTA has specified in its guidance that states and localities have until the end of fiscal year 2008 to obligate fiscal year 2006 funds, so their ability to use the funds is not imminently jeopardized. FTA also encouraged states and localities to implement their programs on the basis of the interim guidance. However, given that officials in a number of areas we interviewed planned to wait for final guidance before moving forward, these areas will have less time available to obligate fiscal year 2006 funds. Most states and localities are in the process of trying to meet these new requirements, and although they have encountered challenges in doing so, FTA is taking steps to alleviate most of these challenges. As of the end of fiscal year 2006, about 4 percent of fiscal year 2006 funding apportioned to states and localities had been obligated. States and localities have raised a number of questions or concerns about the new requirements, such as whether an agency serving as the designated recipient would also be eligible to receive funds. In response, FTA proposed several actions that localities could take to reduce the potential conflict of interest in such situations. FTA is continuing to develop and refine its strategies for evaluation and oversight of JARC. FTA, which has had difficulty assessing this program in the past, proposed a new approach, but states and localities found problems with it. 
FTA is revising its approach and gathering baseline data for its required evaluation of the JARC program. Even if FTA resolves the concerns that have been raised, gaps in monitoring may still limit its ability to evaluate and oversee the program. FTA plans to use existing oversight processes for monitoring JARC recipients; however, FTA officials noted that SAFETEA-LU did not provide specific program management oversight funds for the JARC program and said that they are looking for alternate sources of funding.
The Dodd-Frank Act transferred consumer protection oversight and other authorities over certain consumer financial protection laws from multiple federal regulators to CFPB, creating a single federal entity to, among other things, ensure consistent enforcement of federal consumer financial laws. The Dodd-Frank Act charged CFPB with the following responsibilities, among others: ensuring that consumers are provided with timely and understandable information to make responsible decisions about financial transactions; ensuring that consumers are protected from unfair, deceptive, or abusive acts and practices, and from discrimination; monitoring compliance with federal consumer financial law and taking appropriate enforcement action to address violations; identifying and addressing outdated, unnecessary, or unduly burdensome regulations; ensuring that federal consumer financial law is enforced consistently, without regard to the status of a person as a depository institution, in order to promote fair competition; ensuring that markets for consumer financial products and services operate transparently and efficiently to facilitate access and innovation; and conducting financial education programs. Furthermore, the Dodd-Frank Act gave CFPB supervisory authority over certain nondepository institutions, including certain kinds of mortgage market participants, private student lenders, and payday loan lenders. Such institutions generally lacked federal oversight before the financial crisis of 2007-2009. The Dodd-Frank Act grants CFPB certain authorities that govern its collection of consumer financial data. The act also includes certain restrictions on CFPB’s collection and use of personally identifiable financial information and requirements to ensure that CFPB protects such data. The primary authorities and related restrictions we examined are included in three sections of the act: Market monitoring. Under section 1022(c), CFPB is directed to monitor for risks to consumers in the offering or provision of consumer financial products or services, including developments in consumer financial markets for such products or services, in order to support its rulemaking and other functions. The act provides CFPB with the authority, in conducting such monitoring, to gather information from time to time regarding the organization, business conduct, markets, and activities of covered persons and service providers, from a variety of sources, including several sources specified in the act. Under this data collection authority, CFPB is prohibited from obtaining records from covered persons and service providers participating in consumer financial services markets for the purposes of gathering or analyzing the personally identifiable financial information of consumers. Supervision of nondepository covered persons. Section 1024 provides CFPB with the authority to supervise entities (other than depository institutions or insured credit unions) that provide certain consumer financial products or services. 
This authority also extends to service providers. In addition to assessing the extent to which these entities comply with federal consumer financial laws and obtaining information about their activities and compliance systems or procedures, this section charges CFPB with requiring reports and conducting examinations of the nondepository persons the section covers for purposes of detecting and assessing associated risks to consumers and markets for consumer financial products and services. Section 1024 does not contain any explicit restrictions on CFPB’s ability to collect personally identifiable financial information. Supervision of large institutions and affiliates. Section 1025 of the Dodd-Frank Act provides CFPB with supervisory authority over insured depository institutions and credit unions with assets of more than $10 billion and their affiliates, including the authority to collect information from them for purposes of detecting and assessing associated risks to consumers and to markets for consumer financial products and services. CFPB also has some supervisory authority under section 1025 over service providers of insured depository institutions and credit unions with over $10 billion in assets, as well as service providers to a substantial number of insured depository institutions or credit unions with $10 billion or less in assets. Section 1025 does not contain any explicit restrictions on CFPB’s ability to collect personally identifiable financial information. The Dodd-Frank Act also contains additional restrictions on CFPB’s ability to collect consumer financial data and includes requirements on how such data must be protected once they are collected. The act requires CFPB to take steps to ensure that certain information, including personal information, is not disclosed to the public when such information is protected by law. In addition, CFPB must not obtain personally identifiable financial information about consumers from the financial records of covered persons or service providers, unless consumers provide written permission, or other legal provisions specifically permit or require such collections. CFPB interacts with other financial regulators that also collect consumer financial data and have responsibility for overseeing federal consumer financial laws. These agencies include the four prudential regulators that supervise depository institutions for safety and soundness of their financial condition: OCC charters and supervises national banks and federal thrifts; the Federal Reserve supervises state-chartered banks that opt to be members of the Federal Reserve System, bank holding companies, thrift holding companies, the nondepository institution subsidiaries of those institutions, and nonbanks designated as systemically important by the Financial Stability Oversight Council; FDIC supervises FDIC-insured state-chartered banks that are not members of the Federal Reserve System and federally insured state savings banks and thrifts; insures the deposits of all banks and thrifts approved for federal deposit insurance; and resolves by sale or liquidation all failed insured banks and thrifts and certain nonbank financial companies; and NCUA charters and supervises federally chartered credit unions and insures savings in federally and most state-chartered credit unions. 
As part of their overall supervision programs, the prudential regulators have consumer compliance examination authority for insured depository institutions with $10 billion or less in assets, and CFPB is required to coordinate its supervisory activities with the supervisory activities of the prudential regulators for insured depository institutions with more than $10 billion in assets. Most of the depository institutions CFPB supervises for consumer protection are supervised for safety and soundness by OCC, the Federal Reserve, or FDIC and at a holding company level by the Federal Reserve. The Dodd-Frank Act requires CFPB to coordinate its supervisory actions and examinations of large depository institutions with the prudential regulators. Various other federal requirements apply to CFPB and other federal agencies’ data collection activities. The Paperwork Reduction Act (PRA) requires agencies to obtain OMB approval for identical collections of information from 10 or more individuals or entities. For data collections meeting the criteria of the act, agencies must seek public comment in the Federal Register and consult with the public and affected agencies on ways to minimize the burden associated with information collections and other issues. The general purposes of PRA include minimizing the federal paperwork burden for individuals, small businesses, state and local governments, and other persons; minimizing the cost to the federal government of collecting, maintaining, using, and disseminating information; and maximizing the usefulness of information collected by the federal government. The Office of Information and Regulatory Affairs within OMB provides oversight over federal data collections and PRA compliance. Agencies’ collection and protection of personal information are also governed by the Privacy Act of 1974 (Pub. L. No. 93-579, codified as amended at 5 U.S.C. § 552a) and the E-Government Act of 2002 (Pub. L. No. 107-347). The Privacy Act requires agencies to establish appropriate administrative, technical, and physical safeguards to ensure the security and confidentiality of records and to protect against any anticipated threats or hazards to their security or integrity that could result in substantial harm, embarrassment, inconvenience, or unfairness to any individual on whom information is maintained. The Privacy Act also requires agencies to notify the public in the Federal Register when they establish or make changes to a system of records. Among the things this notice must identify are: the categories of data collected; the categories of individuals about whom information is collected; the intended “routine” uses of data; and procedures that individuals can use to review and correct information about them. The privacy provisions of the E-Government Act of 2002 require that agencies conduct privacy impact assessments before developing, using, or contracting for an information security system that contains personal information. These assessments are analyses of how personal information is collected, stored, shared, and managed in a federal system. According to OMB guidance, the purpose of such assessments is to (1) ensure handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (2) determine the risks and effects of collecting, maintaining, and disseminating information in identifiable form in an electronic information system; and (3) examine and evaluate protections and alternative processes for handling information to mitigate potential privacy risks. 
Title III of the E-Government Act, known as the Federal Information Security Management Act of 2002 (FISMA), established a framework designed to ensure the effectiveness of security controls of information and information systems that support federal operations and assets. This includes the information and information systems that are provided or managed by another agency, contractor, or other source (known as third-party providers). FISMA assigns specific responsibilities to the head of an agency to provide information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of information collected or maintained by or on behalf of the agency. FISMA also states that agencies are to develop, document, and implement an agency-wide information security program. The information security program should include periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems; policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life-cycle of each system, and (4) ensure compliance with applicable requirements; subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate; security awareness training to inform personnel of information security risks and of their responsibilities in implementing agency policies and procedures, as well as training personnel with significant security responsibilities for information security; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually—including testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems; a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in the information security policies, procedures, and practices of the agency; procedures for detecting, reporting, and responding to security incidents; and plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. To assist agencies in meeting the requirements of FISMA, NIST was tasked with developing standards and guidelines for agencies. NIST has issued a series of special publications, which federal agencies generally follow, addressing privacy and security concerns at both the organizational and information system levels. Security and Privacy Controls: NIST Special Publication 800-53 gives agencies guidance on selecting and specifying security and privacy controls to meet federal standards and requirements. According to NIST, the guidance provides a holistic approach to information security and risk management by providing organizations with the breadth and depth of security controls necessary to fundamentally strengthen their information systems and the environments in which those systems operate. 
The guidance also organizes privacy controls into eight areas: authority and purpose; accountability, audit, and risk management; data quality and integrity; data minimization and retention; individual participation and redress; security; transparency; and use limitation. These controls are based on the Fair Information Practice Principles, an internationally recognized privacy framework. Protecting Personal Information: NIST Special Publication 800-122 provides guidelines for agencies to use in developing a risk-based approach for protecting personal information. NIST recommends that agencies evaluate how easily information can be used to identify specific individuals and evaluate the sensitivity of each individual data field, as well as the sensitivity of the collective data fields. Information Security Risk Management Framework: NIST Special Publication 800-37 describes a security risk-management framework for use by federal agencies and their contractors. This framework is a six-step process that helps agencies integrate information security and risk-management activities into the system development life-cycle. When CFPB began operations in 2011, it relied on the information security program and systems of the U.S. Department of the Treasury (Treasury). As the agency has grown, CFPB has begun transferring its information infrastructure (including e-mails, file shares, and data storage) to an independent hardware and systems environment owned by CFPB, but at the time of our review, some of CFPB’s data were still being transmitted using Treasury systems and CFPB was still using Treasury to manage its workstations. CFPB created a Data Intake Group consisting of CFPB staff from across the agency with expertise in legal, cybersecurity, and privacy issues. CFPB staff told us the group was formed in spring 2013 and has evolved into a standard business practice. The group regularly meets to discuss proposed data collections and to help ensure the agency takes all steps required under applicable law or guidance. CFPB staff said the group provides a forum for staff in various parts of the agency to raise issues relevant to their areas of expertise. For example, staff with legal expertise are expected to ensure appropriate use of collection authorities and compliance with any legal restrictions for a proposed data collection, and staff with PRA expertise ensure that the group considers whether PRA might apply to the collection and whether to consult with OMB. The group’s collective decision to proceed with a data collection is summarized in an e-mail to the Chief Information Officer, who makes the final determination about the proposed collection. CFPB staff who are involved in coordinating the Data Intake Group have begun compiling information about each approved data collection, although this effort is still at an early stage. From January 2012 to July 2014, CFPB undertook 12 large-scale data collection efforts. These collections spanned products including mortgages, student loans, and credit cards, and have been used for a variety of purposes, such as informing rulemaking and statutorily required studies. CFPB obtains data for five of these collections on an ongoing basis; data for the other collections were obtained only once. 
The types of information in each consumer financial data collection vary depending on the product type and nature of the inquiry, and may include some account-level data (such as account balance and amount of available credit), transaction-level information (such as the timing of deposits or withdrawals in checking accounts, or merchant names for some transactions), or disclosures of product policies and terms. Some collections represent a sample of accounts from one source while others represent all data from selected institutions. The data come from a variety of sources, including financial institutions, credit reporting agencies, data aggregators, and industry groups. Table 1 provides more information on these consumer financial data collections. As noted in table 1, CFPB’s credit card and online payday collections include data from account holders’ credit reports. For each of these collections, CFPB requests that consumers’ account-level credit card or loan information be matched with their credit reports from the credit reporting agency. The credit reporting agency sends the combined data, which do not identify individual consumers, to CFPB through the commercial data aggregator. Aside from these two data collections, CFPB staff told us that large-scale collections are not aggregated or combined into larger databases. CFPB staff told us that most of CFPB’s large-scale data collections were conducted under its supervisory authorities. These authorities require CFPB to periodically require reports and conduct examinations of entities it oversees to assess compliance with federal consumer financial laws, obtain information about those entities’ activities, and detect and assess risks to consumers and markets for consumer financial products and services. CFPB staff noted that financial institution representatives generally requested that CFPB collect data under its supervisory authority to provide the institutions with greater confidentiality and legal protections. CFPB staff stated that data collected under CFPB’s supervisory authorities are considered confidential and therefore not subject to disclosure under certain federal information transparency requirements, such as the Freedom of Information Act. CFPB has used its supervisory authorities to collect certain data on credit cards, storefront payday loans, deposit advance products, and overdraft fees. Information collected under these authorities sometimes includes personally identifiable financial information. CFPB staff told us they need to collect and review consumer financial data at the institution level to effectively carry out their supervisory authorities. For example, they told us that they have used the data obtained on credit cards to identify risks and areas to be reviewed during examinations of financial institutions. According to CFPB staff, these analyses can identify changes at a particular institution, such as an increase in late fees charged, or allow comparisons that identify divergences in practices across institutions and help CFPB determine where to allocate its supervisory resources. CFPB staff also noted that certain large-scale data collections facilitate a supervisory approach based on determining the relative risk that consumer financial products and services pose to consumers in the relevant product and market. CFPB staff noted that this supervisory approach differs from the approaches the prudential regulators have taken. 
Moreover, CFPB legal staff said use of consumer financial data collected under the agency’s supervisory authorities for certain additional purposes is allowed under the Dodd-Frank Act. Specifically, CFPB legal staff noted the act authorizes CFPB to use information gathered from various sources, including “examination reports concerning covered persons or service providers,” to conduct its market monitoring. They said they interpret this provision as permitting them to use information gathered as part of the supervisory process for other purposes, including market monitoring. For example, CFPB staff told us they needed data on various markets because within their first 18 months of operations they had to issue numerous rules, including those relating to electronic transfers of consumers’ funds to recipients abroad (remittances), the characteristics of mortgages that would qualify lenders for protection from borrower lawsuits (qualified mortgage requirements), and prohibitions on incentives to steer borrowers to particular mortgage loans. CFPB staff told us the collections were necessary to help them understand the functioning of those markets and consumers’ experience with them. CFPB also had to obtain data on markets that were previously unregulated, such as payday lending, credit reporting, and private student lending.

In addition to these large-scale collections, CFPB staff collect some consumer financial data from individual entities through the examination process, also under the agency’s supervisory authorities. CFPB staff told us that collecting consumer financial data during examinations is key to helping them carry out their mission to supervise markets. Such data allow CFPB’s examiners to better understand the institution under review and inform the decisions they make about what areas and activities to include in the scope of examinations. Staff told us they collect information throughout the supervisory examination process in order to assess risk to consumers from particular financial institutions and to monitor markets. For example, CFPB staff collect market and institution data from available sources (for example, during a baseline review of an institution, from commercial data vendors, or from their own research staff or other federal regulators) before collecting an institution’s consumer account information or internal documents relating to compliance management, such as training materials and internal policies. They explained that CFPB collected and analyzed data during the scoping phase to inform its supervisory staff about an institution’s activities and identify the risks the activities pose.

Our analysis suggests that the scope and extent of the consumer financial data CFPB collected during individual examinations have varied. For example:

We reviewed information request letters CFPB sent to a payday lender, a debt collector, and a credit reporting agency. In one of these letters, CFPB asked for detailed information about certain accounts, such as all new accounts or all consumer disputes within a certain review period. The data requests included account numbers, consumer contact records, and consumer disputes and their resolutions.

We reviewed 46 examinations CFPB completed in 2012 and 2013 for 10 depository institutions that previously had been subject to prudential oversight by the Federal Reserve, OCC, or FDIC. Slightly more than half (25 of 46) of the examinations included requests for consumer financial data.
In some of the examinations that included requests for consumer financial data, examiners sought data for a sample of accounts, such as accounts with deposit advance products. In other cases, examiners sought access to all accounts or loan applications, as with several mortgage or private student loan application examinations. Some CFPB examiners sought consumer financial data to verify the accuracy of mortgage loan data these institutions had been reporting to prudential regulators, pursuant to the requirements of the Home Mortgage Disclosure Act (HMDA). Representatives of the nine institutions we interviewed that had been providing consumer financial data to CFPB and the other regulators told us that CFPB’s examination-related requests were more extensive than the data requests from their prudential regulators. According to CFPB staff, some of the differences arise because CFPB needed to obtain more comprehensive information on institutions that might not have been subject to the same level of consumer protection oversight before passage of the Dodd-Frank Act or were conducting activities that had raised supervisory concerns. CFPB staff told us examiners generally request financial institutions’ account- and transaction-level data to conduct various analyses and test for compliance with relevant federal consumer financial laws, and they instruct institutions to alert CFPB if their prudential regulators already have collected the requested data, so that they can coordinate efforts.

CFPB also has used its market monitoring authority, as well as voluntary data submissions, to collect data. Under the Dodd-Frank Act, CFPB is prohibited from obtaining information under its market monitoring authority from covered persons and service providers participating in consumer financial services markets for purposes of gathering or analyzing the personally identifiable financial information of consumers, and none of these collections appeared to include personally identifiable financial information. Data collected under CFPB’s market monitoring authorities included information on automobile sales, consumer credit reports, mortgage loan performance, and online payday loans. CFPB purchased these collections from commercial data aggregators, and each collection was obtained either monthly or quarterly (except for data on online payday loans, a one-time purchase). Other financial regulators, banks, and other financial market participants use many of these same commercial databases (such as those covering credit report information and mortgages).

CFPB staff also told us that several voluntary data collections have been instrumental for three statutorily required reports on consumer financial products and markets. For these reports, CFPB asked companies or industry associations to provide information on consumer financial products and services through voluntary, one-time collections. These voluntary collections included information on arbitration case records, consumer reports and credit reports, and private student loan data (described in table 1). The private student loan data collection informed CFPB’s analysis of the number of loan originations and their associated interest rates and allowed CFPB to determine any trends in lending in the private student loan market. CFPB found that the market for private student loans had increased from 2003 to 2007 and that lender underwriting requirements had loosened.
Similarly, analysis of consumer credit report data informed CFPB’s report comparing consumer- and creditor-purchased credit scores, another example of CFPB’s use of consumer financial data in its reports.

Like CFPB, the prudential regulators (FDIC, Federal Reserve, OCC, and NCUA) collect consumer financial data associated with products offered by the financial institutions they regulate. Staff from these regulators told us that they undertake the collections as part of their supervisory responsibilities to analyze markets that affect the institutions they oversee. For example, FDIC, OCC, and the Federal Reserve all obtain mortgage data, including loan origination dates, outstanding balances, and payment status, from commercial data aggregators similar to the aggregators CFPB has used. The Federal Reserve collects mortgage application data submitted under HMDA on behalf of CFPB, OCC, FDIC, NCUA, and the Department of Housing and Urban Development and aggregates these data on behalf of the Federal Financial Institutions Examination Council. Federal Reserve staff told us the Federal Reserve also purchases credit reporting data from credit reporting agencies. Furthermore, the Federal Reserve and OCC have ongoing data collections of credit card accounts that they obtain from financial institutions they supervise (using the same commercial data aggregator as CFPB). FDIC and NCUA staff told us FDIC and NCUA collect consumer financial data in their roles as insurers for banks and credit unions through the resolution process. Table 2 provides information about OCC’s, FDIC’s, and the Federal Reserve’s consumer financial data collections.

Generally, the large-scale data collections by the prudential regulators do not contain information that directly identifies individuals. As noted in table 2, both the Federal Reserve and OCC collect address data as part of their mortgage collections to match first-lien mortgages to home equity loans and lines of credit on the same property, but do not identify individual borrowers by name. Several of the regulators told us that they routinely collect consumers’ personal information as part of their examinations of supervised entities but do not retain the information after the examination is completed. However, OCC staff told us that the agency generally collects only anonymized data from banks during examinations.

Federal Reserve, OCC, and FDIC staff told us that they use these collections for research on consumer markets affecting the financial institutions they supervise. For example, OCC began its credit card collection in 2009 and it analyzes these data to better understand the credit card market in which large national banks operate, determine the current status of banks’ credit card portfolios, and develop examination strategies. Like CFPB, OCC has contracted to have credit reporting agency attributes (such as the account holders’ number of other accounts, outstanding balances, and their payment status) appended to the credit card account data supplied by banks. OCC uses the mortgage data it collects to develop its quarterly public Mortgage Metrics report and to further analyze trends in the mortgage marketplace. The Federal Reserve relies on its credit card and mortgage data collections—part of institutions’ broader data submissions—to support its assessments of the capital adequacy of bank holding companies (stress testing) and to more effectively supervise large banks.
Staff told us the Federal Reserve Bank of New York collects data on consumer credit reports to review anonymized consumers’ credit behavior over time, and staff have published several reports on these data. Federal Reserve staff and other researchers have used data from the Survey of Consumer Finances to issue numerous reports on trends in household wealth in the U.S. FDIC staff told us they use the mortgage data the agency purchases to conduct market and aggregate-level research and analysis.

We also examined the data collections of four other federal agencies with consumer protection responsibilities and found their collections generally were less extensive than CFPB’s data collections. For example, SEC, which regulates the securities industry, and CFTC, which regulates the derivatives markets, collect only limited consumer financial data related to their roles in overseeing their respective industries. SEC staff told us the agency’s mission generally does not necessitate large collections of consumer financial data, but that staff obtain some consumer financial data as part of their efforts to oversee the entities the agency regulates and to enforce the federal securities laws. CFTC staff similarly told us their agency is not required to undertake any large consumer financial data collections, but does obtain limited amounts of such information when reviewing traders and auditing futures market participants. FTC, which is responsible for ensuring that consumers are protected from unfair or deceptive acts or practices, collects consumer complaint data to detect patterns of fraud and abuse. FTC compiles the data into a nonpublic database that is shared with other law enforcement agencies. Apart from this database, FTC staff told us that they review the complaints and other investigative information and generally do not compile other consumer information databases to detect fraud and deception. Staff from another agency that addresses consumer issues, the Consumer Product Safety Commission, also told us that their agency is not mandated to conduct any consumer data collections, but that they are required to maintain a public database containing complaints about consumer products that helps them promote the safety of consumer products. This agency also collects information relating to the causes and prevention of death, injury, and illness associated with consumer products.

To minimize overlap and burden on financial institutions, CFPB has coordinated with the prudential regulators and shared consumer financial data through various formal agreements. The Dodd-Frank Act mandates that CFPB coordinate with the prudential regulators on its supervisory examinations of large banks and credit unions. CFPB supervisory staff told us that they interpret this mandate to include the sharing of information (which may include consumer financial data) collected during the examination process. As a result, CFPB has established a supervisory examination coordination framework that includes an overarching memorandum of understanding (MOU) on supervisory coordination with all the other prudential regulators for the sharing of supervisory information on an ongoing basis. The Federal Reserve and Treasury had an MOU related to sharing information during the establishment of CFPB (see 12 U.S.C. § 5581, which transferred certain consumer financial protection functions to CFPB); CFPB staff told us that they are working on developing a separate information-sharing agreement with the Federal Reserve.
NCUA, in addition to its information-sharing agreement with CFPB, also has an MOU for sharing consumer complaints with CFPB. In addition, CFPB has nine MOUs with seven separate federal agencies, including the U.S. Department of Justice and the U.S. Department of Housing and Urban Development. In some cases, the MOUs set up information-sharing arrangements and discuss coordination on efforts such as enforcement activities. We reviewed 26 MOUs CFPB has established with various state attorneys general, state banking regulators, two cities, and one American Indian tribe and found that they generally discussed information sharing and confidentiality and only one included a data-sharing arrangement for CFPB to receive consumer financial data from a state regulatory agency. We also reviewed three MOUs CFPB established with three private companies that allow CFPB to receive consumer financial data related to the manufactured housing loan industry and the payday lending industry.

CFPB has two information-sharing agreements relating to large-scale data collections—one current collection and one collection that was in development at the time of our review. In 2013, CFPB entered into an agreement with OCC covering any sharing of information from their respective credit card collections. As a result of this agreement, CFPB accesses account-level data from the 16 institutions from which OCC collects data in addition to the 9 institutions from which CFPB collects data. In total, the collections cover approximately 87 percent of outstanding credit card balances by volume. The agreement establishes ownership of the data and how OCC and CFPB will coordinate on the collection of credit card data, including which data fields to collect, what validation checks should be done to verify the data, the timing of the collections, and how the agencies should communicate.

CFPB also established an interagency agreement with FHFA related to the development of the National Mortgage Database, which staff told us will provide a comprehensive view of the mortgage market and allow for greater mortgage market monitoring, supervision, and research. FHFA has reported that it is developing the database partly to facilitate mandatory reporting requirements under the Housing and Economic Recovery Act of 2008. CFPB staff told us that they do not plan to provide any nonpublic data to the database, and FHFA and CFPB staff said that neither agency will directly collect the primary information for the database. Rather, staff said the agencies will purchase the data from a credit reporting agency. The credit reporting agency will provide an anonymized 5 percent sample—which will include about 3.5 million currently active mortgages—of first-lien, single-family mortgage loans active as of 1998 or later that are reported to the credit reporting agency, as well as credit report information on the borrowers in the selected sample. FHFA staff told us they approached CFPB to collaborate on the database because they felt that CFPB would be interested in the mortgage data and they did not want to duplicate efforts. CFPB has been funding half of the costs associated with database development, but CFPB staff told us that their involvement as of July 2014 in the development of the database has been limited.
Representatives from most of the financial institutions we interviewed said they observed CFPB coordinating with the prudential regulators during examinations at their institutions, with a few noting that coordination between the regulators had been increasing. However, others noted areas where CFPB could improve coordination. For example, the Federal Reserve’s Office of Inspector General, which conducts internal audits of CFPB’s operations, noted in a recent report that (1) CFPB did not consistently retain evidence of its information-sharing activities with prudential regulators and (2) CFPB could take additional steps to improve coordination with the prudential regulators by sharing draft supervisory letters (which describe the scope and findings of the examinations and highlight any corrective actions that should be taken) as part of these interactions. In addition, a 2013 report issued by the Bipartisan Policy Center, which studies federal regulatory issues, found that CFPB’s examination efforts focus more on the products that institutions offer than do the examination efforts of the prudential regulators. The center’s report notes that the difference in approach means it can take CFPB longer to complete its examinations and can make coordination between the regulators more challenging. The center recommended that CFPB and the prudential regulators coordinate more closely to better integrate CFPB’s product-based approach and examination schedule with the other regulators’ approach.

Although CFPB, OCC, and the Federal Reserve often collect different information from different financial institutions, our analysis found some similarities in the types of data collected and overlap in the financial institutions reporting to each regulator in their large-scale mortgage and credit card collections (submitted directly from the institutions themselves). The extent to which the same institutions provided the same types of data to CFPB, OCC, and the Federal Reserve is shown in figure 1 below. As shown in figure 1, CFPB and OCC collect similar credit card information, but from different institutions. However, as figure 1 also shows, four institutions that provide credit card data to CFPB also currently provide the same types of data to the Federal Reserve. Staff from the two regulators noted, however, that each uses its data for different purposes. As mentioned earlier, CFPB staff told us that they use the data obtained from their credit card collection to better understand card markets and help ensure compliance with federal consumer financial protection laws through the supervision and examination process. In contrast, the Federal Reserve uses the information in analyses that assess how changes in market conditions could affect the credit card accounts in ways that impact the safety and soundness of these institutions’ holding companies.

Our analysis also found some overlap in the data collections of OCC and the Federal Reserve. Fifteen of the 16 national banks submitting credit card data to OCC submit similar data through eight holding companies to the Federal Reserve. In addition, 48 national bank affiliates that report mortgage data to OCC also report these data through their eight holding companies to the Federal Reserve. As a result, the holding company-level data the Federal Reserve obtains provide it with information on these activities conducted by both national bank affiliates and non-national bank affiliates.
OCC staff noted that OCC began its collection of mortgage data in March 2008 and credit card data in April 2009 in response to market events at the time. The Federal Reserve first proposed collecting similar credit card and mortgage data in February 2012. Subsequently, in a notice published in the Federal Register in June 2012, the Federal Reserve noted that its own collection effort was necessary because it needed data at the holding company level for its stress test analyses of the institutions, whereas OCC’s data collection was only from the institutions’ national bank affiliates. Federal Reserve staff told us that the purpose of their collection is to assess the impact of economic changes on credit card accounts and how this affects the soundness of the holding companies, whereas OCC staff told us that they use the credit card data they collect to monitor the status of the national banks’ credit card portfolios and identify potential issues to review in examinations. Staff from the Federal Reserve and OCC said they coordinated on development of their respective collections, aligned data fields, and established an information-sharing agreement to share account-level data for the institutions. However, OCC staff told us that coordinating the collections has been challenging and that one regulator can make changes to its respective collection without the consent of the other regulator.

Limited quantitative information about costs of CFPB’s large-scale data collections is available. CFPB’s consumer financial data collections were not conducted through a rulemaking and therefore CFPB was not required to conduct a formal cost-benefit analysis before undertaking the collections. But CFPB has identified some of the costs its data collections pose. After obtaining CFPB contracts relating to data acquisition, we determined that since 2011 CFPB has entered into five contracts with private firms to obtain consumer financial data. These contracts cover periods of as long as 5 years, with obligations totaling over $33 million over that span, although CFPB staff noted that they would not expend this entire amount if some option years are not exercised. CFPB staff reported that they were unable to quantify other costs borne by the agency, such as those relating to storing the data collected.

CFPB staff acknowledged that their collections create costs and some burden for financial institutions, but also noted the primary benefit of these collections is that they provide data that inform much of CFPB’s work to protect consumers, including its supervisory process, rulemakings, and reports. For example, staff told us that the analysis from their ongoing credit card collections will provide input into the scope of some of CFPB’s 2015 supervisory examinations of credit card issuers and will continue to help determine the scope of the agency’s examinations. Staff also said that they used data from the credit card collection to inform part of CFPB’s “ability to repay” rule, which amended requirements so that card issuers no longer had to consider whether certain younger consumers have an independent ability to pay; the previous rule had affected the ability of nonworking spouses or partners of these consumers to obtain credit. Some of the consumer and industry groups we interviewed agreed that CFPB collections of consumer financial data produced benefits and said CFPB needed this type of data to carry out its mission.
For example, one group explained that the data are needed to aid CFPB staff’s understanding of the financial products CFPB regulates. Two other groups said that CFPB’s data collections may be justified because the collections help CFPB regulate and monitor the markets. In addition, one of the two groups said that CFPB’s collections result in more informed decisions and actions, such as more targeted regulations. Furthermore, representatives from several privacy groups and one consumer group we interviewed noted that the commercial aggregators are more likely to be targeted by individuals seeking unauthorized access to consumer information than CFPB would be.

However, representatives of financial institutions we interviewed had differing perspectives on the relative costs and benefits of providing data to CFPB. Representatives of most of the financial institutions we interviewed said that CFPB requests for consumer financial data during supervisory examinations have been more extensive than those of their prudential regulators. They said CFPB’s data requests were broader and required more information than requests from their prudential regulators. However, some representatives offered explanations for these differences, noting that CFPB’s focus for its collections (consumer protection) is different from the focus of their prudential regulators and CFPB’s examiners must collect more data to familiarize themselves with the institutions they oversee. Representatives of five institutions providing credit card data to CFPB’s ongoing credit card collection generally reported that the initial submission process was burdensome, but costs to supply the data decreased over time. Representatives from several institutions reported that determining what data to submit to CFPB and how to provide them was particularly burdensome. For example, representatives cited challenges in consolidating different data on their credit card accounts from across different areas within their financial institutions and in establishing new internal procedures for reviewing the monthly data submission. However, the majority of financial institution representatives we interviewed stated that costs or the amount of time their staff spend preparing these data submissions has decreased over time.

In contrast, representatives of some financial industry and business trade associations we interviewed expressed concerns about the burden CFPB data collections place on financial institutions. Several representatives cited the costs of producing consumer financial data for CFPB, suggesting these costs may be greater than the costs of producing data for other regulators. The recent report by the Bipartisan Policy Center noted that CFPB’s data requests create real costs for entities because of the size of the requests and that institutions can be affected differently depending on their size.

CFPB has taken steps to minimize burden on financial institutions. For example, instead of requesting that the same institutions provide CFPB with credit card data, CFPB entered into an MOU with OCC to share the credit card data they collect. CFPB staff also reported coordinating their examination information requests with the prudential regulators.
CFPB staff said that they try to inform other regulators 90 days before sending information requests so that they can coordinate information requests when possible. In its information request letters, CFPB instructs institutions to alert CFPB if the institutions’ prudential regulator already obtained the information CFPB requested. CFPB then would coordinate its requests with the regulator and alleviate the burden on the institution.

In a letter to the House Financial Services Committee, an economist noted that CFPB could obtain a 1 percent sample of credit card accounts to achieve its goals and also reduce some of the concerns relating to the costs of providing an entire portfolio of data. We agree that providing samples of data, as opposed to entire portfolios of data, sometimes can reduce the burden on financial institutions. However, the regulators and institutions we interviewed and our analysis indicated obtaining only samples could hamper CFPB’s regulatory efforts and likely would not produce cost savings for institutions. CFPB uses samples in some examinations, but agency staff told us that they collect all accounts for the ongoing credit card collection because the effort required to maintain a sample that represents the population of accounts would be burdensome to institutions. For example, if too many accounts in that sample are closed or the sample did not include newly created accounts, it would become less representative over time. Rather than having the institution re-sample its credit card population for each data submission, CFPB asks for all accounts on file. Similarly, OCC staff said that collecting samples of data on an ongoing basis would be less cost effective because they would have to redesign the sample for the institutions each time they wanted to conduct a different analysis. They said the additional requests for different samples would create further burden on the financial institutions. Nearly all the financial institutions with whom we spoke said that supplying CFPB with a sample of credit card data rather than all accounts would not significantly reduce submission costs. They explained that providing a sample might reduce data storage costs, but the submission process would remain the same. GAO staff with expertise in research methodologies and statistics reviewed information related to CFPB’s credit card collection and agreed that obtaining all accounts rather than a sample is likely more efficient for creating and maintaining a dataset that tracks changes in the same accounts over time. For example, over time, cardholders may close some accounts, which would create the burden and costs of continually adjusting the sample by adding other accounts to maintain the desired level of precision in the sample.

CFPB has taken steps to protect the privacy of consumers and comply with requirements, restrictions, and recommended practices in the Dodd-Frank Act, PRA, Privacy Act, E-Government Act, and NIST guidelines. These steps include creating privacy policies, issuing public notices of data collection activities, and establishing a group to consider and assure the appropriateness of proposed data collections. However, CFPB has not yet developed and implemented written procedures for its data intake process, which includes a review of proposed data collections for compliance with specific Dodd-Frank restrictions (among other things) and for anonymizing data. The agency also does not have written procedures for obtaining and documenting PRA determinations.
CFPB and OCC have an information-sharing agreement for their respective credit card collections, but OMB staff raised concerns that such an agreement may require OMB review and approval. In addition, OCC has not obtained OMB approval for its credit card and mortgage data collections under PRA. Finally, CFPB generally met statutory privacy requirements, but lacked elements of certain privacy controls. CFPB reviews data collections to determine whether the Dodd-Frank Act’s restrictions apply, but it has not yet created written procedures for its data intake process. Under the act, CFPB must not obtain personally identifiable financial information about consumers from the records of covered persons or service providers, unless (a) consumers provide written permission or (b) other legal provisions specifically permit or require such collections. However, CFPB legal staff cited the agency’s supervisory authorities as being among the legal provisions that permit collections of consumers’ personally identifiable financial information. In particular, these staff said that CFPB’s supervisory authorities permit the agency to compel entities it oversees to provide information (which, according to CFPB staff, may include consumers’ personally identifiable financial information) it needs to assess compliance with federal consumer laws and detect and assess risks to consumers and markets. Therefore, information obtained through supervisory activities, such as information requested during examinations of large depository and certain nondepository institutions, does not violate the restriction on collection of personally identifiable financial information without consumer permission or as permitted by law, according to CFPB’s legal staff. As mentioned earlier, the act has an additional restriction that prohibits CFPB from using its market monitoring authority, which enables CFPB to gather data to monitor risks to consumers to support its rulemaking and other functions, to obtain records from covered persons and service providers participating in consumer financial services markets for the purposes of gathering or analyzing the personally identifiable financial information of consumers. CFPB defines “personally identifiable financial information” by regulation and states that data that do not contain direct personal identifiers such as account numbers, names, or addresses do not meet the definition of “personally identifiable financial information.” As shown in table 3, most of the data collections CFPB obtained do not directly identify individuals and therefore, according to CFPB staff, their contents do not meet the definition of “personally identifiable financial information.” We analyzed the data fields and field descriptions for CFPB’s 12 consumer financial data collections in our review and found that 9 collections do not include direct personal identifiers among the fields of information CFPB obtained. To the extent that the providers of the data (e.g., financial institutions and credit reporting agencies) had direct personal identifiers in their internal systems related to the nine collections, they did not include them in the data they provided to CFPB. We also visually inspected data extracts for 4 of these 9 data collections—credit cards, consumer credit report information, overdraft fees, and private student loans—and further verified that the data in the fields CFPB obtained did not directly identify individuals. 
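The kind of field-level review described above can be illustrated with a short script. The sketch below is only a minimal illustration, not CFPB's or GAO's actual tooling; the field names, the list of identifier types, and the sample extract are hypothetical and serve only to show how a collection's data dictionary might be screened for fields that directly identify individuals.

# Hypothetical illustration: screen a data extract's field names for
# direct personal identifiers (names, addresses, account numbers, etc.).
# The field names and the sample extract below are invented.

DIRECT_IDENTIFIER_FIELDS = {
    "first_name", "last_name", "street_address", "ssn",
    "account_number", "phone_number", "email_address",
}

def screen_fields(field_names):
    """Return any field names that appear to be direct personal identifiers."""
    return sorted(f for f in field_names if f.lower() in DIRECT_IDENTIFIER_FIELDS)

# A hypothetical account-level extract containing no direct identifiers.
extract_fields = ["account_id_hash", "cycle_date", "balance", "credit_limit", "apr"]

flagged = screen_fields(extract_fields)
if flagged:
    print("Direct identifiers present:", ", ".join(flagged))
else:
    print("No direct identifiers found among the extract's fields.")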
Three of the data collections directly identified individuals using names and addresses. CFPB staff said that the Dodd-Frank restrictions did not apply to one of the three collections—arbitration case records—because the entity that voluntarily provided the data was not a covered person or service provider from which CFPB was prohibited from obtaining personally identifiable financial information. According to CFPB staff, the other two data collections that included direct personal identifiers—deposit advance products and storefront payday loans—were obtained from covered persons but do not violate Dodd-Frank restrictions because they were collected under CFPB’s supervisory authority and not its market monitoring authority. CFPB has taken steps, such as adopting informal procedures for anonymizing data, to remove the information that directly identifies consumers from these collections before the data are made available to staff for analysis that may have nonsupervisory purposes. In particular:

Deposit advance products: CFPB staff showed us that for deposit advance products, each institution had provided the consumers’ personal data—including names and addresses—in a file separate from files containing the account and transaction data. CFPB staff who analyze these data had not been granted access to these files. We reviewed files from two of the institutions that submitted data and confirmed that consumers were not directly identified in the files available to staff who work on large-scale data collections.

Storefront payday loans: A CFPB staff member made copies of the files sent by the payday lenders and removed all the directly identifying information—names and complete address information. We observed files from three of the lenders that submitted data and confirmed that consumers were not directly identified in the files available to staff who analyze these data. The original files were stored in a separate file directory with restricted access, and CFPB staff who analyze these data did not have access.

CFPB staff also applied similar procedures to the overdraft fees data collection. This collection did not contain fields that directly identified consumers but had information—nine-digit zip codes (five-digit zip code plus the four-digit geographic extension)—that CFPB staff said they considered sensitive. CFPB staff replaced the zip codes with a randomly generated number string to further anonymize individuals. Staff stored the files with the original zip codes and the match key to the random strings that had replaced the zip codes in a separate location from the analysis files. We reviewed files from two of the institutions that submitted data and confirmed that consumers were not directly identified in the files available to staff who analyze these data.

For initial determinations of whether Dodd-Frank restrictions apply to a particular data collection, CFPB staff stated they rely on the legal division to raise any concerns during Data Intake Group meetings when proposals for data collections are considered. Legal staff said that their reviews consist of determining whether the data will be collected from a covered person or service provider, whether the collection will include personally identifiable financial information, and whether CFPB plans to anonymize a collection that contains personally identifiable financial information.
Under current practice, legal staff participating in the Data Intake Group complete a section of a worksheet to indicate under what authority CFPB is obtaining the data and the type of agreement (such as a contract or interagency agreement) by which it would obtain the data. The legal division must review and vote to proceed on every proposed data collection before data may be brought into or collected by CFPB. However, CFPB staff told us the Data Intake Group as a whole does not have written procedures to guide its reviews. They said they recognized that the lack of written procedures and documentation requirements for the Data Intake Group was a weakness in their current practices. CFPB’s Chief Information Officer told us that CFPB established a data governance working group to formalize CFPB’s information governance policies, procedures, and responsibilities, including those of the Data Intake Group. The working group’s initial product, the June 2014 information governance policy, states that information to which access is restricted by law, including the Dodd-Frank restrictions, must be treated in accordance with such restrictions. The policy also incorporates existing policies and lays out high-level principles, guidelines, and responsibilities for the intake, management (including storage, internal sharing and access, and use), disclosure, and disposition of information. However, the standards and written procedures to implement this policy are still under development, according to CFPB staff.

Federal internal control standards and guidance discuss the importance of having written documentation and procedures for control activities. CFPB staff said that in setting up the new agency, they emphasized adopting practices that help ensure that these issues were addressed but had not yet formally defined roles, responsibilities, and documentation requirements for the Data Intake Group in written procedures. Establishing such procedures for its data intake process will help CFPB ensure that staff consistently take appropriate steps when evaluating proposed data collections, including reviewing them to determine whether the Dodd-Frank restrictions apply.

In addition to Dodd-Frank restrictions that CFPB must follow, NIST guidance calls for agencies to minimize collection and use of personal information when possible. Although CFPB has taken steps to minimize use of information that directly identifies individuals in certain data collections, the agency has not developed standard policies and written procedures to document the practices it uses for anonymizing data, including clarifying how data sensitivity will be assessed and defining specific roles, responsibilities, and steps in accordance with NIST guidance and privacy controls. We found instances in which the agency failed to fully remove sensitive information in some of its data collections, as described below.

Data identifiability and sensitivity: NIST recommends that agencies evaluate how easily data fields can be used to identify specific individuals, the sensitivity of individual data fields, and the sensitivity of groups of data fields. Federal internal control standards also call for appropriate documentation of decisions and control activities. According to CFPB staff, discussions among staff about removing sensitive data elements were informal and not documented. CFPB also has not specified in policy which data fields are considered sensitive or potentially identifying and should be removed or masked.
Our observation of files from two of the institutions providing deposit advance products data found that, although no individuals were directly identified in the files used for analysis, one file from one institution (with more than 1 million records) contained the nine-digit zip code for each record. According to CFPB staff, this information should not have been in that file. In the consumer credit data, CFPB procured the complete data package the credit reporting agency offered (a standard product), which included marketing data, without any personal identifiers. The marketing data contained demographic characteristics—including a religion variable that the credit reporting agency obtained or developed internally—on the sample of individuals whose credit record information CFPB obtained. When we observed an extract of the data with CFPB staff, we noted the existence of the religion variable. Not all CFPB staff realized the database contained that information. They said that CFPB considered this particular information sensitive and staff would remove it from their database. The Privacy Act generally prohibits federal agencies from maintaining records describing how any individual exercises rights guaranteed by the First Amendment. However, as noted previously, to be a record under the Privacy Act, information about an individual must contain the person’s name or other identifier, and the information CFPB acquired did not contain personal identifiers.

Minimization of personal information used in research: NIST recommends that agencies (1) develop policies and procedures that minimize the use of personal information for research and other purposes and (2) implement controls to protect personal information used for research and other purposes (NIST SP 800-53, Revision 4). However, CFPB staff said they have not developed written procedures for removing personal identifiers from supervisory data or implemented controls such as requiring reviews of anonymized files to ensure that all fields with information that directly identifies individuals had been removed. CFPB staff said they aim to collect as little personal information as possible and they expect staff to know which data elements are sensitive and should be removed. CFPB staff also said they had not performed formal assessments of the sensitivity of data elements because they have no plans to release such data publicly. However, some privacy experts have noted that re-identification of consumer financial data has become easier with the increase of online databases and the rise of “big data.” For example, a recent report found that anonymization strategies used in the past may not be robust enough in light of current and emerging technology and techniques. Some privacy experts have noted that only removing direct identifiers from a database generally does not sufficiently anonymize the data. However, two researchers noted that the risk of re-identifying data that have been properly anonymized likely is overstated because identifying large numbers of individuals in many anonymized datasets is difficult and takes specialized expertise. Nevertheless, written policies and procedures for assessing the sensitivity of data fields and removing sensitive data fields would allow CFPB to comprehensively assess its data collections to help ensure they are sufficiently anonymized and contain no unnecessary sensitive information. In turn, such formal assessments would enhance CFPB’s assurance that the privacy of the consumer financial data in its data collections has been adequately protected.
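A written anonymization procedure of the kind discussed above could be supported by a small, repeatable routine. The following sketch is only illustrative and does not represent CFPB's actual process; the field names, the sensitive-field list, the output file names, and the sample records are all hypothetical. It drops fields designated as sensitive, replaces nine-digit zip codes with randomly generated strings, and writes the zip-to-replacement match key to a separate file so it can be stored under restricted access, mirroring the practices described for the overdraft fees data.

import csv
import secrets

SENSITIVE_FIELDS_TO_DROP = {"religion"}   # hypothetical policy input
ZIP_FIELD = "zip9"                        # hypothetical nine-digit zip code field

def anonymize(rows):
    """Drop sensitive fields and replace nine-digit zips with random strings.

    Returns the anonymized rows and a match key mapping each original zip
    code to its replacement, to be stored separately under restricted access.
    """
    match_key = {}
    cleaned = []
    for row in rows:
        row = {k: v for k, v in row.items() if k not in SENSITIVE_FIELDS_TO_DROP}
        zip_value = row.get(ZIP_FIELD)
        if zip_value:
            if zip_value not in match_key:
                match_key[zip_value] = secrets.token_hex(8)
            row[ZIP_FIELD] = match_key[zip_value]
        cleaned.append(row)
    return cleaned, match_key

# Hypothetical records loosely resembling an overdraft-fee extract.
records = [
    {"account_id_hash": "a1", "zip9": "12345-6789", "fee_count": "3", "religion": "x"},
    {"account_id_hash": "a2", "zip9": "98765-4321", "fee_count": "0", "religion": "y"},
]

analysis_rows, key = anonymize(records)

# Write the analysis file and, separately, the restricted match key.
with open("analysis_extract.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=sorted(analysis_rows[0]))
    writer.writeheader()
    writer.writerows(analysis_rows)

with open("restricted_match_key.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["zip9", "replacement"])
    writer.writeheader()
    writer.writerows({"zip9": z, "replacement": r} for z, r in key.items())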
Under PRA, agencies generally must seek public comment on proposed collections of information, including their estimates of burden, ways to enhance the quality, utility, and clarity of the information collected, and ways to minimize the burden of the collection on respondents, before submitting the proposal to OMB. According to CFPB staff, at the time they began these various large-scale data collections, CFPB staff considered their contents and determined that, for two reasons, they did not need to seek formal OMB approval under PRA for these collections. First, CFPB procured much of the data from private companies that serve as information resellers. Such data are commercially available products and did not constitute information collections under PRA, according to CFPB staff. Second, when CFPB collected the information itself, staff said the agency did not ask exactly the same questions of more than nine financial institutions, which would have necessitated OMB approval.

But CFPB does not have written procedures for consistently and appropriately documenting PRA determinations (both internally and from OMB) for proposed collections. Instead, CFPB staff told us that initially colleagues with PRA expertise (PRA team) provided internal training on PRA, including the need to consult with the team and OMB staff about determinations. After the Data Intake Group was established in 2013, the PRA team began participating in the group’s meetings to ensure discussions about PRA applicability and the need for OMB consultation took place. CFPB staff said that under current practice, staff from the PRA team who participate in the Data Intake Group complete a section of a worksheet to specify the number of respondents in the collection, the share of the market, whether OMB approval is required, and the status of OMB approval. E-mails that document the Data Intake Group decisions that we reviewed include information on whether the group determined that PRA applied, but do not specify whether OMB was consulted or the basis for determinations by the PRA team.

In the specific case of CFPB’s credit card collection and its information-sharing agreement with OCC, CFPB did not have appropriate documentation of its consultations with OMB about PRA applicability. In 2012, CFPB sought to collect monthly account data from nine credit card issuers while also obtaining (through an information-sharing agreement) nearly identical data OCC had been collecting since 2009 from nine different issuers. According to internal June 2012 e-mails that we reviewed, CFPB staff stated that by collecting information from nine of more than 3,000 issuers, neither CFPB nor OCC was triggering PRA requirements under the “substantial majority of an industry” standard in OMB regulations (5 C.F.R. § 1320.3(c)(4)(ii)). These e-mails also indicated that CFPB staff had discussed with OCC whether PRA approval from OMB was needed and learned that OCC had not sought OMB approval for its ongoing collection. In the same e-mails from June 2012, the CFPB PRA team reported that OMB had said (1) CFPB could collect information from nine banks, (2) OCC could continue to collect information from another nine banks, and (3) OCC could share its data with CFPB without going through the PRA process. CFPB staff told us that they did not obtain documentation from OMB staff because the advice had been received by telephone. However, OMB staff with whom we spoke said that they were not aware of either CFPB’s or OCC’s credit card collections and did not recall a discussion with CFPB about its collection.
OMB staff also could not find documentation of such a discussion, although they acknowledged that informal telephone consultations generally are not documented. Furthermore, OMB staff told us that an information-sharing agreement that could result in agencies bypassing the requirements of PRA—in particular, the public notice and comment provisions—by collecting data from more than nine entities would warrant closer examination to ensure PRA compliance. They added that OMB would want to assess whether these collections met the “substantial majority of an industry” standard, which also would necessitate the need for a formal PRA review. Federal internal control standards call for appropriate documentation of decisions and control activities. In the initial period of agency operations, CFPB staff said they emphasized complying with the requirements over documenting decisions and establishing written procedures. However, without written procedures for consistently and appropriately documenting PRA-related decisions and discussions with OMB, including about the credit card collection and information-sharing agreement, CFPB lacks reasonable assurance that its collections are conducted in compliance with requirements intended to help avoid inefficiencies and minimize burden. In our review of OCC’s collections of credit card and mortgage data, we found information indicating that OCC was now obtaining data from more than nine entities in these collections, which would require OMB approval. OCC began collecting credit card data from nine institutions in 2009, and OCC staff told us that they had determined that they did not need an OMB PRA review because they were obtaining data from fewer than 10 institutions. In addition, OCC reported that the nine institutions represented less than 1 percent of all national banks and federal savings associations, which therefore did not meet the “majority of the industry” provision in OMB’s regulations. However, in the information-sharing agreement with CFPB, OCC listed nine entities as “reporters” (reporting institution) for the credit card collection, but also indicated several instances in which two or more national banks that were part of the same holding company were combined into a single “reporter.” After our analysis showed that OCC had data from more than nine entities, OCC staff told us they reviewed the information requests they had sent to financial institutions and confirmed the requests were sent to institutions beyond the original nine. They said that as of July 2014, 16 entities were providing credit card data to OCC. As a result, they planned to submit the collection to OMB for approval under PRA. In addition, after reviewing OCC’s information-sharing agreement with the Federal Reserve for its first-lien mortgage and home equity data collections, we found that OCC was collecting data from 61 entities for first-lien mortgage data, and 64 entities for home equity data. On September 5, 2014, OCC published notices in the Federal Register describing these collections in advance of submitting them to OMB for approval. Until completing steps to obtain OMB’s approval for each of these three data collections, OCC will lack reasonable assurance that the collections are in compliance with statutory requirements intended to minimize burden on the financial institutions and maximize the practical utility of the information collected. 
CFPB has taken steps to comply with other requirements and recommended controls aimed at protecting the privacy of personal information that include publishing notices about information collections and adopting policies. Beyond the specific Dodd-Frank requirements and general PRA requirements, CFPB also is subject to the Privacy Act and the privacy provisions of the E-Government Act, which require all federal agencies to conduct certain steps when collecting data that includes personal or direct identifiers of individuals. CFPB follows NIST guidance in implementing privacy controls, which are designed to facilitate compliance with these statutes.

CFPB has published notices as required under the Privacy Act, which is intended to regulate agencies’ collection, maintenance, use, and dissemination of information about individuals. Under the Privacy Act, federal agencies must publish system of records notices (SORN) in the Federal Register if they plan to maintain, collect, use, or disseminate records about individuals that are retrieved from a system of records by the name of an individual or other personal identifier. SORNs also are required for systems of records operated by contractors on behalf of an agency. According to CFPB staff, most of CFPB’s consumer financial data collections are not a system of records and do not require the issuance of a SORN because the data are not typically retrieved by personal identifiers, which is necessary for the information to be covered by the Privacy Act. However, CFPB issued three SORNs relevant to our review that provide public notice covering other information the agency obtains during the course of its operations and which its staff may at times retrieve using personal identifiers. Two of the SORNs (both published in August 2011) are for its supervision databases, which cover data collected from and about the depository and nondepository institutions CFPB supervises. The third SORN (published in November 2012) covers market and consumer research records. SORNs must identify the type of data collected, the types of individuals about whom information is collected, and procedures that individuals can use to review and correct personal information. Consistent with the Privacy Act, the SORNs included a general description of CFPB’s authority and purpose for collecting and using personal information and how individuals could access and correct information maintained about them. The inclusion of these elements is also consistent with NIST privacy controls. CFPB staff told us that the SORNs were published before CFPB collected information maintained in these systems.

While most CFPB data collections likely do not constitute systems of records, agency staff said other activities involving personal information, such as matching across databases, also were covered by the SORNs they had issued. For example, CFPB staff said that a matching process conducted by a third party on behalf of CFPB on the credit card data uses personal identifiers obtained from supervised depository institutions to retrieve records in the third party’s system. To the extent this matching process creates a temporary system of records by virtue of retrieving the records by personal identifiers, CFPB staff told us the need for public notice is met by the SORN issued for the depository institution supervision database.
According to CFPB staff, the third party immediately removes all identifiers once the records have been matched and then transmits the resulting database to CFPB’s contractor, which transmits the data to CFPB. Staff said that neither CFPB nor its contractor retrieves records in the database by personal identifier and therefore the database CFPB maintains does not constitute a system of records under the Privacy Act.

The E-Government Act has provisions that require federal agencies to review data to help ensure sufficient protections for the privacy of personal information held electronically. Federal agencies subject to this act must conduct privacy impact assessments (PIA) that analyze how personal information is collected, stored, shared, and managed in their information systems. Agencies must make PIAs public to the extent practicable, although this requirement can be modified or waived for security reasons or to protect sensitive or private information. To comply with these requirements, in June 2013 CFPB prepared a PIA for its “Cloud 1” general support system (GSS), the information system in which it maintains its consumer financial data collections. CFPB completed the PIA (which has not been made public) as part of the security assessment and authorization process for the GSS. However, we found that the PIA discussed consumer financial data very generally and contained few details about the privacy risks raised by collections of consumer financial data. After completing the GSS PIA, CFPB staff said that they subsequently changed the focus of PIAs from assessments of information systems to assessments of categories of data collections. They said this change would provide clearer information to the public about privacy risks for specific data collections. As part of this new privacy analysis, they published a PIA for market analysis of administrative data under research authorities in December 2013 and another PIA covering the use of supervisory data for market research in July 2014. Both PIAs note that re-identification of individuals is a risk posed by the data collections. However, in both PIAs CFPB states that its staff will not attempt to re-identify individuals in databases that are anonymized. In addition, CFPB is contractually prohibited from attempting to re-identify individuals in at least one data collection procured from a contractor. Table 4 summarizes which of CFPB’s data collections are covered by PIAs.

CFPB has implemented certain controls intended to ensure the proper treatment of consumer financial data obtained, but CFPB has not yet developed documentation or implemented plans, procedures, programs, and training as specified in several controls. NIST’s guidance includes controls for protecting privacy and ensuring the proper handling of personal information. CFPB has taken actions to address many of the NIST-recommended controls, including the following:

Data quality and integrity: CFPB included data quality provisions in its contract with the aggregator for the credit card data collection. These provisions outlined quality assurance steps for the aggregator to take to help ensure the accuracy and completeness of the information the credit card issuers provided. In addition, CFPB has issued information quality guidelines for information it publishes.
These steps are consistent with control steps calling for an organization to check for (and correct as necessary) inaccurate information and issue guidelines maximizing the quality, utility, objectivity, and integrity of disseminated information. Security: CFPB has adopted a privacy incident response plan and standard operating procedures for such incidents. The privacy incident response plan included most of the components recommended by NIST: the establishment of a privacy incident response team; a process to determine whether notice to oversight organizations or affected individuals is appropriate; a process to assess the privacy risk posed by the incident; and internal procedures to ensure prompt reporting by employees and contractors of any privacy incident to appropriate officials. The CFPB privacy team has created a log for privacy incidents and completed after-action reports that detail what was reported, what the investigation found, and what steps were taken. These steps are consistent with the control for an organization to develop and implement a privacy incident response plan and provide an organized and effective response to privacy incidents in accordance with the plan. Use limitation: CFPB has entered into MOUs with federal and state agencies, which describe the purposes for which personal information (and other nonpublic information) may be used by receiving parties. CFPB also developed a policy for staff that governs when confidential information may be shared with external parties. These steps are consistent with control steps for information sharing with third parties, which call for an organization to enter into MOUs that specify the personal information covered and enumerate the purposes for which it may be used. Accountability, audit, and risk management: CFPB has outlined privacy roles, responsibilities (which include safeguarding nonpublic, business-sensitive, confidential, or personal information or data), and access requirements for all CFPB contractors and service providers in various policy documents. General privacy and confidentiality clauses were included in several contracts we reviewed. CFPB subsequently prepared guidance for its staff on privacy-specific clauses to be included in data collection and analysis contracts. In addition, members of the Data Intake Group review contracts to help ensure that contracts for collections that include personal information are flagged to include appropriate privacy clauses. These steps are consistent with the control for an organization to establish privacy roles, responsibilities, and access requirements for contractors and service providers, and to include privacy requirements in contracts and other acquisition-related documents. However, CFPB does not yet have (1) a comprehensive privacy plan incorporating its various privacy policies and guidance; (2) a documented privacy risk management process; (3) a comprehensive, documented program for monitoring and auditing privacy controls or a regularly scheduled independent review of its program; and (4) role-based privacy training, as specified in other controls relating to accountability, audit, and risk management. Comprehensive privacy plan: NIST’s “governance and privacy program” control calls for agencies to develop a strategic organizational plan for implementing applicable privacy controls, policies, and procedures. 
CFPB has developed a number of privacy policies and guidance documents, including a high-level privacy policy, a handbook for sensitive information, a PIA policy (which takes effect in December 2014), a PIA template, and guidance for preparing SORNs. However, CFPB privacy staff stated that they had not yet brought these documents together to develop a comprehensive plan that covers all of CFPB's privacy operations. Documented privacy risk-management process: NIST's "privacy impact and risk assessment" control calls for agencies to document and implement a process for privacy risk management that assesses risks to individuals resulting from collecting, sharing, storing, transmitting, using, and disposing of personal information. Agencies also should conduct PIAs for information systems, programs, or other activities that pose a privacy risk in accordance with applicable law, OMB policy, or organizational policies or procedures. Supplemental guidance for this control states that tools and processes for managing risk "include, but are not limited to, the conduct of PIAs." We have previously reported on the importance of assessing privacy risks to help program managers and system owners determine appropriate policies and the techniques needed to implement them. Currently, CFPB follows an informal process for managing privacy risks that does not fully document the risks involved with data collections or the methods CFPB plans to use to address those risks. CFPB staff stated that the agency has centralized all privacy activities with the Chief Privacy Officer and a team of CFPB staff (privacy team), who perform all assessments of privacy risks, primarily through the PIA process. In June 2014, CFPB adopted a formal PIA policy that will take effect in December 2014 and that places responsibility with the Chief Privacy Officer for determining whether PIAs are required. However, the policy does not specify the procedures to be used or the documentation required for making such determinations. CFPB staff said that privacy team representatives to the Data Intake Group currently use a worksheet to assess and document whether proposed collections require a PIA and whether an existing PIA would cover the collections. The privacy team follows CFPB's PIA template to create new PIAs. However, CFPB staff said there are no written procedures they follow to guide their assessments, although such procedures are being developed. CFPB staff were not always able to clearly identify whether a specific data collection required a PIA and, if so, which one, because these prior determinations had not been documented. Furthermore, staff said they do not document the discussions or analyses that lead to the conclusions published in their PIAs. Comprehensive, documented program for monitoring and auditing privacy controls: According to NIST's "privacy monitoring and auditing" control, agencies are to monitor and audit privacy controls and internal privacy policy to ensure effective implementation. NIST guidance calls for regular assessments and mentions external audits as a means to obtain these assessments. CFPB is required to have an annual independent audit of its operations and budget. CFPB identified the areas that the auditor reviewed and selected its privacy programs, policies, and processes as one of the areas for review in 2012. In its 2012 report, CFPB's independent auditor identified the lack of a formal privacy compliance program as an opportunity to improve performance.
CFPB staff said they had addressed this finding, noting their policy for annually reviewing SORNs and PIAs and a worksheet CFPB prepared for an information technology application that lists the NIST privacy controls and the steps CFPB had taken to implement them. CFPB staff said they planned to complete similar worksheets for each new application and system. However, staff do not plan to prepare such worksheets for applications and systems already in place, and have not established procedures for reviewing and updating the worksheets. Although CFPB has adopted a checklist for reviewing SORNs, staff said they do not have a similar checklist or documentation requirements for reviewing PIAs. In addition, CFPB staff said they had not had an external audit of their privacy practices other than the 2012 audit. Role-based privacy training: NIST's "privacy awareness and training" control calls for agencies to administer targeted, role-based privacy training (in addition to basic privacy training that all staff receive) for personnel having responsibility for personal information or for activities that involve personal information. CFPB has developed a training process for privacy awareness, and CFPB employees and contractors receive privacy training as part of annual security awareness training. CFPB also trains staff on the treatment of confidential supervisory information, which may contain consumers' personal information. CFPB staff said they adopted organization-wide privacy awareness training from the Department of the Treasury, but were still developing role-based privacy training and did not yet have an estimated date for completion. CFPB staff said that, in setting up the new agency, they had adopted policies and practices that would address privacy issues. However, the agency has not yet comprehensively documented its policies and procedures or completed development of its role-based privacy training. Because it has not fully implemented controls that include written procedures, comprehensive documentation, a regular review of its privacy practices, and targeted training, CFPB is hampered in its ability to identify and monitor privacy risks and ensure the proper handling of personal information. CFPB has taken actions to protect the consumer financial data it has collected from unauthorized disclosure, but some documentation lacked key information and its evaluation of how a service provider protects data was not comprehensive. CFPB has established an information security program, implemented controls to protect access to sensitive data, and assessed the risks of its consumer financial data collections using a risk-management framework that adhered to federal information security guidance. However, CFPB's documentation of its risk assessments and remedial action plans to correct identified weaknesses for the information system and related components that maintain and process consumer financial data lacked key elements. Further, the initial evaluation CFPB had completed of its service provider was not sufficiently comprehensive. FISMA requires agencies to develop, document, and implement an information security program. In a 2013 audit of CFPB's information security program, the Office of Inspector General (OIG) for the Federal Reserve and CFPB determined that CFPB had taken multiple steps, consistent with FISMA requirements, to develop, document, and implement such a program.
OMB and the Department of Homeland Security (DHS) have instructed agencies to report annually on a variety of metrics, which are used to gauge implementation of information security programs. The OIG reported that CFPB's overall information security program in 2013 was generally consistent with requirements in 6 of 11 information security areas outlined in DHS reporting instructions: (1) identity and access management; (2) incident response and reporting; (3) risk management; (4) plan of action and milestones; (5) remote access management; and (6) contractor systems. For a seventh area—security capital planning—the OIG noted that CFPB has been taking sufficient actions to establish a security capital planning program in accordance with DHS requirements. The OIG identified several opportunities to improve CFPB's information security program through automation, centralization, and other enhancements. Specifically, the agency had not defined metrics to facilitate decision-making and improve performance of its information security continuous monitoring program; implemented tools to more comprehensively assess security controls and system configurations; developed and implemented an organization-wide configuration management plan and consistent process for patch management; designed, developed, and implemented a role-based training program for individuals with significant security responsibilities; or fully implemented a capability to centrally track, analyze, and correlate audit log and incident information. In addition, CFPB's contingency planning for a selected system needed improvement. The OIG made four recommendations to address these issues, with which CFPB concurred. The OIG noted that CFPB's planned actions were responsive to the recommendations, but as of July 2014, the recommendations remained open pending the OIG's review during the 2014 FISMA audit. CFPB had implemented several logical access controls for the component of the information system that maintains the consumer financial data collections we reviewed and was scanning for problems or vulnerabilities. Agencies can protect the resources that support their critical operations from unauthorized access by designing and implementing controls intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. Inadequate access controls diminish the reliability of computerized information and increase the risk of unauthorized disclosure, modification, and destruction of sensitive information and disruption of service. As part of assessing the controls that CFPB uses to protect the consumer financial data it collects, we reviewed the logical access controls the agency implemented on the primary servers that staff use to process and store these data. As one of the ways CFPB seeks to mitigate the risk of unauthorized re-identification of individuals, the agency has controlled which staff have access to data collections that directly identify individuals. Based on interviews with CFPB staff who manage these servers and on observations and reviews of information technology security controls and settings, we determined that CFPB had installed and configured automated tools to perform regular configuration management and periodic security scans of the servers supporting the component. CFPB also leveraged its existing centralized account management system to manage who had access to these servers.
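A minimal sketch of the kind of approved-list check a centralized account management system might perform is shown below. The user identifiers, approved list, and log format are hypothetical assumptions for illustration and do not represent CFPB's actual configuration.

```python
# Illustrative sketch only: user identifiers, the approved list, and the log
# format are hypothetical and do not represent CFPB's account management system.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("access-control")

APPROVED_ANALYSTS = {"analyst01", "analyst07"}  # hypothetical approved list

def authorize(user_id: str, dataset: str) -> bool:
    """Grant access to a restricted dataset only to users on the approved list,
    and record every decision so it can be reviewed during audits."""
    allowed = user_id in APPROVED_ANALYSTS
    log.info("%s access=%s user=%s dataset=%s",
             datetime.now(timezone.utc).isoformat(),
             "granted" if allowed else "denied", user_id, dataset)
    return allowed

# Example: a request from a user who is not on the approved list is refused and logged.
authorize("analyst99", "credit_card_collection")
```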
The agency also had implemented controls intended to prevent staff members who are not on the approved list from accessing consumer financial data. According to CFPB, there have been no security incidents resulting in unauthorized disclosures of information in the information system component that maintains consumer financial data. In July 2014, the Federal Reserve and CFPB OIG issued a report on a review of the cloud-based system in which CFPB maintains its consumer financial data collections. The OIG made recommendations to address weaknesses they identified in work conducted in fall 2013, including recommending improvements to CFPB's procedures for system and information integrity and configuration management. OIG staff told us that CFPB has implemented corrections in these areas. The OIG report also recommended that CFPB take actions related to contingency planning and incident response; CFPB actions to address these were still underway. Although the collection of consumer financial data can create concerns over improper use or unauthorized disclosure, CFPB has taken steps to assess the risks posed by these data and implemented controls or taken other actions to address the risks. NIST has published a risk-management framework, which recommends that agencies follow a six-step process involving (1) security categorization; (2) security control selection; (3) security control implementation; (4) security control assessment; (5) information system authorization; and (6) security control monitoring. The framework integrates information security and risk-management activities into the system development life cycle. CFPB has adopted this framework as the basis for the process it uses to assess the risks of the consumer financial data and other data it collects and generally applied the framework to the information system and related components that maintain, process, and store the consumer financial data collections we reviewed. To address the steps of the risk-management framework, CFPB generally completed the information security documentation required by FISMA (such as risk assessments, system security plans, and remedial action plans) or outlined in NIST guidance (including security assessment plans and reports). Table 5 provides additional information on CFPB's actions to implement the framework for the information system and its components. Although CFPB generally completed the information security documentation required by FISMA or outlined in NIST guidance for implementing the risk-management framework, several key elements were missing from various documents. NIST has issued guidance for conducting risk assessments that outlines the elements that should be included in documentation of risk assessment results. In addition, CFPB adopted a risk-management process and guidance for preparing remedial action plans that outline its internal documentation requirements. Conducting and documenting risk assessments help ensure agencies fully assess the risks of data they maintain and apply appropriate protections, but CFPB's risk assessment documentation did not include all the elements NIST guidance recommends for communicating results of risk assessments.
Specifically, the three risk assessment and recommendation forms we reviewed did not include the following essential elements identified by NIST: (1) the assumptions and constraints under which the risk assessment was conducted; (2) information sources to be used in the assessment; (3) the risk model and analytical approach used in the risk assessment; (4) threat sources; (5) potential threat events; (6) vulnerabilities and predisposing conditions that affect the likelihood that potential threat events will result in adverse impacts; (7) the likelihood that potential threat events will result in adverse impacts; (8) the adverse impacts from potential threat events; and (9) the risk to the organization from threat events. The results of a fourth risk assessment were documented in a security assessment report. Although the security assessment report identified the assumptions and constraints under which the risk assessment was conducted and the risk model that was used (elements 1 and part of 3 above), it did not include the other elements listed above. Enhancing its documentation of risk assessment results to be more comprehensive and consistent would help CFPB demonstrate that it has effectively assessed risk and identified and considered all threats and vulnerabilities to its operations. Remedial action plans can assist agencies in tracking and ensuring that information security weaknesses are addressed in a timely way. However, CFPB’s remedial action plans did not always include all the weaknesses identified. CFPB policy states that weaknesses identified during internal and external system reviews should be included in remedial action plans. We compared the CFPB system documentation and testing results with the remedial action plans for the information system and related components that maintain consumer financial data and found instances in which not all security weaknesses identified were included in the plans. For example, the system documentation for its information system identified 20 controls we reviewed that were not fully implemented—16 were listed as partially implemented, 3 were listed as planned for implementation, and 1 was listed as a new control. However, the 3 planned controls and 1 new control were not recorded in the system’s remedial action plan. In addition, CFPB had not remediated all weaknesses in its remedial action plans by their scheduled completion date in accordance with CFPB guidance. CFPB guidance included required completion dates for remediating all high-risk weaknesses (within 30 days), all medium-risk weaknesses (within 90 days), and all low-risk weaknesses (by the scheduled completion date documented in the remedial action plan). In addition, CFPB guidance states that staff should assign scheduled completion dates, which may extend beyond the required completion dates, based on realistic timelines given agency priorities and available resources. Of the 16 partially implemented controls we reviewed that were recorded as weaknesses in the remedial action plan, CFPB had not completed remedial steps for 9 weaknesses by the scheduled completion date of September 2013. Further, the scheduled completion dates had not been updated to reflect current plans for remediation. CFPB’s testing of one of the system components that maintains and processes consumer financial data identified three high-risk weaknesses that were scheduled to be remediated by October 2013. 
One weakness was the aggregate risk posed by numerous medium- and low-risk findings identified during testing and automated scans. The remedial action plan required CFPB to analyze each finding to determine its risk impact and to prioritize the findings for remediation or mitigation. According to the Chief Information Security Officer, CFPB has been making steady progress towards remediating the findings that make up this high-risk weakness; however, as of June 2014, CFPB had not addressed all these findings or updated the scheduled completion date in the remedial action plan to reflect the current timeline for completing these actions. Ensuring that its remedial action plans are comprehensive and updated to reflect current time frames for remediating weaknesses would enhance CFPB's ability to identify, assess, prioritize, and monitor the progress of corrective efforts for security weaknesses. As part of their FISMA information security programs, agencies are required to develop a risk-management process that helps ensure that information and information systems provided or managed by another agency, contractor, or other source are protected with appropriate information security controls. CFPB has developed a risk-management process that covers all agreements and contracts between CFPB and service providers that process information on its behalf. One step in the process is conducting assessments of these service providers based on the types of applications, tools, or services provided. Once the appropriate assessment is conducted, CFPB generates a risk assessment and recommendation form, including appropriate risk mitigation activities. The form is then submitted to the appropriate approval authorities. CFPB also tracks the implementation of its recommendations by creating risk mitigation items and activities that are monitored through its remedial action plans. CFPB has taken some actions to assess whether the service provider that processes consumer financial data on its behalf is implementing adequate protections. We reviewed a 2012 report by the service provider's independent auditor about certain controls the provider had in place. The independent auditor found the controls were suitably designed to provide reasonable assurance that control objectives would be achieved and that the tested controls were operating effectively throughout the review period. Officials from the provider told us that other federal agencies for which they process data also have reviewed their information security program. In addition, they said they had never had a breach of the environment in which they process and maintain CFPB's data. CFPB also conducted an initial review of this service provider to assess the risks associated with utilizing the provider's systems and services, and documented the findings and recommendations, which were reviewed and approved in March 2014. CFPB's contract with the service provider includes specific information security requirements and states that CFPB shall conduct annual reviews to help ensure security requirements in the contract are implemented, enforced, effective, and operating as intended. However, CFPB did not examine how the provider had implemented these requirements as part of its initial review of this provider.
Without effectively reviewing its service provider and following its own process of tracking risk mitigation items and activities, CFPB lacks assurance that it is fully safeguarding its information resources and making fully informed decisions related to managing risk and implementing risk mitigation controls. To better detect risks in consumer financial markets and improve federal oversight of consumer financial protection laws, CFPB has collected consumer financial data on products ranging from credit card accounts to payday loans. CFPB has used the consumer financial data it collects to inform required rulemakings, develop examination strategies, and issue congressionally mandated reports. Recognizing the sensitivity of some of the consumer financial data it has collected, CFPB has taken steps to protect and secure these data collections, including adopting high-level privacy and security policies and processes. For example, the agency created a data intake process that brings together staff with relevant expertise to consider the statutory, privacy, and information security implications of proposals to collect consumer financial data. Staff also described a process for anonymizing large-scale data collections that directly identify individuals. In addition, CFPB recently developed overarching policies on information governance and privacy impact assessments. However, CFPB staff said they were primarily focused on taking necessary actions to effectively carry out their mission during these early years of agency operations and, as a result, a number of policies and processes were not fully documented or implemented, as required by federal internal control guidelines or outlined in NIST guidance. In particular: Lack of written procedures: CFPB lacks written procedures for its data intake process, including for evaluating whether statutory restrictions related to collecting personally identifiable financial information apply to large-scale data collections, documenting determinations of whether these collections are subject to PRA, and assessing and managing privacy risks of these collections. CFPB has not established written procedures for anonymizing data collections to help ensure staff take the appropriate steps each time or for monitoring and auditing privacy controls. In addition, CFPB did not consistently or comprehensively document its information security risk-assessment results. Developing written procedures with consistent, comprehensive documentation requirements would help provide CFPB with reasonable assurance that its collections comply with statutory requirements and that it will not place consumers' privacy at risk. Incomplete implementation of privacy and security steps: CFPB has not yet developed a comprehensive privacy plan that brings together existing policies and guidance. It has not established a regular schedule of periodic reviews of its privacy program or completed development of a role-based privacy training program. CFPB also did not capture in its remedial action plans all the information security weaknesses it identified or update the plans to include current planned dates for remediation based on priorities and available resources. In addition, CFPB did not comprehensively evaluate the service provider that processes consumer financial data on its behalf for compliance with contract provisions.
Taking these actions will help strengthen CFPB's privacy program and enhance its ability to identify, track, and mitigate security risks to consumer financial data stored on its systems. Insufficient efforts on PRA compliance: CFPB did not sufficiently document its consultation with OMB about the information-sharing agreement with OCC relating to the agencies' separate credit card collections and the implications under PRA, and OMB staff told us it warranted further review. OCC also had not sought OMB approval for its credit card and mortgage data collections even though it now obtains data from more than nine entities for each of these collections. Obtaining further guidance from OMB on whether the information-sharing agreement requires CFPB and OCC to follow procedures outlined in PRA and getting OMB approval for OCC's credit card and mortgage data collections would help both agencies ensure they fully comply with the law, do not unduly burden financial institutions, and maximize the practical utility of the information collected.

To help improve CFPB's efforts to protect and secure collected consumer financial data, we are making the following 11 recommendations to the Director of CFPB. To help ensure consistent implementation of its current processes and practices, the Director of CFPB should establish or enhance written procedures for:
1. the data intake process, including reviews of proposed data collections for compliance with applicable legal requirements and restrictions and documentation requirements for consultations with OMB about PRA applicability;
2. anonymizing data, including how staff should assess data sensitivity, which steps to take to anonymize data fields, and responsibilities for reviews of anonymized data collections;
3. assessing and managing privacy risks, including documentation requirements to support statements about potential privacy risks in PIAs and for determinations that PIAs are not required;
4. monitoring and auditing privacy controls; and
5. documenting information security risk-assessment results consistently and comprehensively to include all NIST-recommended elements.

To enhance the protection of collected consumer financial data, the Director of CFPB should fully implement the following five privacy and security steps:
1. develop a comprehensive written privacy plan that brings together existing privacy policies and guidance;
2. obtain periodic reviews of the privacy program's practices as part of the independent audit of CFPB's operations and budget;
3. develop and implement role-based privacy training;
4. update remedial action plans for the information system that maintains consumer financial data and related components to include all identified weaknesses and realistic scheduled completion dates that reflect current priorities and available resources; and
5. include an evaluation of compliance with contract provisions relating to information security in CFPB's review of the service provider that processes consumer financial data for CFPB.

To provide greater assurance of compliance with PRA, the Director of CFPB should also consult further with OMB about whether PRA requirements apply to its credit card data collection and information-sharing agreement with OCC, and document the result of this consultation. We are also making a recommendation to the Comptroller of the Currency.
To ensure compliance with federal law, the Comptroller of the Currency should seek timely approval from OMB under PRA for OCC’s credit card and mortgage collections, including the information-sharing agreement with CFPB for credit card data. We provided a draft of this report to CFPB, CFTC, the Consumer Product Safety Commission, FDIC, the Federal Reserve, FHFA, FTC, NCUA, OCC, OMB, SEC, and Treasury for review and comment. CFPB, OCC, and NCUA provided written comments that we reprinted in appendixes III, IV, and V, respectively. CFPB, FDIC, the Federal Reserve, FTC, OCC, and OMB provided technical comments that we incorporated, as appropriate. CFTC, the Consumer Product Safety Commission, FHFA, SEC, and Treasury did not provide comments. In written comments, CFPB concurred with our recommendations and noted that the report provides important information about the data CFPB uses to meet its statutory responsibilities and ways that CFPB can further enhance privacy and security safeguards. CFPB further noted that other federal prudential regulators collect similar amounts of consumer financial data and discussed ways in which CFPB has been working to reduce the burden or costs to financial institutions providing the data. CFPB outlined the actions the agency was taking or planned to take in response to our recommendations. For example, CFPB agreed to adopt formal procedures for its Data Intake Group for documenting its practices to ensure compliance with applicable requirements and consultations with OMB about PRA applicability. CFPB also noted it will develop written procedures for the de-identification of data containing personal identifiers. CFPB also agreed to review existing procedures related to its risk- assessment documentation for information security controls. Furthermore, CFPB said it has been developing a comprehensive written privacy plan, which will discuss how the agency will assess and manage privacy risks and monitor and audit privacy controls. CFPB also plans to develop additional role-based privacy training for its staff, review its remedial action plans to ensure appropriate details are documented and remediated on schedule, and review its information security risk- management process to further refine oversight of service providers. Finally, CFPB agreed to consult with OMB again about its credit card collection. In written comments, OCC also agreed with our recommendation to seek PRA approval from OMB for its credit card and mortgage collections. OCC noted that during the course of our review, OCC officials found that they were collecting data from more than 10 banks, which would require OMB approval under PRA. OCC noted that on September 5, 2014, it published a notice in the Federal Register about these collections and indicated that it planned to submit packages to OMB seeking the appropriate approval. In its written comments, NCUA noted the importance of safeguarding consumer financial data and the role of CFPB in consumer protection. We are sending copies of this report to the appropriate congressional committees, the Director of CFPB, the Chairman of CFTC, the Chairman of the U.S. Consumer Product Safety Commission, the Chairman of FDIC, the Chair of the Federal Reserve, the Director of FHFA, the Chairwoman of FTC, the Chairman of NCUA, the Comptroller of the Currency, the Director of OMB, the Chair of SEC, and the Secretary of the Treasury. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. The objectives of this report were to review (1) the Consumer Financial Protection Bureau’s (CFPB) consumer financial data collection efforts, including the authorities, scope, purposes, and uses of these collections, and the ways in which CFPB has collaborated with other federal financial regulators as part of these collections; (2) the extent to which CFPB complied with statutory restrictions on its consumer financial data collection authorities and federal privacy requirements; and (3) the extent to which CFPB has assessed the risks of these collections and applied appropriate information security protections over these data. To describe the authorities, scope, purposes, and uses of CFPB’s data collections, we reviewed relevant portions of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), CFPB studies, regulations, and contracts with data aggregators. Our review focused on large-scale consumer financial data collections CFPB obtained under supervisory or market monitoring authorities, as well as voluntary requests, and not data collected related to consumer complaints or for investigative or enforcement purposes. We studied 12 large-scale CFPB data collections that included consumer financial data the agency was collecting or had collected during the period from January 1, 2012 through July 1, 2014. CFPB research staff identified and confirmed that these 12 data collections represented the extent of large-scale data collections from multiple institutions being studied by CFPB staff during this time period. We excluded any planned data collections or collections under development. We focused our analysis of CFPB’s data collections, studies, and examination materials on consumer financial data collections that occurred since January 2012, as CFPB had limited data collections before that time. We physically reviewed several of CFPB’s large-scale data collections on-site: consumer credit report information, credit cards, deposit advance products, overdraft fees, storefront payday loans, and private student loans. For the other large-scale collections we reviewed the data field names and descriptions: Mortgages (Corelogic contract), private-label mortgages (Blackbox Logic contract), automobile sales, online payday loans, credit scores, and arbitration case records. We did not assess the appropriateness of any individual fields for which CFPB is collecting data. We interviewed CFPB staff from the Research, Markets, and Regulation and Supervision, Fair Lending, and Enforcement teams as well as CFPB legal staff. To describe the data collections from other prudential regulators as well as any overlap or duplication of efforts, we reviewed relevant agency publications and interviewed officials and staff from the Board of Governors of the Federal Reserve System (Federal Reserve), Office of the Comptroller of the Currency (OCC), Federal Deposit Insurance Corporation, and the National Credit Union Administration. 
We also reviewed the Federal Reserve's public notifications about its collections for conducting bank holding company stress tests (Y-14 collections), which included notices in the Federal Register about its credit card and mortgage data collections, and OCC's documents related to its credit card and mortgage data collections, including its contract with a data aggregator. We also reviewed the large-scale consumer financial data collections of OCC, FDIC, and the Federal Reserve to determine whether they contained information that directly identifies individuals. We did not assess the privacy or information security controls of these collections for this report. We also discussed the extent to which other agencies with financial markets or consumer regulatory responsibilities also collect consumer information, including with staff from the Commodity Futures Trading Commission, the Consumer Product Safety Commission, the Federal Housing Finance Agency (FHFA), the Federal Trade Commission, and the Securities and Exchange Commission. To describe the ways in which CFPB coordinates with other regulators on data collections, we reviewed memorandums of understanding and information-sharing agreements CFPB has with other federal regulators and interviewed regulatory staff. We focused our review on CFPB's information-sharing agreements for large-scale data collections: one with FHFA for the National Mortgage Database and another with OCC for the credit card database, and interviewed relevant staff at each of these agencies about the agreements and developments of the data sharing. To describe what is known about the costs and benefits of CFPB's data collections, we reviewed CFPB contracts, reports, rulemakings, testimonies, and responses to congressional questions. To learn more about financial institutions' experiences providing consumer financial data and how those experiences compared with providing data to other prudential regulators, we interviewed representatives from nine financial institutions. We randomly selected and interviewed eight institutions that are supervised by CFPB and that provide credit card account data to either CFPB or OCC on an ongoing basis. We also randomly selected and interviewed one additional financial institution that is supervised by CFPB but does not provide credit card data on an ongoing basis. In addition to the interviews with representatives of these nine institutions, we reviewed examination workpapers from 10 randomly selected institutions. We reviewed information requests and supervisory letters from 46 examinations at these institutions that were completed in 2012 and 2013. We also reviewed reports and interviewed staff from organizations that analyzed privacy issues, monitored consumer financial topics, and served as industry associations for financial institutions. To determine the extent to which CFPB complied with federal data collection requirements and privacy protections, we reviewed CFPB privacy policies and information-sharing protocols, training requirements, and public and nonpublic notices about collections involving personal or direct identifiers. We compared CFPB's policies and practices against Dodd-Frank Act requirements, Office of Management and Budget (OMB) guidance, and recommendations of the National Institute of Standards and Technology (NIST). We reviewed publications by the White House, Federal Trade Commission, and other policy research organizations about the risks for re-identification associated with collecting anonymized consumer data.
We interviewed CFPB’s Chief Information Officer, Chief Privacy Officer, and other data, research, and legal staff about their privacy-related policies, practices, and controls implemented. We discussed CFPB’s data collections with OMB staff who review federal collections and compliance with statutory requirements. We also spoke with consumer and privacy advocacy groups about their views on CFPB’s data collections and with an academic expert about the extent to which personal information can be de-anonymized. We met with CFPB staff who analyze these data and physically observed CFPB data collections on-site to understand the scope of the data collections and how CFPB treated any personally identifying information. We reviewed the data fields for each of the 12 databases under review. For 3 of the 12 databases, CFPB stores the data at the institution level. For these databases (payday lending, overdraft fees, and deposit advance products), we reviewed a sample of seven of the institutions that provided data. We took steps to ensure the accuracy of key information used in this report, including interviewing agency officials, obtaining original source documents, and physically observing database contents on-site when necessary. We determined the data were reliable for the purposes used in this report, specifically giving readers an estimate of the number of records in each dataset. To describe the adequacy and effectiveness of CFPB’s information security protections, we reviewed CFPB’s security policies and procedures for the information system and related components that store consumer financial data. We compared CFPB’s current information security policies and procedures against the NIST risk-assessment framework, which emphasizes the selection, implementation, and monitoring of security controls, and the authorization of information systems. We reviewed and analyzed documentation and policies related to CFPB’s system security plan, risk assessments, security assessment reports, and remedial action plans and compared each against applicable NIST and CFPB-defined standards. We evaluated the extent to which CFPB has established and implemented policies and procedures to ensure its service providers provide adequate security protections over the consumer financial data which is collected and maintained on its behalf. This was done by reviewing and analyzing documentation of CFPB’s evaluation of a service provider and comparing it against CFBP requirements. We reviewed prior audit work conducted by the Inspector General for the Federal Reserve and CFPB in this area. We interviewed the Chief Information Officer, Chief Information Security Officer, and other CFPB staff about these policies, procedures, and reports. We reviewed logical access controls for the information system component in which CFPB stores and analyzes consumer financial data. We reviewed and observed access control lists, authentication and account management, server administration and configuration, firewall technology, and vulnerability and compliance scanning. We also interviewed the CFPB staff who manage these servers about these controls. We conducted this performance audit from August 2013 to August 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Use of consumer financial data in CFPB reports:
Loan-level origination information for all educational loans of nine major lenders for all loans that originated from 2005 through 2011; used for analysis of the number of loan originations and their associated interest rates.
To compare credit scores sold to creditors (lenders) and those sold to consumers by nationwide credit reporting agencies and determine whether differences between those scores disadvantage consumers: a random sample of 200,000 consumer credit reports from each of three nationwide credit reporting agencies; zip code and age information allowed comparison of scores by consumer demographics.
To understand the payday loan and deposit advance product market and present facts related to CFPB's analysis of these markets: a sample of account- and loan-level data from five to nine storefront payday lenders and some depository institutions providing deposit advance products.
To understand financial institutions' overdraft programs for consumer checking accounts: a sample of consumer checking account and transaction data from a sample of large depository institutions.
To review the consumer credit card market and the effect of the CARD Act on the cost and availability of credit and the adequacy of protections relating to credit card plans: credit card account data provided by financial institutions submitting data to CFPB and OCC. (The CARD Act originally directed the Federal Reserve to complete the biennial report, but the Dodd-Frank Act transferred this authority to CFPB.)
To provide preliminary results on the use of pre-dispute arbitration contract provisions in consumer financial products or services: more than 1,000 arbitration case records from the American Arbitration Association from January 2010 through 2012 and a sample of consumer credit records from CFPB's Consumer Credit Panel.
To describe patterns of consumer borrowing after the consumer obtains a payday loan and use of consumer loan products: the same sample of data described in the April 2013 report above on payday loans and deposit advances.
To review the use of medical debt collections in credit scoring models: a sample of consumer credit files from CFPB's Consumer Credit Panel.
To discuss empirical research about whether data on remittance transfers can enhance the credit scores of consumers: a random sample of consumer records, provided by a large remittance transfer provider and matched with credit records, as well as a control sample of randomly selected credit records.

In addition to the contact named above, Cody Goebel (Assistant Director), Katherine Bittinger Eikel (Analyst-in-Charge), Edward R. Alexander, Jr., Rachel Batkins, Don Brown, William Chatlos, West Coile, Nathan Gottfried, Fatima Jahan, Anjalique Lawrence, Bryan Maculloch, Marc Molino, Patricia Moye, David Plocher, Barbara Roesmann, Maria Stattel, Anne Stevens, Shaunyce Wallace, and Heneng Yu made significant contributions to this report.
Congress created CFPB in 2010 as an independent agency to regulate the provision of consumer financial products and services, such as mortgages and student loans. CFPB has begun collecting consumer financial data from banks, credit unions, payday lenders, and other institutions. GAO was mandated to examine CFPB's collection of consumer financial data. This report addresses (1) the scope, purposes, uses, and authorities of CFPB consumer financial data collections and (2) CFPB's compliance with laws and federal requirements, including government-wide privacy and information security requirements. GAO reviewed laws, regulations, and contracts pertaining to CFPB's data collections; reviewed privacy and information security policies; reviewed inspector general reports on CFPB's information security program; assessed how CFPB applied NIST's framework for managing the risks of storing data; examined access controls on the system maintaining consumer financial data; and interviewed CFPB and other regulatory officials, privacy experts, and representatives from randomly selected financial institutions. To carry out its statutory responsibilities, the Consumer Financial Protection Bureau (CFPB) has collected consumer financial data on credit card accounts, mortgage loans, and other products through one-time or ongoing collections. These large-scale data collections varied from about 11,000 consumer arbitration case records from a trade association to 173 million mortgage loans from a data aggregator. Of the 12 large-scale collections GAO reviewed, 3 included information that identified individual consumers, but CFPB staff indicated that those 3 were not subject to statutory restrictions on collecting such information. Other regulators, such as the Board of Governors of the Federal Reserve System (Federal Reserve) and the Office of the Comptroller of the Currency (OCC), collect similarly large amounts of data. CFPB has taken steps to protect and secure these data collections. For example, it created a data intake process that brings together staff with relevant expertise to consider the statutory, privacy, and information security implications of proposed consumer financial data collections. CFPB staff described a process for anonymizing large-scale data collections that directly identify individuals. In addition, CFPB had taken steps to implement an information security program that is consistent with Federal Information Security Management Act requirements, according to the Office of Inspector General for the Federal Reserve and CFPB. GAO found that CFPB had implemented logical access controls for the information system that maintains the consumer financial data collections and was appropriately scanning for problems or vulnerabilities. CFPB also established a risk-management process for the information system that maintains consumer financial data consistent with guidelines developed by the National Institute of Standards and Technology (NIST). However, GAO determined that additional efforts are needed in several areas to reduce the risk of improper collection, use, or release of consumer financial data. Written procedures and documentation: CFPB lacks written procedures and comprehensive documentation for a number of processes, including data intake and information security risk assessments. The lack of written procedures could result in inconsistent application of the established practices.
For example, CFPB unnecessarily retained sensitive data in two collections GAO reviewed, but its staff said they plan to remove this information. GAO recommends CFPB establish or enhance written procedures for (1) data intake, including reviews of proposed data collections for compliance with applicable legal requirements and restrictions; (2) anonymizing data; (3) assessing and managing privacy risks; (4) monitoring and auditing privacy controls; and (5) documenting results of information security risk assessments consistently and comprehensively. Implementation of privacy and security steps: CFPB has not yet fully implemented a number of privacy control steps and information security practices, which could hamper the agency's ability to identify and monitor privacy risks and protect consumer financial data. GAO recommends CFPB take or complete action to (1) develop a comprehensive written privacy plan that brings together existing privacy policies and guidance; (2) obtain periodic independent reviews of its privacy practices; (3) develop and implement targeted privacy training for staff responsible for working with sensitive personal information; (4) update remedial action plans to include all identified weaknesses and realistic planned remediation dates that reflect priorities and resources; and (5) include an evaluation of compliance with contract provisions relating to information security in CFPB's review of the service provider that processes consumer financial data on its behalf. Paperwork Reduction Act compliance: Under the Paperwork Reduction Act (PRA), agencies generally must obtain Office of Management and Budget (OMB) approval when collecting data from 10 or more entities to minimize burden and maximize the practical utility of the information collected. CFPB and OCC collect, on an ongoing basis, credit card data from different institutions—representing about 87 percent of outstanding credit card balances—and agreed to share data. However, OMB staff said the agencies' collections and data-sharing agreement may warrant OMB review and approval. Additional consultation with OMB regarding these collections and the data-sharing agreement would help both agencies ensure they are fully complying with the law. Furthermore, OCC had not obtained OMB approval for its credit card and mortgage data collections, which each included more than nine entities. Without approval, OCC lacks reasonable assurance that its collections comply with PRA requirements intended to reduce burden. GAO recommends (1) CFPB consult further with OMB about its credit card collection and data-sharing agreement, and (2) OCC seek OMB approval for its credit card and mortgage data collections. Notes: CFPB has access to credit card data from additional credit card issuers through an information-sharing agreement with the Office of the Comptroller of the Currency, whose collection covers more than 500 million total accounts on a monthly basis; when combined, these data contain information about 87 percent of outstanding credit card balances by volume as of March 2014. CFPB removed information that directly identifies individuals from the files staff use to analyze these data. GAO makes 11 recommendations to enhance CFPB's privacy and information security and 1 recommendation to OCC to ensure its data collections comply with appropriate disclosure requirements. CFPB and OCC agreed with GAO's recommendations and noted steps they plan to take or have taken to address them.
To help provide a safe operating environment for airlines, the Code of Federal Regulations (C.F.R.) title 14, part 107 requires that U.S. airports control access to secured areas. Such controls are intended to ensure that only authorized persons have access to aircraft, the airfield, and certain airport facilities. Other security measures include requiring that airport and airline employees display identification badges and that airlines screen persons and carry-on baggage for weapons and explosives. In January 1989, the Federal Aviation Administration (FAA) made 14 C.F.R. part 107 more stringent by mandating that access controls to the secured areas of certain airports meet four broad requirements. Under the amendment—14 C.F.R. 107.14—access control systems must (1) ensure that only authorized persons gain access to secured areas, (2) immediately deny access to persons whose authorization is revoked, (3) differentiate between persons with unlimited access to the secured area and persons with only partial access, and (4) be capable of limiting access by time and date. According to FAA, these requirements are intended to prevent individuals, such as former airline employees, from using forged, stolen, or noncurrent identification or their familiarity with airport procedures to gain unauthorized access to secured areas. All U.S. airports where airlines provide scheduled passenger service using aircraft with more than 60 seats must meet the requirements of 14 C.F.R. 107.14. Beginning in August 1989, each of these airports had to develop an access control system plan for FAA field security officials to review and approve. Following approval, FAA gives airports up to 2-1/2 years to comply with the regulation, depending on the number of persons screened annually or as designated by FAA on the basis of its security assessment. FAA expects airports to maintain and modernize their systems to keep them in regulatory compliance. As of August 1994, 258 airports were subject to FAA's access control requirements. Appendix I lists these airports. Access control systems are eligible for Airport Improvement Program (AIP) funds. FAA administers the AIP and provides funds for airport planning and development projects, including those enhancing capacity, safety, and security. FAA's AIP Handbook (Order 5100.38A) provides policies, procedures, and guidance for making project funding decisions. According to the handbook's section on safety, security, and support equipment (section 7), only those system components and facilities necessary to meet the requirements of 14 C.F.R. 107.14 are eligible for AIP funds. The airports themselves must fund any additional equipment or software capability that exceeds these requirements. FAA airport programming officials approve AIP funding requests. Airports have installed various systems—mostly computer-controlled—to meet FAA's four access control requirements. With FAA's approval, airports have taken the following approaches: Airports have placed the equipment for their access control systems in different locations. For example, some airports screen persons at checkpoints, while other airports have installed controls on doors beyond such checkpoints. Also, some airports have installed controls on both sides of doors leading into and out of secured areas. Airports have installed different types of equipment. For example, to secure doors and gates, several airports use magnetic stripe card readers while others use proximity card readers. One airport installed a reader that scans an individual's hand to determine the person's identity.
Also, we visited one airport that has an “electronic fence” to segregate the commercial and general aviation operations areas; another has a guard gate and magnetic stripe card reader to separate passenger and cargo operations areas. Additionally, some airports have mounted closed-circuit television cameras at doors and gates, while other airports have chosen not to install such technology. According to FAA’s data, most of the 258 regulated airports have now completed installing their systems, but they will need to modernize these systems in the future. Modernization is necessary when equipment wears out, additional equipment is needed, or equipment or software no longer has the capacity to meet security-related demands. For example, in September 1994, FAA provided one airport that had an approved system with over $3 million in AIP funding to purchase closed-circuit television cameras, help construct a communications center, and make other system modifications to meet additional security needs. The costs for access control systems are over three times greater than FAA expected. FAA initially estimated that the costs to install, operate, maintain, and modernize systems at all regulated airports would total $211 million from 1989 through 1998. However, updated data provided by FAA show that actual and projected costs for the same period totaled about $654 million. This amount includes $327 million in AIP funds, or 50 percent of total costs over the 10-year period. As of August 1994, 177 (69 percent) of the 258 regulated airports had received AIP funding to help pay for their access control systems. Furthermore, on the basis of the updated information, FAA projects that costs for systems in 1999 through 2003 will total an additional $219 million, half of which would be federally funded. Appendix II shows actual and projected access control costs in 1989 through 2003, including AIP funding. According to FAA officials, FAA’s initial cost projection was low primarily because more access points were secured and more sophisticated and expensive equipment was installed than the agency’s analysis considered. For example, FAA’s analysis assumed that the largest airports would secure 128 access points on average. However, we found that these airports had initially secured about 390 points on average. Appendix III compares FAA’s initial cost figures with the agency’s updated actual and projected costs of access control systems. Over the next several years, many access control systems will need to be modernized. FAA can help ensure that modernization is implemented in a cost-effective manner by providing detailed guidance and facilitating the development of standards explaining how to meet the requirements of 14 C.F.R. 107.14. Without detailed guidance, many airports initially spent funds to secure access points that FAA later determined did not need to be secured to meet the agency’s requirements. Also, without standards to guide the design of systems, some airports purchased systems that did not meet FAA’s requirements. Additionally, without guidance and standards to serve as criteria, it was difficult for FAA to ensure that AIP funds were used only for the system components needed to meet the agency’s access control requirements as directed by its AIP funding policy. FAA and the industry have several initiatives under way that could address these deficiencies and help ensure that systems are cost-effective.
FAA has not developed detailed guidance and standards to explain how systems could meet its four access control requirements in a cost-effective manner. Detailed guidance could help airports determine where equipment should be located. Standards could explain what functions equipment and software should perform and how quickly and reliably these functions should be done. For example, one of FAA’s four access control requirements is that systems grant secured-area access only to authorized persons. Detailed guidance for computer-controlled access control systems could include the following: Additional equipment beyond a card reader, such as lights that flash when the door is not secured, should be used only if the access point is in a low-traffic area. Closed-circuit television cameras should be used only at access points where an analysis shows that it is less expensive to have the camera than to have security personnel respond to an alarm. Standards for computer-controlled access control systems could include the period of time that a secured door or gate can remain open before security personnel are notified, the period of time that can elapse before a terminated employee’s access code is invalidated, the percentage of time that the system is expected to be operable, and the frequency at which the system can misread a card. Although developing guidance and standards for access control systems is a complex undertaking, FAA has provided airports and airlines with guidance and standards explaining how to meet other agency requirements that are similarly complex. For example, FAA has planning and design guidance explaining how terminals can be configured to accommodate the expected flow of passengers. The guidance recognizes that each airport has its own combination of individual characteristics that must be considered. FAA’s standards for equipment include those to design, construct, and test lift devices for mobility-impaired airline passengers and vehicles for aircraft rescue and fire fighting. Such standards do not specify what equipment airports should use, but rather how a vendor’s equipment should perform to meet FAA’s requirements. For software, FAA has developed standards for the software used in the Traffic Alert and Collision Avoidance System that it requires on most commercial passenger aircraft. FAA requires that airports use its guidance and standards in order to receive AIP funds. In some cases, FAA certifies that equipment and software from certain manufacturers meet its standards, as it has done for the equipment used to screen persons and the Traffic Alert and Collision Avoidance System. However, similar standards and certifications do not exist for access control systems. When FAA issued 14 C.F.R. 107.14 in January 1989, the agency did not conduct tests that could have provided the necessary knowledge to establish detailed guidance and standards for computer-controlled systems. Although airports and airlines suggested that FAA conduct tests at selected airports, the agency determined that nationwide implementation of the new requirements should proceed immediately. According to FAA officials, the Office of the Secretary of Transportation attached a very high priority to implementing improved airport access controls. As a result, FAA decided not to delay implementing the new access control requirements by testing and evaluating systems. 
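To make the idea of such standards concrete, the sketch below expresses a few of the parameters mentioned above—door-open time before an alarm, time to invalidate a terminated employee’s access code, system availability, and card misread rate—as a simple compliance check. All threshold values and measured figures are hypothetical, chosen only for illustration; they are not FAA requirements or actual airport data.

```python
from dataclasses import dataclass

@dataclass
class AccessControlStandard:
    """Hypothetical performance thresholds of the kind a standard could specify."""
    max_door_open_seconds: float   # time a secured door may stay open before security is alerted
    max_revocation_hours: float    # time allowed to invalidate a terminated employee's access code
    min_availability_pct: float    # percentage of time the system must be operable
    max_misread_rate: float        # allowed fraction of card reads that are misread

@dataclass
class SystemPerformance:
    """Measured (here, invented) figures for one airport's installed system."""
    door_open_seconds: float
    revocation_hours: float
    availability_pct: float
    misread_rate: float

def check_compliance(standard: AccessControlStandard, system: SystemPerformance) -> dict:
    """Return a pass/fail result for each criterion."""
    return {
        "door_open_time": system.door_open_seconds <= standard.max_door_open_seconds,
        "revocation_time": system.revocation_hours <= standard.max_revocation_hours,
        "availability": system.availability_pct >= standard.min_availability_pct,
        "misread_rate": system.misread_rate <= standard.max_misread_rate,
    }

if __name__ == "__main__":
    standard = AccessControlStandard(30.0, 1.0, 99.5, 0.001)  # illustrative thresholds only
    measured = SystemPerformance(25.0, 0.5, 99.7, 0.002)      # illustrative measurements only
    for criterion, passed in check_compliance(standard, measured).items():
        print(f"{criterion}: {'meets standard' if passed else 'does not meet standard'}")
```

In practice, the threshold values themselves would have to come from the kind of testing and evaluation that FAA chose not to perform when the rule was issued.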
According to security experts and airport and airline representatives, detailed guidance and standards would help airports know which systems satisfy FAA’s access control requirements in a cost-effective manner. Without detailed guidance and standards, it is difficult to determine if the many different systems installed at a wide range of costs are cost-effective. A November 1993 survey by the Airports Council International-North America of 63 airports (24 percent of all regulated airports) found that virtually no two have systems using the same equipment and software. Also, a November 1993 survey by the Airport Consultants Council of 14 airports found that the installation cost per secured access control point ranged from $6,250 to almost $55,000; the average cost was over $30,000. Without detailed guidance, many airports installed access controls that FAA had approved but later determined were not needed to meet its requirements. In April 1992, citing concerns about escalating costs, FAA clarified how airports could configure systems. FAA allowed airports that had installed systems to reduce the number of controlled access points if the reduction did not compromise security. According to FAA data, over 120 airports have reduced their number of controlled access points. For example, one airport reduced its total number of controlled access points by 26 percent (106 points) while still meeting FAA’s requirements. Another airport now meets FAA’s requirements with screening checkpoints at concourse entrances, although its initial system included both the checkpoints and card readers installed on both sides of 114 doors located beyond the checkpoints. FAA’s Director of Civil Aviation Security Policy and Planning acknowledges that the agency must take a more proactive approach to ensure that airports meet access control requirements in a cost-effective manner by reducing the number of controlled access points where feasible without decreasing security. Similarly, without standards on which to base system design, airports have incurred higher costs for systems that are based on proprietary software and a “closed architecture.” Many airports contracted with firms to install, maintain, and modify their systems using proprietary software and a closed architecture. In such cases, only the vendor providing the system is familiar enough with the system to effectively maintain or make changes to it. According to security experts, the use of proprietary software and a closed architecture can increase a system’s life-cycle costs by as much as 100 percent, primarily because of higher maintenance and modification costs. These experts told us that appropriate standards could have provided for an access control system design based on an open architecture. An open architecture would have allowed different vendors to compete for system maintenance, thus decreasing costs. Also, according to security experts, standards would have reduced total system costs by allowing for economies of scale and easier incorporation of new technologies. Furthermore, without standards on which to base system design, some airports purchased systems that did not meet FAA’s requirements. When FAA issued 14 C.F.R. 107.14, airports looked to firms that had developed and installed access control systems at locations such as military facilities, prisons, hospitals, office buildings, and homes.
According to security experts, in many cases it was difficult to transfer the security technology and operational knowledge used for such systems to the airport environment. The November 1993 survey by the Airport Consultants Council found that 21 major airports incurred costs to replace or significantly modify systems that did not operate adequately to meet FAA’s requirements. For example, one such airport had to replace its inadequate system, including card readers, at a cost of over $1.5 million. According to security experts, well-defined standards could have guided vendors in developing systems and provided airports with greater assurance that the systems would meet FAA’s access control requirements. Also, standards could have provided a basis for FAA to certify a vendor’s system. Finally, detailed guidance and standards could have provided criteria for FAA to use in evaluating airports’ AIP funding requests for access control systems. Generally, FAA airport programming officials worked with FAA security officials to determine if AIP funding would be used only for the system components needed to meet FAA’s requirements as directed by the agency’s AIP Handbook. However, they both lacked well-defined criteria against which proposed access control systems could be compared and evaluated. This problem continues as airports request AIP funds to help modernize their systems. For example, one airport with an approved system requested $1.2 million in AIP funds to secure additional doors. An FAA regional Special Agent for security told us that the lack of criteria has caused her to be unsure how to determine if this funding request should be approved. In January 1994, FAA requested that the public identify up to three regulations that should be amended or eliminated to reduce undue regulatory burdens. Both airports and airlines identified 14 C.F.R. 107.14 as one of the most costly and burdensome regulations imposed on them and stated that FAA should reassess how to control access in a more cost-effective manner without decreasing security. FAA’s December 1994 response cites ongoing efforts to revise its security regulations and work with the industry to set standards for access control systems. FAA and the industry have three initiatives under way for considering changes to access control that could help ensure that systems are cost-effective. First, FAA is working with the industry to revise airport and airline regulations, including 14 C.F.R. 107.14. Specifically, FAA is reviewing its four access control requirements to determine how they help meet security needs as part of an overall security strategy. FAA plans to issue a Notice of Proposed Rulemaking on any revisions to its security regulations by mid-1995. Second, through the Aviation Security Advisory Committee, FAA is working with the industry to consider the feasibility of implementing a system that would allow transient employees, such as pilots and flight attendants, to use a single card to gain access at all major airports—a universal access system. Research on and testing of a universal access system is one method to help develop standards for access control technology. The Congress has directed that $2 million of FAA’s fiscal year 1994 appropriation be used for the initial costs to develop and implement a universal access system. FAA and the industry are now working to evaluate how such a system could best be implemented. Tests involving three major airlines and two high-security airports are scheduled to begin in March 1995. 
Third, FAA is facilitating an ongoing effort with the industry to develop standards for systems that would comply with the requirements of 14 C.F.R. 107.14 and meet the needs of all regulated airports. As of December 1994, this effort includes developing standards for how equipment and software should function to meet requirements. FAA and the industry also plan to (1) incorporate knowledge gained from testing the universal access system, (2) identify near-term approaches to make systems easier to maintain and equipment and software easier to modify, and (3) promote modernizing existing systems to the new standards. This effort is scheduled to be completed by October 1995. Airport and airline security is of paramount importance. To this end, FAA and the industry plan to spend millions of dollars to modernize access control systems as part of an overall security strategy. At this time, however, FAA cannot ensure that these modernization efforts will result in the best use of limited federal and industry funds. FAA and the industry have initiatives under way that provide a basis for helping to ensure that access control systems are cost-effective. Specifically, following 5 years of experience with installing and using systems, both FAA and the industry are in a good position to complete their current effort to review overall aviation security needs as they relate to access control requirements and to change the requirements if necessary. As a next step, FAA and the industry can complete their ongoing work to develop and implement standards explaining how equipment and software should function to meet access control requirements. In addition to ongoing initiatives, FAA can help ensure that systems are cost-effective by developing and implementing detailed guidelines explaining where system equipment should be placed. FAA officials can use the detailed guidance and standards as criteria to evaluate AIP funding requests and help ensure that these funds are used only for the system components needed to meet access control requirements. To help ensure that systems are cost-effective, we recommend that the Secretary of Transportation direct the Administrator, FAA, to develop and implement detailed guidance based on the agency’s access control requirements that explains where system equipment should be located. FAA should incorporate these guidelines and the standards being developed into its review process for Airport Improvement Program funding requests. We discussed our findings and recommendations with FAA’s Assistant Administrator for Civil Aviation Security; Director of Civil Aviation Security Policy and Planning; Director of Civil Aviation Security Operations; Manager, Programming Branch, Airports Financial Assistance Division; and other Department of Transportation officials. These officials provided us with clarifying information, and we revised the text as necessary. FAA officials were concerned that our statement that systems cost more than FAA initially had anticipated implies that the systems and the components used in them should have been less costly. We explained that our purpose is to present factual information on the different systems airports installed and that without detailed guidance and standards, it is difficult to determine if systems should have been less costly. FAA officials also stated their concern that achieving cost-effective systems means using the least expensive equipment. 
We stated that this is not our position and that systems may be cost-effective using equipment that is more expensive in the short term but lasts longer and performs better, resulting in less cost over time. FAA officials also expressed concern that using standards to assist in making AIP funding decisions would limit the agency’s ability to accommodate security needs at individual airports. In our view, the standards would provide a baseline from which to begin evaluating funding requests and would not prohibit FAA from taking into account the access control needs of individual airports. Furthermore, FAA and the industry plan to develop standards that will accommodate the needs of all airports subject to access control requirements. Therefore, we believe that standards could allow for airport-by-airport decisions while still providing a tool to help ensure that systems are cost-effective. Finally, FAA officials noted that the appropriate use of access control systems by airport and airline employees is a critical factor in ensuring that such systems are effective. We concur with this position. We performed our review between October 1993 and January 1995 in accordance with generally accepted government auditing standards. All dollar amounts in this report have been adjusted to constant 1993 dollars. Additional details on our scope and methodology are contained in appendix IV. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after the date of this letter. At that time, we will send copies of this report to appropriate congressional committees; the Secretary of Transportation; the Administrator, FAA; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others on request. This report was prepared under the direction of Allen Li, Associate Director, who may be reached at (202) 512-3600. Other major contributors are listed in appendix V. To address our objectives, we performed work at FAA headquarters in Washington, D.C. We also met with officials at FAA’s Central Region in Kansas City, Missouri; its Northwest Mountain Region in Seattle, Washington; Southern Region in Atlanta, Georgia; and Western-Pacific Region in Los Angeles and San Francisco, California. We visited 17 airports of varying size throughout the country. We interviewed executives and former executives of aviation industry associations, including those representing the interests of airports, airlines, and pilots. We attended a major conference in Nashville, Tennessee, at which we communicated our understanding of access control issues and sought the knowledge of airport managers. We attended meetings of the Aviation Security Advisory Committee; the Committee’s Universal Access System subgroup; and RTCA, Incorporated Special Committee 183. We conferred privately with these groups’ members, which included senior FAA officials, aviation industry representatives, and system experts. At our request, FAA surveyed all 258 regulated airports to gather detailed data on the costs that airports and airlines have incurred to date and on costs that they anticipate incurring through the year 2003 for access control systems. We worked closely with FAA during all phases of its survey to understand the validity of the information. Finally, we reviewed the agency’s regulations, policies, and procedures governing access control systems. Randall B. Williamson, Assistant Director Lisa C. Dobson Dana E. 
Greenberg
Pursuant to a congressional request, GAO provided information on the Federal Aviation Administration's (FAA) access control systems, focusing on the: (1) cost of FAA access control systems; and (2) actions FAA could take to ensure that access control systems are cost-effective in the future. GAO found that: (1) FAA greatly underestimated the costs of airport access control systems, largely because airports secured more access points and installed more sophisticated and expensive equipment than FAA's analysis assumed; (2) in many airports, FAA approved the installation of equipment in areas that did not need to be secured; (3) 21 major airports had to replace or significantly modify access control systems that did not meet FAA requirements; (4) FAA officials have been unable to ensure that Airport Improvement Program funds have been used only for those system components necessary to meet FAA access control requirements; and (5) FAA could help ensure that access control systems are cost-effective by providing detailed guidance on where system equipment should be located and standards on how systems should function to meet access control requirements.
One of DOD’s guiding principles for military compensation is that servicemembers, in both the reserve and active components, be treated fairly. Military compensation for reservists is affected by the type of military duty performed. In peacetime—when a reservist is training or performing military duty not related to a contingency operation—certain thresholds are imposed at particular points in service before a reservist is eligible to receive the same compensation as a member of the active component. For example, a reservist is not entitled to a housing allowance when on inactive duty training (weekend drills). If a reservist is on active duty orders that specify a period of 140 days or more, then he or she becomes entitled to the full basic housing allowance. For contingency operations, these thresholds do not apply. Thus, reservists activated for Operation Iraqi Freedom and other contingencies are eligible to receive the same compensation as active component personnel. Basic military compensation, in constant dollars, remained fairly steady during the 1990s but has increased in recent years. As a result, reservists—enlisted personnel and officers—activated today are earning more in the military than they did just a few years ago, as shown in figure 1. For example, an enlisted member in pay grade E-4 who is married with no other dependents (family size 2) earned $3,156 per month in basic military compensation in fiscal year 2003, compared with $2,656 per month in fiscal year 1999, or a 19 percent increase. These figures are calculated in constant 2003 dollars to account for the effects of inflation. In addition to increases in basic military compensation, other pay policies and protections may help to mitigate reservists’ financial hardship during deployment. For example: By statute, debt interest rates are capped at 6 percent annually for debts incurred prior to activation. The Servicemembers Civil Relief Act, passed in December 2003, enhanced certain other protections. For example, the act prohibits a landlord, except by court order, from evicting a servicemember or the dependents of a servicemember, during a period of military service of the servicemember, from a residence for which the monthly rent does not exceed $2,400. The act increased the monthly rental limit from $1,200 and required the rental limit to be adjusted annually based on changes to a national housing consumer price index. Some or all of the income that servicemembers earn while serving in combat zones is tax-free. For certain contingencies, including Operation Iraqi Freedom, DOD authorizes reservists to receive both a housing allowance and per diem for their entire period of activation, up to 2 years. Emergency loans are available through the Small Business Administration to help small businesses meet necessary operating expenses and debt payments. An issue of concern that is closely tied with military compensation is income loss experienced by many reservists activated for a military operation. In a recent report, we evaluated information on income change. We found that DOD lacked sufficient information on the magnitude, the causes, and the effects of income change to determine the need for compensation programs targeting reservists who meet three criteria: (1) fill critical wartime specialties, (2) experience high degrees of income loss when on extended periods of active duty, and (3) demonstrate that income loss is a significant factor in their retention decisions. 
Such data are critical for assessing the full nature and scope of income change problems and for developing cost-effective solutions. DOD data on income change have been derived from self-reported survey data collected from reservists and their spouses. A 2000 DOD survey of reservists showed that of those who served in military operations from 1991 to 2000, an estimated 59 percent of drilling unit members had no change or gain in family income when they were mobilized or deployed for a military operation, and about 41 percent lost income. This survey was conducted before the mobilizations occurring after September 11, 2001. A 2002 DOD survey of spouses of activated reservists showed that an estimated 70 percent of families experienced a gain or no change in monthly income and 30 percent experienced a decrease in monthly income. The survey data are questionable primarily because it is unclear what survey respondents considered as income loss or gain in determining their financial status. We recommended that DOD take steps to obtain more complete information in order to take a targeted approach to addressing income change problems. DOD concurred with this recommendation. In May and September of 2003, DOD implemented two web-based surveys of reservists to collect data on mobilization issues, such as income change. DOD has tabulated the survey results and expects to issue a report with its analysis of the results by July 2004. These surveys should provide useful insight into this issue. Benefits are another important component of military compensation for reservists and help to alleviate some of the hardships of military life. DOD offers a wide range of benefits, including such core benefits as health care, paid time off, life insurance, and retirement. Notable improvements have been made to the health care benefits for reservists and their families. For example, under authorities granted to DOD in the National Defense Authorization Acts for fiscal years 2000 and 2001, DOD instituted several health care demonstration programs to provide financial assistance to reservists and family members. One of these, the TRICARE Reserve Component Family Member Demonstration Project, was established for family members of reservists mobilized for Operations Noble Eagle and Enduring Freedom to reduce TRICARE costs and to help dependents of reservists maintain relationships with their current health care providers. The demonstration project eliminates the TRICARE deductible and the requirement that dependents obtain statements saying that inpatient care is not available at a military treatment facility before they can obtain nonemergency treatment from a civilian hospital. Legislation passed in December 2002 made family members of reservists activated for more than 30 days eligible for TRICARE Prime if they reside more than 50 miles, or an hour’s driving time, from a military treatment facility. Last year, the Congress passed legislation for a 1-year program to extend TRICARE to reservists who are unemployed or whose employer does not offer health care benefits. As we have previously reported, given the federal government’s growing deficits, it is critical that the Congress give adequate consideration to the longer-term costs and implications of legislative proposals to further enhance military pay and benefits before they are enacted into law. For example, proposals to enhance reserve retirement should be considered in this context. We have ongoing work looking at proposals to change the reserve retirement system.
The key questions we are addressing include: What are the objectives of the reserve retirement system? Is DOD meeting its reserve retirement objectives? What changes to the current reserve retirement system that DOD and others have proposed could help DOD better meet its objectives? What factors should DOD consider before making changes to its reserve retirement system? We anticipate issuing a report addressing these questions in September 2004. While we have not specifically reviewed the use of reenlistment bonuses for reservists, our work has shown that DOD could improve the management and oversight of the SRB program with more methodologically rigorous evaluations. The SRB program is intended to help the services retain enlisted personnel in critical occupational specialties, such as linguists and information technology specialists. Concerned about missing their overall retention goals in the late 1990s, all the services expanded their use of SRBs to help retain more active duty enlisted personnel. There were increases in the number of specialties that the services made eligible for the bonuses and in the number of bonus recipients. The Air Force, for example, awarded bonuses to 158 specialties (80 percent of total specialties) in fiscal year 2001, up from 68 specialties (35 percent of total specialties) in fiscal year 1997. During this time period, the number of active duty Air Force reenlistees receiving bonuses increased from 3,612 (8 percent of total reenlistees) to 17,336 (42 percent of total reenlistees). As a result of the services’ expanded use of SRBs for active duty personnel, the cost of the program more than doubled—from $308 million in fiscal year 1997 to $791 million in fiscal year 2002. The SRB budget was expected to rise to over $800 million in fiscal year 2005. About 44 percent of the SRB budget growth over the 1997 to 2005 period is attributable to increases in the Air Force SRB budget. Despite increased use of the SRB program, DOD has cited continued retention problems in specialized occupations such as air traffic controller, linguist, and information technology specialist. In November 2003, we reviewed a congressionally directed DOD report to the Congress on the program and found that DOD had not thoroughly addressed four of the five concerns raised by the Congress. As a result, the Congress did not have sufficient information to determine if the program was being managed effectively and efficiently. More specifically, DOD did not directly address the SRB program’s effectiveness or efficiency in correcting shortfalls in critical occupations. DOD had not issued replacement program guidance for ensuring that the program targets only critical specialties that impact readiness. DOD did not address an important change—the potential elimination of the requirement for conducting annual reviews. We were told that the new guidance will require periodic reviews, but neither the frequency nor the details of how these reviews would be conducted was explained. DOD did not describe the steps it would take to match program execution with appropriated funding. Our analysis showed that in fiscal years 1999- 2002, the services spent a combined total of $259 million more than the Congress appropriated for the SRB program. DOD provided only a limited assessment of how each service administers its SRB program. 
DOD identified the most salient advantages and disadvantages that could result from implementing a lump-sum payment option for paying retention bonuses, and we generally concurred with DOD’s observations. On the basis of our work, we recommended that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to (1) retain the requirement for an annual review of the SRB program and (2) develop a consistent set of methodologically sound procedures and metrics for reviewing the effectiveness and efficiency of all aspects of each service’s SRB program administration. DOD concurred with the recommendations but has not yet taken action to address them. Mail can be a morale booster for troops fighting overseas and for their families at home. During Operation Iraqi Freedom, problems with prompt and reliable mail delivery surfaced early in the conflict and continued throughout. More than 65 million pounds of letters and parcels were delivered to troops serving in theater during 2003. Between February and November 2003, the Congress and the White House forwarded more than 300 inquiries about mail delivery problems to military postal officials. We are reviewing mail delivery to troops stationed overseas and plan to issue our report next month. In the report, we will assess (1) the timeliness of mail delivery to troops stationed in the Gulf Region, (2) how mail delivery issues and problems experienced during Operation Iraqi Freedom compare to those during Operations Desert Shield/Storm, and (3) efforts to identify actions to resolve problems for future contingencies. The timeliness of mail delivery to troops serving in Operation Iraqi Freedom cannot be accurately determined because DOD does not have a reliable, accurate system in place to measure timeliness. Transit time data reported by the Transit Time Information Standard System for Military Mail show that average transit times for letters and parcels into the theater consistently fell within the 11- to 14-day range—well within the current wartime standard of 12 to 18 days. However, we determined that the method used to calculate these averages masks the actual times by using weighted averages that result in a significant understating of transit times. A second source of data—test letters that were sent to individual servicemembers at military post offices by the Military Postal Service Agency between February and September 2003—indicates that mail delivery, on average, met the wartime standard during all but 1 month. However, we found that a significant number of test letters were never returned and that test letters do not accurately measure transit time to the individual servicemember because they are sent only to individuals located at military post offices. It could take several more days for mail to get to forward-deployed troops. Even though the data show otherwise, military postal officials acknowledge that mail delivery to troops serving in Operation Iraqi Freedom was not timely. Despite differences in operational theaters and an effort by postal planners to incorporate Operations Desert Shield/Storm experiences into the planning for Operation Iraqi Freedom, many of the same problems were encountered.
These problems include (1) difficulty in conducting joint-service mail operations; (2) postal personnel inadequately trained and initially scarce in number due to late deployments; and (3) inadequate postal facilities, material handling equipment, and transportation assets to handle the initial mail surge. U.S. Central Command—the combatant command for Operation Iraqi Freedom—created an operations plan for joint mail delivery, but some of the planning assumptions were flawed and the plan was not fully implemented. This plan included certain assumptions that were key to its success, but some assumptions produced unforeseen negative consequences and others were unrealistic or were never implemented. For example, the elimination of mail addressed to “Any Service Member” increased the number of parcels because senders found ways around the restriction. In addition, plans to restrict the size and weight of letters and parcels until adequate postal facilities had been established were never enacted, and the volume of mail was grossly underestimated. The plan also directed that a Joint Postal Center composed of postal officials from all services manage and coordinate joint postal operations in theater. However, this effort was never fully implemented, and joint mail delivery suffered as a result. The Military Postal Service Agency did implement one strategy that proved to be successful as a result of lessons learned from Operations Desert Shield/Storm. Dedicated contractor airlift of mail into the contingency area was employed, avoiding the necessity of competing for military air cargo capacity, which greatly improved the regularity of mail service to the theater. No single entity has been officially tasked to resolve the long-standing postal problems seen again during Operation Iraqi Freedom. Military postal officials have begun to identify solutions to some of these issues. However, despite early efforts made by the Military Postal Service Agency to consolidate problems and identify solutions, this agency does not have the authority to ensure that these problems are jointly addressed and resolved before the next military contingency. During our meetings with dozens of key military postal officials serving during Operation Iraqi Freedom, we collected memoranda, after-action reports, and their comments regarding the postal issues and problems that should be addressed to avoid a repetition of the same postal problems in future contingencies. These issues include: improving joint postal planning and ensuring joint execution of that plan; early deployment of postal troops; preparing updated tables of organization and equipment for postal units; improving peacetime training for postal units; and reviewing the command and control of postal units in a joint theater. The Military Postal Service Agency hosted a joint postal conference in October 2003 to discuss postal problems with dozens of key postal participants in Operation Iraqi Freedom and is consolidating these issues into a single document with the intent of developing plans to resolve them. In addition, the service components and the Military Postal Service Agency have taken some initial steps in employing alternative mail delivery and tracking systems.
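As discussed above, the transit-time figures DOD reported were volume-weighted averages, which can understate how long mail actually takes to reach some destinations, particularly forward-deployed troops. The sketch below uses invented volumes and transit times to show how a weighted average can fall within the 12- to 18-day wartime standard even when a meaningful share of mail misses it; none of the figures reflect actual postal data.

```python
# Hypothetical transit-time data: (destination, pieces of mail, average transit days).
# The figures are invented solely to illustrate how volume weighting can mask slow routes.
routes = [
    ("Main theater post office", 800_000, 12),
    ("Forward operating base A", 60_000, 22),
    ("Forward operating base B", 40_000, 25),
]

WARTIME_STANDARD_DAYS = 18  # upper end of the 12- to 18-day wartime standard

total_pieces = sum(pieces for _, pieces, _ in routes)
weighted_avg = sum(pieces * days for _, pieces, days in routes) / total_pieces
late_share = sum(pieces for _, pieces, days in routes if days > WARTIME_STANDARD_DAYS) / total_pieces

print(f"Volume-weighted average transit time: {weighted_avg:.1f} days")
print(f"Share of mail on routes slower than the standard: {late_share:.0%}")
# The weighted average (about 13 days) meets the standard even though roughly
# 11 percent of the mail travels on routes that exceed it.
```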
In our report, we plan to make several recommendations aimed at (1) establishing a system that will accurately track, calculate, and report postal transit times and (2) designating responsibility and providing sufficient authority within the Department to address and fix long-standing postal problems identified in this report. Mr. Chairman, this completes our prepared statement. We would be happy to respond to any questions you or other members of the Subcommittee may have at this time. For future questions about this statement, please contact Derek B. Stewart at (202) 512-5559 (e-mail address: [email protected]) or Brenda S. Farrell at (202) 512-3604 (e-mail address: [email protected]). Also making a significant contribution to this statement was Thomas W. Gosling. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since the terrorist attacks of September 11, 2001, the U.S. military has deployed high numbers of active duty and reserve troops to fight the global war on terrorism and for Operation Iraqi Freedom. Ensuring that U.S. military forces are adequately compensated and that the morale of deployed troops remains high have been priorities for the Congress and the Department of Defense (DOD). In response to congressional mandates, GAO has reviewed a number of issues concerning military personnel. For this hearing, GAO was asked to provide the results of its work on military compensation for National Guard and Reserve personnel and on the Selective Reenlistment Bonus (SRB) program, a tool DOD can use to enhance retention of military personnel in critical occupational specialties. In addition, GAO was asked to provide its preliminary views, based on ongoing work, concerning mail delivery to troops stationed in the Middle East. Reservists who are called to active duty to support a contingency operation are eligible to receive the same pay and benefits as members of the active component. Moreover, in constant dollars, basic military compensation has increased in recent years. For instance, an enlisted reservist in pay grade E-4 who is married with no other dependents and who is called to active duty experienced a 19 percent increase in basic military compensation between fiscal years 1999 and 2003. Despite these increases, income loss is a concern to many reservists, although DOD has lacked timely, sufficient information to assess the full scope and nature of this problem. Benefits for reserve personnel have also improved, notably in the area of health care. As GAO has previously reported, given the federal government's growing deficits, it is critical that the Congress give adequate consideration to the longer term costs and implications of legislative proposals to further enhance military pay and benefits before they are enacted into law. For example, proposals to enhance reserve retirement should be considered in this context. Although GAO has not specifically reviewed the use of SRBs to enhance reserve retention, GAO has noted shortcomings in DOD's management and oversight of the SRB program for active duty personnel. GAO's observations of this program may be helpful in making decisions for the use of SRBs for reservists. Concerned about missing their overall retention goals in the late 1990s, all the services expanded their use of SRBs to help retain more active duty enlisted personnel in a broader range of military specialties, even though the program was intended to help the services meet retention problems in selected critical specialties. As a result, the cost of the program more than doubled in just 5 years--from $308 million in fiscal year 1997 to $791 million in fiscal year 2002. However, the effectiveness and efficiency of SRBs in targeting bonuses to improve retention in selected critical occupations is unknown. DOD has not conducted a rigorous review of the SRB program. DOD concurred with GAO's recommendations to institute more effective controls to assess the progress of the SRB program, but has not taken action as yet. Mail can be a morale booster for troops fighting overseas and for their families at home. GAO has been reviewing mail delivery to deployed troops and expects to issue a report soon. GAO's preliminary findings show that mail delivery continues to be hampered by many of the same problems encountered during the first Gulf War. 
First, DOD does not have a reliable, accurate system in place to measure timeliness. Second, despite differences in operational theaters and efforts by DOD postal planners to incorporate lessons learned into planning for Operation Iraqi Freedom, postal operations faced many of the same problems, such as inadequate postal facilities, equipment, and transportation. Third, DOD has not officially tasked any entity to resolve the long-standing postal problems experienced during contingency operations. GAO plans to make several recommendations to improve DOD's mail delivery to deployed troops.
VA’s three operational administrations—the Veterans Health Administration (VHA), the Veterans Benefits Administration (VBA), and the National Cemetery Administration (NCA)—each manage their own separate regional network of facilities to provide program services to veterans and their families. These services include a diverse array of educational, disability, survivor, and health benefits. VHA operates the nation’s largest integrated health care system, which consists of, among other things, medical centers and community-based outpatient clinics that are decentralized across 21 Veterans Integrated Service Networks. In addition to its primary mission of providing health care to veterans, VHA specifically is responsible for managing the majority of the department’s underutilized and excess property and land-use agreements at the local level. Managing underutilized and vacant space can be costly, and decision making on how to use these properties may involve competing considerations such as budgetary constraints, legal limitations, and stakeholder input. VA uses various leasing authorities to avoid or decrease its costs by maximizing available resources through the joint use of facility space. VA may use these authorities to enter into agreements that include outleases, licenses, permits, sharing agreements, and EUL agreements with public or private entities to use land and buildings for revenue or in-kind consideration. See table 1 below for the various types of authorities available to VA, a brief description of each authority, and how proceeds may be used if revenue is generated. Depending on the terms specified and the type of agreement, agreements may generate revenue, in-kind considerations (such as cost savings or avoidance, or enhanced services), or both for the benefit of veterans, VA’s operations, or the community at large. When veterans benefit directly from these agreements, they may enjoy access to an expanded range of services that would otherwise not be available on VA medical center campuses because in some cases VA is not authorized to provide such services itself. VA benefits from land-use agreements by offsetting or avoiding altogether the costs associated with operating and maintaining underutilized or vacant properties. Finally, local communities may also benefit from agreements through the provision of services such as credit unions, daycare, or the placement of rooftop antennas to strengthen cell-phone reception. Details about land-use agreements, including estimated revenue and indications of in-kind considerations, are to be recorded in VA’s Capital Asset Inventory (CAI) system by administration, network, or medical facility personnel who are responsible for those agreements. According to VA, it uses this system to evaluate property management by its administrations, regional networks, and medical centers. The inventory data from this system are to form the basis for decision making used in VA’s strategic capital-investment planning processes. CAI data are also reported to external stakeholders, including Congress and GAO. To be entered into CAI, each land-use agreement must have its own revenue source and accounting codes. VA headquarters staff process requests and register the land-use agreements with these codes. Once assigned, the codes are to be entered into the CAI database.
VA medical centers in VA’s 21 service networks are then responsible for entering the land-use agreement information into CAI immediately after they are notified that the codes have been entered, as well as updating CAI at the time of execution of the land-use agreement. VA medical centers are also required to immediately update the CAI database for any subsequent changes in the land-use agreements. Each year, VA headquarters staff initiate a call for the VA medical centers to review existing data in CAI, including land-use agreements; update any needed changes to CAI; and certify that the data are complete and accurate at that point in time. Enhanced-use leases (EUL) are centrally managed at headquarters by the Office of Asset Enterprise Management (OAEM). OAEM is responsible for administering, managing, and monitoring the EUL program, with support from local facility staff from VA’s administrations. This monitoring includes tracking lease requirements and identifying benefits and expenses for EUL projects, once leases are executed. VA is also responsible for producing an annual consideration report to Congress for EULs that includes information on revenue, cost avoidance, cost savings, enhanced services, and expenses paid by VA. Unlike the central management of revenues and agreements associated with EULs, VA generally uses a decentralized approach in the monitoring of sharing agreements, outleases, licenses, and permits. The scope of projects can be diverse, ranging from space for medical research, day care, and rooftop telecommunications equipment, to 1-day special events for community causes. Regardless of the type of project, roles and responsibilities for overseeing land-use agreements may vary by medical facility. The monitoring of agreements, including ensuring the space is properly maintained or occupied, may involve offices responsible for asset management or contracting, for instance. According to VA officials, the medical centers are allowed considerable discretion in their management of these agreements. Based on our review of land-use agreement data for fiscal year 2012, VA does not maintain reliable data on the total number of land-use agreements, and VA did not accurately estimate the revenues those agreements generate. According to data provided to us from VA’s CAI system, VA reported that it had over 400 land-use agreements with over $24.8 million in estimated revenues for fiscal year 2012. However, in the course of our testing the reliability of the data, one of VA’s administrations—VHA—initiated steps to verify the accuracy and validity of the data it originally provided to us. During this verification process, VHA made several corrections to the data that raised questions about their accuracy, validity, and completeness. Examples of these corrections include the following: According to the land-use agreement data, VHA reported multiple entries for a single land-use agreement. Specifically, VHA had 37 separate land-use entries for the same agreement entered in CAI—one for each building listed in the agreement—that were, in fact, for only one agreement at VA’s facility in Perry Point, Maryland. VHA also noted in its revisions that there were 13 agreements that had been terminated prior to fiscal year 2012 that should have been removed from the system. At the three VA medical centers we reviewed, we also found examples of errors in the land-use agreement data.
Examples of these errors include the following: VHA did not include 17 land-use agreements for the medical centers in New York and North Chicago, collectively. VHA initially reported that it had 9 non-EUL land-use agreements that generated about $3.2 million in revenues at its North Chicago medical center in fiscal year 2012. In its revisions, VHA stated that its North Chicago medical center maintained 7 land-use agreements that generated no revenue, instead of 9 agreements that generated revenues. However, on the basis of our independent review of revenue receipts, we found that 5 agreements generated more than $240,000 in revenue in fiscal year 2012. For the medical center in West Los Angeles, VHA revised its estimated revenues from all land-use agreements in fiscal year 2012 from about $700,000 to over $810,000. However, our review of VA’s land-use agreements at this medical center indicated that the amount that should have been generated was approximately $1.5 million. Guidance in this area states that reliable data can be characterized as being accurate, valid, and complete. Reliable data means that data are reasonably complete and accurate, can be used for their intended purposes, and have not been subject to inappropriate alteration. Additionally, data in systems should also be consistent—a subset of accuracy—and valid. Consistency can be impaired when data are entered at multiple sites and there is an inconsistent interpretation of what data should be entered. Finally, data that are valid actually represent what is being measured. Thus, despite the corrections made by VHA, we cannot conclude that the revised number of land-use agreements held by VA or the amount of revenue these agreements generated in fiscal year 2012 is sufficiently reliable. VA policy requires that CAI be updated quarterly until the agreement ends. VA’s approach to maintaining the data in CAI relies heavily on staff in each local medical center entering data accurately and on time (quarterly). OAEM makes annual requests to medical centers to update the data in CAI; these requests also call for medical center staff to verify the data. VA officials stated that a number of deficiencies remained after an annual update of the data in CAI. According to VA officials, the errors may be a result of manual data entry or medical centers not adhering to the guidance for updating CAI on a quarterly basis. Our review found that VA does not currently have a mechanism to ensure that the data in CAI are updated quarterly as required and that the data are accurate, valid, and complete. Federal internal control standards state that relevant, reliable, and timely information is to be available for external reporting purposes and management decision making. These standards also state that management should put in place control mechanisms and activities to enable it to enforce its directives and achieve results, such as providing relevant, reliable, and timely information. VA officials recognize the importance of maintaining quality data. According to VA’s guidance on CAI, the maintenance of high-quality data is critical to the organization’s credibility and is an indication of VA’s commitment to responsible capital asset portfolio management. Additionally, VA contends that high-quality data are needed to be responsive to policymakers and others. Officials at VA headquarters reported that they undertake a few activities throughout the year to improve their data, such as the annual update.
For example, an official told us that staff at headquarters had recently deployed training sessions in 2014 that focused on updating data in CAI. According to a VA official, six sessions had been provided through June 2014. While these activities are positive steps, they do not provide the assurance needed that the data maintained in CAI are reliable. By implementing a mechanism that will allow it to assess whether medical centers have entered the appropriate land-use agreement data into CAI in a timely manner, and by working with the medical centers to correct the data as needed, VA would be better positioned to reliably account for land-use agreements and the associated revenues that they generate. At the three medical centers we visited, we found weaknesses in the billing and collection processes that impair VA’s ability to collect land-use agreement revenues from its sharing partners effectively and on time. Specifically, we found inadequate billing practices at all three medical centers we visited, as well as opportunities for improved collaboration at two of the three medical centers, and duties that were not properly segregated at one medical center. Because we did not perform a systematic review of VA’s internal controls outside of the three selected sites, our findings in this section cannot be generalized to other VA medical centers. At the three sites that we visited, we found that VA had billed partners in 20 of 34 revenue-generating land-use agreements for the correct amount; however, the partners in the remaining 14 agreements were not billed for the correct amount. Based on our analysis of the agreements, we found that VA underbilled by almost $300,000 of the approximately $5.3 million that was due under the agreements, a difference of about 5.6 percent. For most of these errors, we found that VA did not adjust the revenues it collected for inflation. According to the department’s guidance on sharing agreements, VA must incorporate an annual inflation adjustment to multiple-year agreements to ensure that its maintenance and operating costs—such as future utility costs—continue to be recouped, or exceeded. However, for some of these incorrectly billed agreements, the sharing partners paid the correct amount of rent as specified in the agreement even though the bill stated an incorrect amount. In addition, we found that the West Los Angeles medical center inappropriately coded the billing so that the proceeds of its sharing agreements, which totaled over $500,000, were sent to its facilities account. According to the West Los Angeles chief fiscal officer, these proceeds were mainly used to fund maintenance salaries. However, according to VA policy, proceeds from sharing agreements are required to be deposited in the medical care appropriations account that benefits veterans. According to the policy for sharing agreements, each agreement must include the amount of rent for the space, when the rent is expected to be paid, and the number of payments to be made over a specified period by the sharing partners. In the absence of a bill from VA, the sharing partner is still required to make payments as stipulated under the agreement. However, at all three sites that we visited, we found problems with the billing of rent for the land-use agreements: At the New York City location, VA officials were not aware that a sharing partner—an academic department affiliated with the local university—renewed its agreement to remain at the VA in 2008.
As a result and according to a VA New York fiscal official, VA did not bill the sharing partner for several years’ rent that totaled over $1 million. After it discovered this error, VA began to take collection action on the unpaid rent in 2012, but over $200,000 in delinquent rent remained outstanding as of April 2014. At the West Los Angeles location, officials did not send periodic invoices to sharing partners as required by policy or under its agreements. As a result, two of its sharing partners did not always submit timely payments. And in a third case, VA has not fully collected on the total amount of past due rent from a sharing partner that it did not bill as expected. Specifically, in August 2011, VA stopped billing a hospitality corporation that operated a laundry facility on the campus. Since that time, the sharing partner has not made any payments as required under the terms of its agreement. The partner vacated the space in December 2013, and owes hundreds of thousands of dollars to VA. A contracting officer in Long Beach, who is responsible for the management of the land-use agreements in West Los Angeles, stated during a February 2014 meeting with GAO that he advised the West Los Angeles location to evict the sharing partner for occupying VA space beyond its agreement term because they were “trespassing” and lacked authorization to remain in the space. The contracting official also stated that VA should bill the sharing partner for the rent due and, if necessary, seek guidance to initiate available collection actions. A West Los Angeles VA official acknowledged that eviction was one of the options that could be pursued; however, the medical center continued to allow the sharing partner to remain in the space so that the agreement could be terminated “amicably.” During our visit in December 2013, that same West Los Angeles official also stated that VA would continue to negotiate with the sharing partner on the final payment to be received; and those negotiations would take into account the value of certain inventory items and parts that the partner left in the space. This official later reported to GAO that, as of May 2014, VA would bill the sharing partner for the full amount of past due rent without offsetting the value of the property remaining in the space. We asked for a copy of the letter that would be sent to the sharing partner, but as of June 2014 VA had not provided it. The medical center at West Los Angeles also did not bill a federal government agency sharing space at the Sepulveda Ambulatory Care Center during fiscal year 2012. Instead, the medical center submitted the bill for about $480,000 to the federal agency on October 1, 2012, the day after the end of fiscal year 2012. As a result, the sharing partner did not make any monthly rental payments during fiscal year 2012. The sharing partner subsequently made the full rental payment in November 2012. At the North Chicago medical center, VA officials did not bill one of its sharing partners for about $3,000 for the month of August 2012. Officials were not aware that they had not billed for this agreement until we brought the matter to their attention in January 2014. According to a VA official, the North Chicago medical center submitted a bill to the sharing partner in June 2014. VA officials acknowledged that the department did not perform systematic reviews of the billings and collections practices at the three medical centers, which we discuss in more detail later. 
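To illustrate the inflation adjustment discussed above, the sketch below escalates a multiple-year agreement’s annual rent and compares it with the amounts actually billed. This is a simplified, hypothetical example in Python, not VA’s billing system: the base rent, the 2 percent escalation rate, and the billed amounts are assumed for illustration, and actual agreements specify their own adjustment terms.

# Simplified illustration of an annual inflation adjustment on a multiple-year agreement.
# The base rent, escalation rate, and billed amounts below are hypothetical.

def adjusted_rent(base_rent, annual_rate, years_elapsed):
    """Rent owed after applying the annual escalation for each full year since the agreement began."""
    return base_rent * (1 + annual_rate) ** years_elapsed

base_rent = 100_000.00   # first-year rent stated in the agreement (assumed)
rate = 0.02              # assumed 2 percent annual adjustment
billed = [100_000.00, 100_000.00, 100_000.00]  # amounts actually billed each year (assumed)

total_shortfall = 0.0
for year, amount_billed in enumerate(billed):
    owed = adjusted_rent(base_rent, rate, year)
    shortfall = owed - amount_billed
    total_shortfall += shortfall
    print(f"Year {year + 1}: owed {owed:,.2f}, billed {amount_billed:,.2f}, underbilled {shortfall:,.2f}")

print(f"Total underbilled over the term: {total_shortfall:,.2f}")

Running a comparison like this against each multiple-year agreement would surface the kind of inflation-related underbilling described above before the differences accumulate.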
Federal internal control standards state that management is to ensure that transactions are promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. This applies to the entire process or life cycle of a transaction or event from its initiation and authorization through its final classification in summary records. In addition, the standards call for agencies to design control activities to help ensure that all transactions are completely and accurately recorded. These standards and OMB guidance also state that management should put in place control mechanisms and activities to enable it to enforce its directives and achieve results. Because VA lacks a mechanism that ensures its transactions are promptly and accurately recorded, VA is not consistently collecting revenues that the sharing partners owe to VA at these three medical centers. At two of the three sites we visited, we found that VA could improve collaboration among key staff to enhance the collection of proceeds from its land-use agreements. Examples include the following: At the New York site, VA finance staff created spreadsheets to improve the collection of its revenues for more than 20 agreements. However, the fiscal office did not have all of the renewed contracts or amended agreements that could clearly show the rent due, because the contracting office failed to inform the fiscal office of the new agreements. According to a VA fiscal official at the New York office, repeated requests were made to the contracting office for these documents; however, the contracting office did not respond to these requests by the time of our visit in January 2014. According to the New York Harbor Healthcare System director and the fiscal officials at New York, collection activities suffered because the contracting office was not responsive. At the North Chicago medical center, the finance staff identified differences between what they billed the sharing partners and what they collected for some agreements. As a result, a North Chicago medical center finance staff official stated that the center’s staff had to undertake extra, time-consuming measures—including speaking to the sharing partners themselves—to resolve these differences. At that time, the finance staff discovered that VA was not billing for the inflation-related increase in rent. North Chicago did not have a mechanism to communicate the specific terms (such as inflation adjustments) and did not have access to the land-use agreements across offices, according to another North Chicago finance official. Such sharing of information would have helped expedite the explanation of these differences. Collaboration can be broadly defined as any joint activity that is intended to produce more public value than could be produced when organizations act alone. Best practices state that agencies can enhance and sustain collaborative efforts by engaging in several practices that are necessary for a collaborative working relationship. These practices include defining and articulating a common outcome; agreeing on roles and responsibilities; and establishing compatible policies, procedures, and other means to operate across agency boundaries. By taking additional steps to foster a collaborative environment, VHA could improve its billing and collection practices.
For example, rather than contacting sharing partners to confirm the accuracy of its billing, fiscal staff in the North Chicago VA could work with the office that holds the agreements, the contracting office, to confirm the accuracy of its billing and to correct errors. Based on a walkthrough of the billing and collections process we conducted during our field visits, and an interview with a West Los Angeles VA official, we found that West Los Angeles did not utilize proper segregation of duties. Specifically, the office responsible for monitoring agreements also issues the invoices, receives collections, and submits the collections to the agent cashier for deposit. This assignment of roles and responsibilities to one office is not typical of the sites we examined. At the other medical centers we visited, these same activities were separated among a few offices, as outlined in VA’s guidance on deposits. Federal internal control standards state that for an effectively designed control system, key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error or fraud. These controls should include separating the responsibilities for authorizing transactions, processing and recording them, reviewing them, and accepting any acquired assets. Without proper segregation of duties, the risk of errors, improper transactions, and fraud increases. According to West Los Angeles officials, the medical center is considering steps to correct the segregation of duties issue by assigning certain duties to the fiscal office. However, the West Los Angeles site did not provide any details on the steps it would take or the timeline it would follow to implement these actions. Federal internal control standards emphasize the need for federal agencies to establish plans to help ensure goals and objectives can be met. Because of the lack of appropriate segregation of duties at West Los Angeles, the revenue collection process is more vulnerable to potential fraud and abuse. VA headquarters officials informed us that program officials located at VA headquarters do not perform any systematic review to evaluate the medical centers’ processes related to billing and collections at the local level. VA officials further informed us that VHA headquarters also lacks critical data—the actual land-use agreements—that would allow it to routinely monitor billing and collection efforts for land-use agreements across the department. Federal internal control standards require that departments and agencies assess program quality and performance over time and work to address any identified deficiencies. Further, management must continually assess and evaluate these controls to assure that the activities being used are sufficient and effective. In response to our findings, one VA headquarters official told us that the agency is considering the merits of dispatching small teams of staff from program offices located at VA’s headquarters to assist the local offices with activities such as billing and collections. However, as of May 2014, VA had not implemented this proposed action or any other mechanism for monitoring the billing and collections activity at the three medical centers. Federal internal control standards also state that management should put in place control mechanisms and activities to enable it to enforce its directives and achieve results.
Until VA performs systematic reviews, VA will lack assurance that the three selected medical centers are taking all required actions to bill and collect revenues generated from land-use agreements, as expected. VA did not effectively monitor the status of the land-use agreements at the medical center level for two selected sites that we visited. As a result, we identified problems associated with many of the land-use agreements, including unenforced agreement terms, expired agreements with partners remaining in VA space, and organizations occupying VA space without a written agreement. Because we did not perform a systematic review of VA’s internal controls outside of the three selected sites, our findings in this section cannot be generalized to other VA medical centers. During our site visit to West Los Angeles, we noted several sharing agreements that lacked proper enforcement. These agreements included the following: Authorization and Reporting Terms for Parking Services Agreement Not Enforced. VA did not enforce two key modification terms of a West Los Angeles sharing agreement. One modification for this agreement allowed for the receipt of in-kind considerations—such as road repaving and the installation of speed bumps—in lieu of revenue, as originally agreed. This agreement modification stipulates that the sharing partner will provide services (such as paving) as determined necessary by the contracting official. However, the medical center’s current contracting officer—an official located in the Long Beach office—stated that he had not approved any services under the agreement since his appointment in June 2012. Another provision in the modification requires the sharing partner to provide an annual reconciliation to the contracting officer. According to a West Los Angeles VA official who was previously responsible for monitoring the agreement, this report reconciles the costs of the in-kind services provided to VA to the revenues generated through the agreement each year. This official could not provide us with either documentation or information regarding any services that were provided during fiscal year 2012, including the value of such services. According to the current contracting officer in Long Beach, neither the sharing partner nor officials at the West Los Angeles medical center have provided the reconciliation reports for 2012. We also requested the 2012 reconciliation report from VA West Los Angeles officials, but they could not provide us with a copy. Original Payment Terms with Nonprofit Organization Not Enforced. A West Los Angeles VA agreement with a nonprofit organization to provide space and services for homeless veterans included a rental provision that, if enforced, would have collected over $250,000 in revenue in 2012. However, according to a West Los Angeles VA official, no revenue was collected that fiscal year because the rental provision was waived. According to this same official, the waiver for the rental provision may have occurred in the late 2000s due to the nonprofit experiencing financial hardship. However, from our review of the VA solicitation for award, demonstrating financial viability was one of the criteria considered in evaluating this partner. Further, VA policy requires the monitoring of sharing agreements and does not have a provision that allows for the waiving of such revenues.
According to the contracting officer at the Long Beach VA office, VA has given this nonprofit organization an unfair advantage over other organizations that provide similar services by lowering its operating costs. Agreement Terms with Golf Course Manager Not Enforced. During our site visit to West Los Angeles, we observed the installation of an irrigation system to upgrade a nine-hole golf course (shown in fig. 1) located at the medical center. As part of this agreement, the partner managing the golf course is required to obtain prior approval from the VA contracting officer before making any improvements to VA’s property. The Long Beach contracting officer told us that he was unaware of the improvements to the golf course and had not authorized them, in contrast to what was stipulated in the agreement. Improper Subleasing of VA Space. VA guidance does not allow sharing partners to sublease the space obtained through sharing agreements. However, we found that a nonprofit organization—a botanic garden—subleased its space to two other organizations, including an exotic bird sanctuary and a food pantry. The Long Beach VA contracting officer told us that he was not aware of this sublease prior to our audit. We found expired agreements at two of the three VA medical centers we reviewed where the sharing partners were still occupying the space. At the West Los Angeles medical center, a university athletics department, a laundry-services company, and a soccer club occupied VA space after their agreements had expired. According to a West Los Angeles VA official, VA did not renegotiate an extension for these agreements because of an ongoing lawsuit. The university athletics department and soccer club continued to pay rent although they generally did not fully comply with the schedule of payment terms outlined in the expired agreement. However, as previously discussed, the laundry-services company had not made any payments to VA since 2011 but remained in the building until it vacated the space in December 2013. According to the current contracting officer, he advised West Los Angeles to remove the laundry-services company from the premises, but medical center officials did not act on this advice. West Los Angeles VA officials told us that they discussed sending month-to-month tenancy letters to sharing partners that authorized them to operate on the VA property in the absence of an agreement. However, according to the contracting official at Long Beach, the letters were not sent to the partners because the lawsuit prohibited such actions. At the New York medical center, seven agreements expired and were not renewed in a timely manner. Because of the lack of monitoring, one sharing partner—a local School of Medicine—with seven expired agreements remained on the property and occupied the premises without written authorization during fiscal year 2012. Our review found that VA’s policy on sharing agreements does not contain any specific guidance on how to manage agreements before they expire, including the renewal process. Federal internal control standards (GAO/AIMD-00-21.3.1) state that policies and procedures are needed to enforce management directives, such as the process for managing expiring agreements, and that they help ensure that actions are taken to address risks. Without such guidance, VA may find it difficult to adequately manage agreements scheduled to expire at any time or determine what specific actions should be taken when an agreement has expired.
Rooftop Antennas. During our site visit to New York, we observed more antennas on VA property than the New York medical center had recorded in CAI. After we brought this to their attention, New York VA officials researched the owners of these antennas and could not find written agreements or records of payments received for seven antennas. VA did not have written agreements associated with these antennas. According to New York VA officials, now that they are aware of the antennas, they will either establish agreements with the tenants or disconnect the antennas. Dog Park and Baseball Fields. The City of Los Angeles has utilized a 12-acre field—Barrington Park—on VA property for recreational use (e.g., dog park and baseball fields) without a written agreement. The city has posted signs about local ordinances at the site, which purport to show the space is under the city’s jurisdiction. VA is forgoing potential revenue for use of this facility by not having a written agreement in place. In the absence of a written agreement, it is also unclear what party should respond to any emergency situation that may occur at the park and fields. The lack of an agreement in this instance could potentially increase VA’s risk of liability. VA officials stated there could be a number of reasons that these spaces lacked agreements; for example, the agreements could have been disposed of or misplaced. VA officials acknowledged that agreements are not centrally managed or stored and that CAI does not include all terms of the agreements that are needed for monitoring activity. However, VA’s guidance calls for written sharing agreements with all non-VA partners. Further, federal internal control standards state that all transactions and other significant events need to be clearly documented and that the documentation should be readily available for examination and properly managed and maintained. We found that VA had not established mechanisms to monitor the various agreements at the West Los Angeles and New York medical centers. VA officials acknowledged that they had not performed systematic reviews of these agreements and had not established mechanisms to enable them to do so. Federal internal control standards also state that management should put in place control mechanisms and activities to enable it to enforce its directives and achieve results. Federal internal control standards require that departments and agencies assess program quality and performance over time and work to address any identified deficiencies. Further, management must continually assess and evaluate these controls to assure that the activities being used are effective. Without a mechanism for accessing land-use agreements to perform needed monitoring activities, VA lacks reasonable assurance that the partners are meeting the agreed-upon terms, agreements are renewed as appropriate, and agreements are documented in writing, as required. This is particularly important if sharing partners are using VA land for purposes that may increase risk to VA’s liability. Finally, with lapsed agreements, VA not only forgoes revenue, but it also misses opportunities to provide additional services to veterans in need of assistance and to enhance its operations. For the past decade, we have reported that the management of federal property is at risk for fraud, waste, and abuse. As one of the U.S. government’s largest property holders, VA exhibits many of the issues we identified across the federal government in its management of underutilized and vacant property.
VA’s system for managing its numerous land-use agreements, including its system for recording associated revenues and benefits, is in need of corrective action. Because we found that the VHA data maintained in CAI are unreliable, these data cannot be used to accurately and reliably manage the bulk of VA’s land-use agreements as intended. Developing a mechanism to assess the accuracy, validity, and completeness of land-use agreement data in CAI would better position VHA to reliably account for the current land-use agreements and the associated revenues that they generate. VA has opportunities to enhance the effectiveness of its land-use agreement processes at the three selected medical centers. As noted in our report, deficiencies in its monitoring of the billing and collection of revenues have impaired VA’s ability to collect in a timely manner all revenues due from its sharing partners and, at one of the medical centers, to properly record the revenues to its medical-care appropriations account. In addition, VA did not have mechanisms in place at two medical centers to ensure that different individuals charged with the responsibility of executing and managing agreements and collecting revenues worked together in a collaborative manner. Further, at two of the three selected sites, VA lacked adequate processes to readily access land-use agreements and monitor their execution; as a result, agreement terms were not properly documented in writing or enforced by VA, and renewals were not executed when agreements expired. The ineffective monitoring of land-use agreements at the VA medical centers is further exacerbated by the lack of any detailed guidance by VA on how to manage the expiration of land-use agreements. Finally, the lack of appropriately segregated duties at its West Los Angeles medical center is also problematic and needs to be immediately addressed; however, officials at that medical center have not developed a plan for doing so. This lapse of a key internal control increases the likelihood that revenues from land-use agreements may be vulnerable to potential fraud and abuse. Until VA effectively addresses these weaknesses, it will likely continue to miss opportunities to maximize revenues that can be used to offset VA operational costs—thereby placing a higher burden on taxpayers—or provide additional services to veterans in need of assistance.
In order to improve the quality of the data collected on specific land-use agreements (i.e., sharing, outleases, licenses, and permits), enhance the monitoring of its revenue process and monitoring of agreements, and improve the accountability of the VA in this area, we recommend that the Secretary of Veterans Affairs take the following six actions: develop a mechanism to independently verify the accuracy, validity, and completeness of VHA data for land-use agreements in CAI; develop mechanisms to: monitor the billing and collection of revenues for land-use agreements to help ensure that transactions are promptly and accurately recorded at the three medical centers; foster collaboration between key offices to improve billing and collections practices at the New York and North Chicago medical centers; and access and monitor the status of land-use agreements to help ensure that agreement terms are enforced, agreements are renewed as appropriate, and all agreements are documented in writing as required at the New York and West Los Angeles medical centers; develop a plan for the West Los Angeles medical center that identifies the steps to be taken, timelines, and responsibilities in implementing segregation of duties over the billing and collections process; and develop guidance on managing expiring agreements at the three medical centers. We provided the Department of Veterans Affairs with a draft of this report for its review and comment. In its written comments, reprinted in appendix II, the Department concurred with our recommendations and provided technical comments that we incorporated, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of Veterans Affairs. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. On August 29, 2013, a federal judge found that certain sharing agreements in the West Los Angeles medical center were unauthorized under the land-use authority under which they were executed. This authority states that the Secretary of Veterans Affairs may enter into agreements to share health care resources with health care providers in support of the Department of Veterans Affairs’ (VA) mission. As a result, the federal judge voided several sharing agreements with entities other than health care providers; thus, the district court case called into question whether VA can enter into sharing agreements with entities other than health care providers. The case is under appeal at the United States Court of Appeals for the Ninth Circuit. If the opinion stands, the ruling may affect other sharing agreements that VA holds with nonmedical providers nationwide. Our review of VA’s land-use agreements at West Los Angeles, North Chicago, and New York found that just over 40 percent of VA’s sharing agreements were with nonmedical providers, such as telecommunication companies that lease space for rooftop antennas; these agreements collectively generate hundreds of thousands of dollars in revenue each year.
In addition to the contact named above, Matthew Valenta (Assistant Director), Erika Axelson, Carla Craddock, Debra Draper, Olivia Lopez, Elke Kolodinski, Edward Laughlin, Barbara Lewis, Paul Kinney, Jeffrey McDermott, Maria McMullen, Linda Miller, Lorelei St. James, April VanCleef, Shana Wallace, and William Woods made key contributions to this report.
VA manages one of the nation's largest federal property portfolios. To manage these properties, VA uses land-use authorities that allow VA to enter into various types of agreements for the use of its property in exchange for revenues or in-kind considerations. GAO was asked to examine VA's use of land-use agreements. This report addresses the extent to which VA (1) maintains reliable data on land-use agreements and the revenue they generate, (2) monitors the billing and collection processes at selected VA medical centers, and (3) monitors land-use agreements at selected VA medical centers. GAO analyzed data from VA's database on its land-use agreements for fiscal year 2012, reviewed agency documentation, and interviewed VA officials. GAO also visited three medical centers to review the monitoring of land-use agreements and the billing and collection of the associated revenues. GAO selected medical centers with the largest number of agreements or highest amount of estimated revenue. The site visit results cannot be generalized to all VA facilities. According to the Department of Veterans Affairs' (VA) Capital Asset Inventory system—the system VA utilizes to record land-use agreements and revenues—VA had hundreds of land-use agreements with tens of millions of dollars in estimated revenues for fiscal year 2012, but GAO's review raised questions about the reliability of those data. For example, one land-use agreement was recorded 37 times, once for each building listed in the agreement; 13 agreements terminated before fiscal year 2012 had not been removed from the system; and more than $240,000 in revenue from one medical center had not been recorded. VA relies on local medical center staff to enter data timely and accurately, but lacks a mechanism for independently verifying the data. Implementing such a mechanism and working with medical centers to make corrections as needed would better position VA to reliably account for its land-use agreements and the associated revenues they generate. GAO found weaknesses in the billing and collection processes for land-use agreements at three selected VA medical centers due primarily to ineffective monitoring. For example, VA incorrectly billed its sharing partners for 14 of 34 agreements at the three centers, which resulted in VA not billing $300,000 of the nearly $5.3 million owed. In addition, at the New York center, VA had not billed a sharing partner for several years' rent that totaled over $1 million. VA began collections after discovering the error; over $200,000 was outstanding as of April 2014. VA stated that it did not perform systematic reviews of the billing and collection practices at the three centers and had not established mechanisms to do so. VA officials at the New York and North Chicago centers stated that, due to a lack of collaboration, information on the status of agreements is also not shared in a timely manner with the offices that perform billing. Until VA addresses these issues, VA lacks assurance that it is collecting the revenues owed by its sharing partners. VA did not effectively monitor many of its land-use agreements at two of the centers. GAO found problems with unenforced agreement terms, expired agreements, and instances where land-use agreements did not exist. Examples include the following: In West Los Angeles, VA waived the revenues in an agreement with a nonprofit organization—$250,000 in fiscal year 2012 alone—due to the nonprofit's financial hardship. However, VA policy does not allow revenues to be waived.
In New York, one sharing partner—a local School of Medicine—with seven expired agreements remained on the property and occupied the premises without written authorization during fiscal year 2012. The City of Los Angeles has used 12 acres of VA land for recreational use since the 1980s without a signed agreement or payments to VA. An official said that VA cannot negotiate agreements due to an ongoing lawsuit brought on behalf of homeless veterans about its land-use agreement authority. VA does not perform systematic reviews and has not established mechanisms to do so, thus hindering its ability to effectively monitor its agreements and use of its properties. GAO is making six recommendations to VA, including recommendations to improve the quality of its data, foster collaboration between key offices, and enhance monitoring. VA concurred with the recommendations.
The act’s purposes are to provide Treasury with the authorities and facilities to restore liquidity and stability to the U.S. financial system while protecting taxpayers, including the value of their homes, college funds, retirement accounts, and life savings. The act also mandated that Treasury’s efforts help preserve homeownership and promote jobs and economic growth, maximize overall returns to taxpayers, and provide public accountability for the exercise of its authority. The act created OFS within Treasury to administer TARP, and OFS in turn created a number of programs designed to address various aspects of the unfolding financial crisis. Some of those programs resulted in the government having an ownership interest in several companies. The Capital Purchase Program (CPP) is the largest program, with several hundred participants, including Citi. Created in October 2008, it aimed to stabilize the financial system by providing capital to viable banks through the purchase of preferred shares and subordinated debentures. In addition to the value of the assets purchased, these transactions require that fixed dividends be paid on the preferred shares, that the debentures accrue interest, and that all purchases be accompanied by a warrant to purchase either common stock or additional senior debt instruments. Citi is one of several hundred participants in this program. The Targeted Investment Program (TIP) was created in November 2008 to foster market stability and thus strengthen the economy by investing in institutions that Treasury deemed critical to the functioning of the financial system. In addition to the value of the assets purchased, transactions under this program also required that fixed dividends be paid on the preferred shares and that all purchases be accompanied by a warrant to purchase common stock or additional senior debt instruments. TIP provided assistance to two institutions, which Treasury selected on a case-by-case basis. Citi is the only remaining participant but has recently announced plans to repay the Treasury. The Asset Guarantee Program (AGP) was created in November 2008 to provide federal government assurances for assets held by financial institutions that were deemed critical to the functioning of the U.S. financial system. Citigroup is the only institution participating in AGP. As a condition of participation, Citigroup issued preferred shares to the Treasury and the Federal Deposit Insurance Corporation (FDIC) and warrants to Treasury in exchange for their participation, along with the Federal Reserve Bank of New York (FRBNY), in providing $301 billion of loss protection on a specified pool of Citigroup assets. The Systemically Significant Failing Institutions Program was created in November 2008 to help avoid disruptions to financial markets from an institutional failure that Treasury determined would have broad ramifications for other institutions and market activities. AIG has been the only participant in this program and was targeted because of its close ties to other institutions. Assistance provided under this program is in addition to the assistance provided by FRBNY. Under this program, Treasury owns preferred shares and warrants. Treasury now refers to this program as the AIG, Inc. Investment Program. The Automotive Industry Financing Program (AIFP) was created in December 2008 to prevent a significant disruption of the U.S. automotive industry.
Treasury has determined that such a disruption would pose a systemic risk to financial market stability and have a negative effect on the U.S. economy. The program requires participating institutions to implement plans to show how they intend to achieve long-term viability. Chrysler and GM participate in AIFP. The government has a long history of intervening in markets during times of crisis. From the Great Depression to the Savings and Loan crisis of the 1980s, the government has shown a willingness to intervene in private markets when national interests are at stake. It has undertaken financial assistance efforts on a large scale, including to private companies and municipalities—for example, Congress created separate financial assistance programs totaling over $12 billion to stabilize Conrail, Lockheed, Chrysler, and the New York City government during the 1970s. Most recently, in response to the most severe financial crisis since the Great Depression, Congress authorized Treasury to buy or guarantee up to $700 billion of the “troubled assets” that were deemed to be at the heart of the crisis. The past and current administrations have used this funding to help stabilize the financial system and domestic automotive industry. While TARP was created to help address the crisis, the Treasury, Federal Reserve Board, FRBNY, and FDIC have also taken a number of steps to address the unfolding crisis. Looking at the government’s role in providing assistance to large companies dating back to the 1970s, we have identified three fundamental principles that can serve as a framework for large-scale federal financial assistance efforts and that still apply today. These principles are identifying and defining the problem, determining the national interests and setting clear goals and objectives that reflect them, and protecting the government’s interests. The federal response to the current financial crisis generally builds on these principles. Identifying and defining the problem includes separating out those issues that require an immediate response from structural challenges that will take more time to resolve. For example, in the case of AIFP, Treasury identified as a problem of national interest the financial condition of the domestic automakers and its potential to affect financial market stability and the economy at large. In determining what actions to take to address this problem, Treasury concluded that Chrysler’s and GM’s lack of liquidity needed immediate attention and provided short-term bridge loans in December 2008. Treasury also required Chrysler and GM to prepare restructuring plans that outlined how the automakers intended to achieve long-term financial viability and provided financial assistance to help them through the restructuring process. Determining national interests and setting clear goals and objectives that reflect them requires deciding whether a legislative solution or other government intervention best serves the national interest. For example, during the recent crisis Congress determined that government action was needed and Treasury determined that the benefits of intervening to support what were termed “systemically significant” institutions far exceeded the costs of letting these firms fail. As we have also seen during the current crisis, companies receiving assistance should not remain under federal protection indefinitely, and as we discuss later, Treasury has been clear that it wants to divest as soon as practicable. 
Because large-scale financial assistance programs pose significant financial risk to the federal government, they necessarily must include mechanisms to protect taxpayers. Four actions have been used to alleviate these risks in financial assistance programs: Concessions from others with a stake in the outcome—for example, from management, labor, and creditors—in order to ensure cooperation and flexibility in securing a successful outcome. For example, as a condition of receiving federal financial assistance, TARP recipients had to agree to limits on executive compensation and GM and Chrysler had to use their “best efforts” to reduce their workers’ compensation to what workers at foreign automakers receive. Controls over management, including the authority to approve financial and operating plans and new major contracts, so that any restructuring plans have realistic objectives and hold management accountable for achieving results. Under AIFP, Chrysler and GM were required to develop restructuring plans that outlined their path to financial viability. In February 2009, the administration rejected both companies’ restructuring plans and required them to develop more aggressive ones. The administration subsequently approved Chrysler’s and GM’s revised plans, which included restructuring the companies through the bankruptcy code. Adequate collateral that, to the extent feasible, places the government in a first-lien position in order to recoup maximum amounts of taxpayer funds. While Treasury was not able to fully achieve this goal given the highly leveraged nature of Chrysler and GM, FRBNY was able to secure collateral on its loans to AIG. Compensation for risk through fees and/or equity participation, a mechanism that is particularly important when programs succeed in restoring recipients’ financial and operational health. In return for the $62 billion in restructuring loans to Chrysler and GM, Treasury received 9.85 percent equity in Chrysler, 60.8 percent equity and $2.1 billion in preferred stock in GM, and $13.8 billion in debt obligations between the two companies. These actions have been important in previous financial crises, but the sheer size and scope of the current crisis has presented some unique challenges that affected the government’s actions. For example, as discussed later, as Treasury attempted to identify program goals and determine which ones would be in the national interest, its goals were broad and often conflicted. Likewise, while steps were taken to protect taxpayer interests, some actions resulted in increased taxpayer exposure. For example, preferred shares initially held in Citi offered more protection to taxpayers than the common shares into which they were converted. However, the conversion strengthened Citi’s capital structure. In the next section, we discuss the federal government’s actions in the current crisis that resulted in it having an ownership interest and provide information on how the government is managing its interests. In addition to these principles, we have also reported on important considerations for Treasury in monitoring and selling its ownership interest in Chrysler and GM, which may serve as useful guidelines for its investments in AIG and Citi as well. The considerations that we identified, based on interviews with financial experts and others, include the following: Retain necessary expertise.
Experts stressed that it is critical for Treasury to employ or contract with individuals with experience managing and selling equity in private companies. Individuals with investment, equity, and capital market backgrounds should be available to provide advice and expertise on the oversight and sale of Treasury’s equity. Monitor and communicate company, industry, and economic indicators. All of the experts we spoke with emphasized the importance of monitoring company-specific indicators and broader economic indicators such as interest rates and consumer spending. Monitoring these indicators allows investors, including Treasury, to determine how well the companies, and in turn the investment, are performing in relation to the rest of the industry. It also allows an investor to determine how receptive the market would be to an equity sale, something that contributes to the price at which the investor can sell. To the extent possible, determine the optimal time and method to divest. One of the key components of an exit strategy is determining how and when to sell the investment. Given the many different ways to dispose of equity—through public sales, private negotiated sales, all at once, or in batches—experts noted that the seller’s needs should inform decisions on which approach is most appropriate. Experts noted that a convergence of factors related both to financial markets and to the company itself creates an ideal window for an IPO; this window can quickly open and close and cannot easily be predicted. This requires constant monitoring of up-to-date company, industry, and economic indicators when an investor is considering when and how to sell. Manage investments in a commercial manner. Experts emphasized the importance of Treasury resisting external pressures to prioritize public policy goals over its role as a commercial investor. For example, some experts said that Treasury should not let public policy goals such as job retention interfere with its goal of maximizing its return on investment. Nevertheless, one expert suggested that Treasury should consider public policy goals and include the value of jobs saved and other economic benefits from its investment when calculating its return, since these goals, though not important to a private investor, are critical to the economy. Treasury ownership interests differ across the institutions that have received federal assistance, largely because of differences in the types of institutions and the nature of the assistance they received. Initially, Treasury had proposed purchasing assets from financial institutions as a way of providing liquidity to the financial system. Ultimately, however, Treasury determined that providing capital infusions would be the fastest and most effective way to address the initial phase of the crisis. As the downturn deepened, Treasury provided exceptional assistance to a number of institutions, including AIG, Citi, Chrysler, and GM. In each case, it had to decide on the type of assistance to provide and the conditions that would be attached. In several cases, the assistance resulted in the government obtaining an ownership interest that must be effectively managed. First, Treasury has committed almost $70 billion of TARP funds for the purchase of AIG preferred stock, $43.2 billion of which had been invested as of September 30, 2009. The remainder may be invested at AIG’s request. As noted earlier, FRBNY has also provided secured loans to AIG.
In consideration of the loans, AIG deposited into a trust convertible preferred shares representing approximately 77.9 percent of the current voting power of the AIG common shares after receiving a nominal fee ($500,000) paid by FRBNY. The trust is managed by three independent trustees. The U.S. Treasury (i.e., the general fund), not the Department of the Treasury, is the sole beneficiary of the trust proceeds. Second, Treasury purchased $25 billion in preferred stock from Citi under CPP and an additional $20 billion under TIP. Each of these preferred stock acquisitions was also accompanied by a warrant to purchase Citi common stock. Treasury has also received $4.03 billion in Citi preferred stock through AGP as a premium for Treasury’s participation in a guarantee against losses on a defined pool of $301 billion of assets owned by Citi and its affiliates. As part of a series of transactions designed to strengthen Citi’s capital, Treasury exchanged all its preferred shares in Citi for a combination of common shares and trust-preferred securities. This exchange, which was completed in July 2009, gave Treasury an almost 34 percent common equity interest in the bank holding company. Finally, under AIFP Treasury owns 9.85 percent of the common equity in the restructured Chrysler and 60.8 percent of the common equity, plus $2.1 billion in preferred stock in the restructured GM. Treasury’s ownership interest in the automakers was provided in exchange for the assistance Treasury provided before and during their restructurings. The restructured Chrysler is to repay Treasury $7.1 billion of the assistance as a term loan, and the restructured GM is to repay $7.1 billion of the assistance as a term loan. Recognizing the challenges associated with the federal government having an ownership interest in the private market, the administration developed several guiding principles for managing its TARP investments. According to Treasury, it has developed core principles that will guide its equity investments going forward, which are discussed in detail in OFS’s financial report. Acting as a reluctant shareholder. The government has no desire to own equity stakes in companies any longer than necessary and will seek to dispose of its ownership interests as soon as it is practical to do so—that is, when the companies are viable and profitable and can contribute to the economy without government involvement. Not interfering in the day-to-day management decisions of a company in which it is an investor. In exceptional cases, the government may determine that ongoing assistance is necessary but will reserve the right to set upfront conditions to protect taxpayers, promote financial stability, and encourage growth. When necessary, these conditions may include restructurings similar to that now under way at GM and changes to help ensure a strong board of directors. Ensuring a strong board of directors. After any up-front conditions are in place, the government will protect the taxpayers’ investment by managing its ownership stake in a hands-off, commercial manner. Any changes to boards of directors will be designed to help ensure that they select management with a sound long-term vision for restoring their companies to profitability and ending the need for government support as quickly as possible. The government will not interfere with or exert control over day-to-day company operations, and no government employees will serve on the boards or be employed by these companies. Exercising limited voting rights. 
As a common shareholder, the government will vote on only core governance issues, including the selection of a company’s board of directors and major corporate events or transactions. While protecting taxpayer resources, the government has said that it intends to be extremely disciplined as to how it uses even these limited rights. Treasury’s investments have generally been in the form of nonvoting securities. For example, the preferred shares that Treasury holds in financial institutions under CPP do not have voting rights except in certain limited circumstances, such as amendments to the charter of the company or in the event that dividends are not paid for several quarters (in which case Treasury has the right to elect two directors to the board). However, the agreements that govern Treasury’s common ownership interest expressly state that Treasury does not have the right to take part in the management or operation of the company other than voting on certain issues, which are summarized in the following table (table 1). The AIG trust created by FRBNY owns shares that carry 77.9 percent of the voting rights of the common stock. FRBNY has appointed three independent trustees who have the power to vote and dispose of the stock with prior FRBNY approval and after consultation with Treasury. The trust agreement provides that the trustees cannot be employees of Treasury or FRBNY, and Treasury does not control the trust or direct the actions of the trustees. Treasury also owns AIG preferred stock, which does not have voting rights except in certain limited circumstances (such as amendments to the charter) or in the event dividends are not paid for four quarters, in which case Treasury has the right to elect additional directors to the board. As a condition of receiving exceptional assistance, Treasury placed certain conditions on these companies. Specifically, the agreements with the companies impose certain reporting requirements and include provisions such as restrictions on dividends and repurchases, lobbying expenses, and executive compensation. The companies were also required to establish internal controls with respect to compliance with applicable restrictions and provide reports certifying their compliance. While all four institutions were subject to internal control requirements, as set forth in the credit and other agreements that outline Treasury’s and the companies’ roles and responsibilities, Chrysler and GM have agreed to (1) produce a portion of their vehicles in the United States; (2) report to Treasury on events related to their pension plans; and (3) report to Treasury monthly and quarterly financial, managerial, and operating information. More specifically, Chrysler must either manufacture 40 percent of its U.S. sales volume in the United States, or its U.S. production volume must be at least 90 percent of its 2008 U.S. production volume. In addition, Chrysler’s shareholders, including Treasury, have agreed that Fiat’s equity stake in Chrysler will increase if Chrysler meets benchmarks such as producing a vehicle that achieves a fuel economy of 40 miles per gallon or producing a new engine in the United States. GM must use its commercially reasonable best efforts to ensure that the volume of manufacturing conducted in the U.S. is consistent with at least 90 percent of the level envisioned in GM’s business plan. 
Treasury has stated that it plans to manage its equity interests in Chrysler and GM in a hands-off manner and does not plan to manage its interests to achieve social policy goals. But Treasury officials also noted that some requirements reflect the administration’s views on responsibly utilizing taxpayer resources for these companies as well as efforts to protect Treasury’s financial interests as a creditor and equity owner. As a condition of receiving exceptional assistance, all four institutions must also adhere to the executive compensation and corporate governance rules established under the act, as amended by the American Recovery and Reinvestment Act of 2009 (ARRA), which limited the compensation of the most highly paid executives. Treasury also created the Office of the Special Master (Special Master) to carry out this requirement. The Special Master generally rejected the companies’ initial proposals for compensating the top 25 executives and approved a modified set of compensation structures with the following features: generally limited salaries to no greater than $500,000, with the remainder of compensation in equity; most compensation paid as vested “stock salary,” which executives must hold until 2011, after which it can be transferred by executives in three equal annual installments (subject to acceleration upon the company’s repayment of TARP funds); annual incentive compensation payable in “long-term restricted stock,” which requires three years of service, in amounts determined based on objective performance criteria; actual payment of the restricted stock is subject to the company’s repayment of TARP funds (in 25 percent installments); a $25,000 limit on perquisites and “other” compensation, absent special approval; and no further accruals or company contributions to executive pension plans. The Special Master also made determinations about the compensation structures (but not individual salaries) of these companies’ next 75 most highly compensated employees. He rejected the proposed compensation structures for the companies subject to review, so the companies must make additional changes to their compensation structures and resubmit them for approval. One of the principles guiding the government’s management of its investments in the companies is monitoring and communicating information from company, industry, and economic indicators. According to OFS, the asset management approach is designed to implement these guiding principles. It attempts to protect taxpayer investments and promote stability by evaluating systemic and individual risk through standardized reporting and proactive monitoring and ensuring adherence to the act and compliance with contractual agreements. Treasury has developed a number of performance benchmarks that it routinely monitors. For example, as we reported in November, Treasury will monitor financial and operational data such as cash flow, market share, and market conditions and use this information to determine the optimal time and method of sale. Similarly, for AIG and Citi, Treasury has been monitoring liquidity, capital, profits/losses, loss reserves, and credit ratings. Treasury has hired an outside asset management firm to monitor its investment in Citigroup. The valuation process includes tracking market conditions on a daily basis and collecting data on indicators such as credit spreads, bond and equity prices, liquidity, and capital adequacy.
To monitor its investment in AIG, Treasury also coordinates with FRBNY in tracking liquidity, weekly cash forecasts, and daily cash reports, among other indicators. As part of our ongoing work with SIGTARP, we are reviewing the extent of government involvement in the corporate governance and operations of companies that have received exceptional assistance, Treasury’s mechanisms for ensuring that companies are complying with key covenants, and the government’s management of the investments and its divestiture strategies. Today, we will highlight some of our preliminary observations from this review, including observations about the advantages and disadvantages of managing these investments directly or through a trust arrangement. According to OFS, investments are managed on the individual (institutional and program) and portfolio levels. As previously discussed, the government generally does not manage the day-to-day activities of the companies. Rather, Treasury monitors the financial condition of the companies with the goal of achieving financial viability. In conducting the portfolio management activities, OFS employs a mix of professional staff and external asset managers. According to OFS, these external asset managers provide periodic market-specific information such as market prices and valuations, as well as detailed credit analysis using public information. A portfolio management leadership team oversees the work of asset management employees organized on a program basis, so that investment and asset managers may follow individual investments. OFS uses this strategy to manage its investments in Citi, Chrysler, and GM, and the independent trustees of the AIG trust manage the government’s common equity interest in AIG. According to officials we interviewed, each structure—managing the investment directly or through a trust—has advantages and disadvantages. Directly managing the investments offers two significant advantages. First, it affords the government the greatest amount of control over the investment. Second, having direct control over investments better enables the government to manage them as a single portfolio. However, such a structure also has disadvantages. For example, having the government both regulate and hold an ownership interest in an institution or company could create a conflict of interest and potentially expose the government to external pressures. Treasury officials have noted that they have been contacted by members of Congress expressing concern about dealership closings, and as long as Treasury maintains ownership interests in Chrysler and GM, it will likely be pressured to influence the companies’ business decisions. Further, a direct investment requires that the government have staff with the requisite skills. For instance, as long as Treasury maintains direct control of its ownership interest in Citi, Chrysler, and GM, among others, it must have staff or hire contractors with the necessary expertise in these specific types of companies. In our previous work, we questioned whether Treasury would be able to retain the needed expertise to assess the financial condition of the auto companies and develop strategies to divest the government’s interests, given the substantial decline in the number of staff and the lack of dedicated staff providing oversight of its investments in the automakers. We recommended that Treasury take action to address this concern. In contrast, a trust structure puts the government’s interest in the hands of an independent third party.
While Treasury has interpreted the act as currently prohibiting placing TARP assets in a trust structure, FRBNY was able to create a trust to manage the government’s ownership interest in AIG. One potential advantage of a trust structure is that it helps to avoid any potential conflicts of interest that could stem from the government’s having both regulatory functions and ownership interests in a company. It also mitigates any perception that actions taken with respect to TARP recipients were politically motivated or that any actions taken by Treasury were based on any “inside information” received from the regulators. Conversely, a trust structure largely removes control of the investment from the government. Finally, the trustees would also require specialized staff or contractors and would need to develop their own mechanisms to monitor the investments, analyze the data needed to assess the financial condition of the institutions or companies, and decide when to divest. We are reviewing Treasury’s plans for divesting its investments and, so far, have found that the strategy is evolving. Although Treasury has stated that it intends to sell the federal government’s ownership interest as soon as doing so is practical, it has yet to develop exit strategies for unwinding most of these investments. For Citi, Chrysler, and GM, Treasury will decide when and how to divest its common shares. With the exception of the TARP investments, the AIG trustees, with FRBNY approval, generally are responsible for developing a divestiture plan for the shares in the trust. For Chrysler and GM, Treasury officials said that they planned to consider all options for selling the government’s ownership stakes in each company. However, they noted that the most likely scenario for GM would be to dispose of Treasury’s equity in the company through a series of public offerings. While Treasury has publicly discussed the possibility of selling part of its equity in the company through an initial public offering (IPO) that would occur sometime in 2010, some experts we spoke with had doubts about this strategy. Two said that GM might not be ready for a successful IPO by 2010 because the company might not have demonstrated sufficient progress to attract investor interest, and two other experts noted that 2010 would be the earliest possible time for an IPO. Treasury officials noted that a private sale would be more likely for Chrysler because the equity stake is smaller. Several of the experts we interviewed agreed that non-IPO options could be possible for Chrysler, given the relatively smaller stake Treasury has in the company (9.85 percent, versus its 60.8 percent stake in GM) and the relative affordability of the company. Determining when and how to divest the government’s equity stake will be one of the most important decisions Treasury will have to make regarding the federal assistance provided to the domestic automakers, as this decision will affect the overall return on investment that taxpayers will realize from aiding these companies. Given the complexity and importance of this decision, we recently recommended that Treasury develop criteria for evaluating the optimal method and timing for divesting its equity stake.
For example, the goal of protecting the taxpayers’ interests must be balanced against Treasury’s goal of divesting ownership interests as soon as it is feasible. Consequently, Treasury must temper any desire to exit as quickly as possible with the need to maintain its equity interest long enough for the companies to demonstrate sufficient financial progress. Second, an important part of Treasury’s management of these investments is establishing and monitoring benchmarks that will inform the ultimate decision on when and how to sell each investment. To ensure that taxpayer interests are maximized, it will be important for Treasury to monitor these benchmarks regularly. And finally, while many agree that TARP funding has contributed to the stabilization of the economy, the significant sums of taxpayer dollars that are invested in a range of private companies warrant continued oversight and development of a prudent divestiture plan. Mr. Chairman, Ranking Member Jordan, and Members of the Subcommittee, we appreciate the opportunity to discuss these critically important issues and would be happy to answer any questions that you may have. Thank you. For further information on this testimony, please contact Orice Williams Brown at (202) 512-8678 or [email protected] or A. Nicole Clowers at (202) 512-4010 or [email protected]. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. Individuals making key contributions to this testimony were Emily Chalmers, Rachel DeMarcus, Francis A. Dymond, Nancy M. Eibeck, Sarah A. Farkas, Heather J. Halliwell, Cheryl M. Harris, Debra R. Johnson, Christopher Ross, and Raymond Sendajas. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The recent financial crisis resulted in a wide-ranging federal response that included infusing capital into several major corporations. The Troubled Asset Relief Program (TARP) has been the primary vehicle for most of these actions. As a result of these and other actions, the government is a shareholder in the American International Group (AIG), Citigroup Inc. (Citi), Chrysler Group LLC (Chrysler), and General Motors Company (GM), among others. As market conditions have become less volatile, the government has been considering how best to manage these investments and ultimately divest them. This testimony discusses (1) the government's approach to past crises and challenges unique to the current crisis; (2) the principles guiding the Department of the Treasury's implementation of its authorities and mechanisms for managing its investments; and (3) preliminary views from GAO's ongoing work with the Special Inspector General for TARP on the federal government's monitoring and management of its investments. This statement builds on GAO's work since the 1970s on providing government assistance to large corporations and more recent work on oversight of the assistance and investments provided under TARP. In its November 2009 report, GAO recommended that Treasury ensure it has the expertise needed to monitor its investments in Chrysler and GM and that it has a plan for evaluating the optimal method and timing for divesting this equity. Looking at the government's role in providing assistance to large companies dating back to the 1970s, we have identified principles that serve as a framework for such assistance, including identifying and defining the problem, setting clear goals and objectives that reflect the national interest, and protecting the government's interests. These actions have been important in the past, but the current financial crisis has unique challenges, including the sheer size and scope of the crisis, that have affected the government's actions. As a result, the government's response has involved actions on the national and international levels and oversight and monitoring activities tailored to specific institutions and companies. We have also reported on considerations important for Treasury's approach to monitoring its investments in the companies that received assistance. The administration developed several guiding principles for managing its ownership interests in AIG, Citigroup, Chrysler, and GM. It does not intend to own equity stakes in companies on a long-term basis and plans to exit from them as soon as possible. It reserves the right to set up-front conditions to protect taxpayers, promote financial stability, and encourage growth. It intends to manage its ownership stake in institutions and companies in a hands-off, commercial manner and to vote only on core governance issues, such as the selection of a company's board of directors. Treasury has also required companies and institutions that receive assistance to report on their use of funds and has imposed restrictions on dividends and repurchases, lobbying expenses, and executive compensation, among other things. As part of its oversight efforts, it also monitors a number of performance benchmarks. Chrysler and GM will submit detailed financial and operational reports to Treasury, while an asset management firm will monitor the data on Citi, including credit spreads, liquidity, and capital adequacy.
To monitor its investment in AIG, Treasury coordinates with the Federal Reserve Bank of New York in tracking liquidity and cash reports, among other indicators. Treasury directly manages its investment in Citi, Chrysler, and GM, but the common equity investment in AIG, obtained with the assistance of the Federal Reserve, is managed through a trust arrangement. Each of these management strategies has advantages and disadvantages. Directly managing the investment affords the government the greatest amount of control but could create a conflict of interest if the government both regulates and has an ownership share in the institutions and could expose the government to external pressures. A trust structure, which places the government's interest with a third party, could mitigate any potential conflict-of-interest risk and reduce external pressures. But a trust structure would largely remove accountability from the government for managing the investment. GAO is reviewing Treasury's plans for managing and divesting itself of its investments, but the plans are still evolving, and, except for Citi, Treasury has yet to develop exit strategies for unwinding the investments.
BNL conducts basic and applied research in a multitude of scientific disciplines, including experimental and theoretical physics, medicine, chemistry, biology, and the environment. BNL’s fiscal year 1996 budget was about $410 million. It employs about 3,200 people, including 900 scientists and engineers. As the operating contractor for BNL, AUI is responsible for day-to-day activities at the laboratory. Originally founded by nine universities, AUI has operated as a separate not-for-profit corporation since 1986. DOE’s Brookhaven Group and DOE’s Chicago Operations Office managed BNL for the Department. DOE’s Office of Energy Research is the principal headquarters organization responsible for BNL-wide programs, infrastructure, and environment, safety and health (ES&H). However, other DOE program offices, including the Office of Nuclear Energy and the Office of Environmental Management, have significant responsibilities for activities at BNL, as does the Office of Environment, Safety and Health, which also monitors and evaluates the laboratory’s activities. At the local level, the Suffolk County Health Department is responsible for ensuring that BNL and private industries operating within the county do not contaminate the underground aquifer that provides the only source of drinking water for its 1.3 million residents. As a consequence of local citizens’ sensitivity to possible contamination of the aquifer, the county has developed regulations that require underground tanks that contain potential contaminants to be lined to prevent the tanks from leaking. In 1987, after local hearings on chemical and radioactive releases at the laboratory, officials representing the county health department, DOE, and BNL signed an agreement that the laboratory would meet the county’s requirements and would strive to minimize contamination of the aquifer. The agreement also allowed county health department officials access to BNL to inspect facilities and to identify tanks and other facilities that did not adhere to the county’s requirements. The laboratory’s High Flux Beam Reactor is the larger of the laboratory’s two research reactors and is regulated by, and must conform to, standards that DOE and the Environmental Protection Agency (EPA) establish. Although its main purpose is to produce neutrons for scientific experiments, the reactor’s cooling water becomes contaminated with the radioactive element tritium during operations. Tritium has many uses in medicine and biological research and is commonly used in self-illuminating wrist watches and exit signs. However, tritium is a health concern if ingested or absorbed into the body in large quantities. The reactor’s 68,000-gallon spent-fuel pool has high concentrations of tritium stemming from the reactor’s operations. Built in the early 1960s, the reactor’s spent-fuel pool is made of concrete but does not have a secondary containment, such as a stainless steel liner, to protect against possible leaks. Newer reactor fuel pools must have secondary containment systems to protect against such leaks. In January 1997, the laboratory’s analysis of water samples taken near the reactor revealed concentrations of tritium that greatly exceeded EPA’s drinking water standards (some samples taken later were 32 times the standard). Laboratory officials attributed the leak to the reactor’s spent-fuel pool.
Although the tritium posed little threat to the public, a firestorm of public concern erupted because BNL had delayed until 1996 installing monitoring wells near the reactor despite a 1994 agreement by laboratory staff with Suffolk County officials to do so, and BNL officials reported that the tritium had probably been leaking for at least 12 years without the laboratory’s or DOE’s knowledge. Shortly after the tritium levels were made public, DOE’s Office of Oversight, which reports to the Assistant Secretary for Environment, Safety and Health, launched an investigation of the incident. On February 14, 1997, it released a report highly critical of both BNL’s actions and DOE’s oversight performance. A second report was issued in April 1997. In addition, the Attorney General of New York State issued a report on October 16, 1997, which was critical of BNL’s and DOE’s environmental performance. The Attorney General recommended that BNL’s reactor remain idle until significant improvements are made in the laboratory’s and DOE’s environmental management practices. (A figure in the original report presents a chronology of key events, from 1987 through 1997, related to the spent-fuel pool, the monitoring wells, and the discovery of the tritium leak.) The series of events that led to the discovery of a tritium leak started in the mid-1980s when rising levels of tritium were first detected in groundwater on BNL. The key events are as follows: Higher than expected levels of tritium were first discovered in a drinking water well about 500 feet from the reactor in 1986. BNL officials at the time reasoned that the tritium came from local sewer lines and did not suspect the reactor’s spent-fuel pool as a source. Sewer lines were a known source of tritium. Tritium originated from condensation that forms inside the reactor building and eventually reached the laboratory’s sewer system. No further samples were taken from this well, which was closed because of high levels of other nonradioactive contaminants. In 1987, DOE and BNL officials signed an agreement with Suffolk County which stated that the laboratory would conform to the environmental provisions of the county’s sanitary code and allowed county officials to inspect BNL property for the first time. In 1988, Suffolk County, which was registering BNL’s underground tanks for eventual regulatory compliance, told the laboratory that it wanted the reactor’s spent-fuel pool listed as a tank. In 1989, BNL disagreed with the county’s position.
To allay the county’s concerns, BNL said that the pool did not leak because it had successfully passed a leak test in 1989. BNL also said that two monitoring wells that were installed in 1989 near the reactor did not indicate any leaking from the reactor’s spent-fuel pool. Although BNL officials later told us that the leak test was not accurate and that the two monitoring wells they installed earlier were in the wrong location to detect the tritium contamination, they relied on these data at the time as the basis for their confidence that the spent-fuel pool did not leak. During the late 1980s, the laboratory was coming under increasing environmental scrutiny. A 1988 DOE environmental survey reported weaknesses in BNL’s groundwater monitoring program and noted that local citizens were concerned about groundwater contamination at the laboratory. In 1989, the EPA listed BNL as a Superfund site because of an old landfill problem. New York State had listed BNL as a state Superfund site 3 years earlier. In 1990, a special DOE headquarters inspection concluded that BNL did not have an adequate groundwater monitoring program. By 1993, BNL had begun discussing the need for additional monitoring wells near the reactor: a BNL reactor official raised the issue with other BNL staff, prompted by a Nuclear Regulatory Commission information bulletin that emphasized the need to monitor potential leaks from old equipment. Using BNL’s data as support, a 1993 DOE report noted that the spent-fuel pool was not leaking. The report also noted, however, that there was no reliable means of determining if the spent-fuel pool was leaking. In early 1994, a BNL engineer proposed that monitoring wells—at a total cost of $15,000 to $30,000—be drilled near the reactor, citing the reason as “good management practice.” The proposal was given a low priority by a team of BNL and DOE officials that reviewed environment, safety and health proposals. The well proposal did not rank sufficiently high, compared with other ES&H proposals, to receive funding. BNL officials continued to believe that the spent-fuel pool was not leaking. By late 1994, Suffolk County advised the laboratory that, under its regulations, the spent-fuel pool must be upgraded or abandoned. County officials told us that their demand on the laboratory to upgrade the spent-fuel pool was part of a general effort to upgrade all tanks that were still out of compliance with their sanitary code. The officials told us that they did not suspect that the spent-fuel pool was leaking. However, in their November quarterly meeting with Suffolk County, BNL and DOE staff agreed to install monitoring wells. The agreement was made at the staff level with no apparent senior management involvement in, or knowledge of, the agreement. In late 1994, plans were begun for installing the monitoring wells. However, because of a subsequent budget cut, the wells were not funded. In early 1996, the wells were again approved for funding and were installed that July. The first samples from the new wells were taken in October, and results were returned in December. Additional samples were taken that month and were returned in January 1997. The additional samples reflected tritium levels far exceeding EPA’s drinking water standards. Further testing showed that an underground tritium “plume” of about 2,200 feet in length was coming from the reactor’s spent-fuel pool and had been developing for at least 12 years.
On the basis of a new leak test, the pool was estimated to have been leaking from 6 to 9 gallons of tritium-contaminated water per day. The four previous leak tests in 1989, 1994, 1995, and 1996 had used less sophisticated measurement techniques that failed to show the leak. Responsibility for the conditions at BNL is shared among BNL, the Chicago Operations Office, the Brookhaven Group, and DOE headquarters managers. BNL treated the potential for a tritium leak as a low priority in the face of growing environmental concerns from the public and failed to follow through on its own commitments made by laboratory staff to local regulatory officials. DOE’s Brookhaven Group, which had line accountability over BNL activities, failed to hold the laboratory accountable for meeting its agreements with local authorities. Finally, DOE headquarters shares responsibility for perpetuating a management structure with unclear responsibility for achieving ES&H objectives. BNL officials told us they assigned a low priority to drilling the monitoring wells that could have detected the tritium leak because they believed that there was no urgency to the task. In reaching this conclusion, laboratory officials relied heavily on leak rate tests conducted by in-house personnel during 1989, 1994, 1995, and 1996 which indicated that the spent-fuel pool was not leaking. BNL officials acknowledge, in retrospect, that these tests were not carefully conducted because laboratory staff failed to accurately measure the spent-fuel pool’s evaporation rate. Tests conducted after the tritium leak was discovered more accurately accounted for evaporation rates and concluded that the pool was leaking 6 to 9 gallons per day. The officials who conducted the pool leak tests, who were part of the laboratory’s reactor division, told us that they believed the tests were accurate because repeated tests produced the same results. Staff from the laboratory’s safety and environmental protection division told us they did not question the reactor division’s tests because of a high regard for its work. However, the laboratory’s own investigation of the tritium leak concluded that the laboratory’s safety and environmental protection division should have placed more emphasis on assessing potential risk and should have questioned the reactor division on the accuracy of the test results. BNL officials also relied on well-sampling results to reinforce their position that the spent-fuel pool was not leaking, but these samples did not provide adequate coverage of the area surrounding the reactor where the spent-fuel pool was located. BNL officials relied on two wells that were installed southeast (in the general direction of the underground water flow) of the reactor in 1989. They were part of a group of 51 wells installed throughout the laboratory site in response to a need to improve BNL’s groundwater monitoring program. BNL used the results from the two monitoring wells near the reactor as further evidence that the spent-fuel pool was not leaking because water samples from these wells did not identify the tritium leak. Laboratory officials told us, in retrospect, that they erred in using the results from these wells, which were not in the correct location to detect the tritium leak. They also told us that their understanding of the hydrology at the site at the time led them to believe that the wells would adequately monitor the groundwater flow. 
The intensity of the public’s outcry following the announcement of the tritium leak was substantial, suggesting a lack of appreciation on the part of BNL in gauging the public’s concern for environmental and public safety matters. Several factors suggest that the public’s reaction could have been better anticipated. For example, Long Island residents have long been concerned with the quality of their drinking water and the potential harmful effects from laboratory-generated pollution. The county had been extensively monitoring for laboratory pollutants in the groundwater for years, and for tritium since 1979. Furthermore, DOE had been paying nearby residents’ costs to switch from private wells to public water systems, a policy stemming in part from past groundwater chemical contamination coming from the laboratory and from other industrial sources. DOE’s Assistant Secretary for Environment, Safety and Health; the Director of the Office of Nuclear Energy, Science and Technology; and the Director of the Office of Energy Research all told us of their dissatisfaction with BNL’s and the Brookhaven Group’s inability to develop effective ways to maintain the public’s trust. DOE’s Office of Oversight officials, who have conducted reviews of many different DOE facilities—including three other laboratories—told us that compared to other DOE facilities, BNL was relatively slow in developing mechanisms to gauge changes in the public’s attitude toward the laboratory. For example, DOE and BNL had not established a publicly accepted citizen advisory committee, such as DOE has done with some of its environmental restoration sites, and had not developed an effective strategy for anticipating the public’s concerns. The Brookhaven Group did not aggressively monitor the laboratory’s efforts to comply with an agreement made by laboratory staff to Suffolk County to install monitoring wells near the reactor. More rigorous attention to this agreement could have led to monitoring wells being installed more promptly. In their November 1994 meeting with Suffolk County officials, DOE and BNL staff agreed to install monitoring wells near the reactor. The agreement was made in response to Suffolk County’s concern about the laboratory’s progress in upgrading its many underground tanks (upgrading underground tanks was an important feature of the county’s 1987 agreement with DOE and BNL). This agreement was summarized in the minutes from the November 1994 meeting. The proposal to install the wells was reported in subsequent BNL project schedules, which were reviewed by BNL and DOE management. The informality of the agreement to install monitoring wells made at the November meeting with Suffolk County officials had several important consequences. DOE and laboratory staff told us they did not track the laboratory’s progress toward installing the wells. Also, because the agreements were made at the staff level and were documented only by informal notes, senior laboratory officials and DOE managers told us they were not aware that an agreement had been made. Thus, these managers lacked the information they needed to (1) gauge the relative importance of the staff’s recommendations to install the wells and (2) use this information to adjust funding priorities, such as reallocating funding among laboratory programs. Also, DOE has never completely reviewed the laboratory’s progress in complying with the county’s sanitary code, nor does it document its activities associated with county compliance issues. 
DOE has had a policy in place since 1994 that requires its staff to be accountable for “diligent follow-up and timely results from the commitments they make.” While DOE’s fiscal year 1994 and 1995 performance appraisals of BNL noted laboratory progress toward complying with the county’s sanitary code, they noted that more progress was needed. DOE headquarters, the Chicago Operations Office, and the Brookhaven Group conducted 48 evaluations of environment, safety and health related issues during fiscal years 1994 through 1996. However, the deputy manager of the Brookhaven Group told us that his office had never evaluated the laboratory’s compliance with the county’s requirements. Although the Brookhaven Group was directly accountable for BNL during the time the tritium leak went unnoticed, weaknesses in how environment, safety and health activities are budgeted and managed make accountability unclear. There is no central budget for ES&H activities, nor is responsibility clearly established for achieving ES&H goals. These weaknesses are the direct responsibility of DOE’s senior leadership. Many different headquarters program offices are responsible for environment, safety and health, and groundwater monitoring activities: The Office of Nuclear Energy, Science and Technology has primary headquarters responsibility for operating the reactor. The Office of Energy Research funds operations and scientific research at the reactor; it also provides most of the funds spent at the site and operates and maintains infrastructure and general environmental compliance activities, such as groundwater monitoring. The Office of Environmental Management also conducts groundwater monitoring as part of the site’s cleanup activities; funds provided by this office are earmarked for its programs only. The varying responsibilities of these headquarters offices contribute to an unclear pattern of funding at the laboratory level. For example, the monitoring wells could have been funded by BNL’s (1) reactor division, which operates and maintains the reactor; (2) safety and environmental protection division, which manages an ES&H account derived from overhead funds; or (3) plant engineering division, which has an ES&H budget account. Plant engineering actually funded the monitoring wells because the reactor division staff did not believe it was their responsibility to pay for the wells—they wanted the safety and environmental protection division to pay for them. DOE’s complex organizational structure prevented effective accountability over the Brookhaven Group. As shown in figure 2, the Brookhaven Group was part of the Chicago Operations Office. Chicago reports to the Associate Deputy Secretary for Field Management, who is responsible to the Deputy Secretary. However, Energy Research is the “lead” program office at BNL and has direct responsibility over laboratory program activities, including environment, safety and health requirements. Yet this office reports to the Under Secretary, who is in a different chain of command. Completely outside of these chains of command is the Office of Environment, Safety and Health, which is an independent oversight office that has no direct line authority over the Brookhaven Group. In commenting on a draft of this report, DOE noted that the Office of Energy Research was only responsible for ES&H oversight of those activities at BNL that it directly funded.
Further, DOE commented that while the Office of Energy Research funded the reactor, the Office of Nuclear Energy, Science and Technology had principal headquarters responsibility for ES&H and that both the Chicago Operations Office and the Brookhaven Group had the primary role for ensuring ES&H performance. We believe that DOE’s comments further illustrate the unclear accountability for ES&H at BNL. DOE’s unclear lines of authority with respect to ES&H matters are not a new issue. A 1993 DOE ES&H assessment team concluded in its review that headquarters program offices (Energy Research; Nuclear Energy, Science and Technology; and Environmental Management) “. . . do not integrate their efforts in resolving common ES&H issues . . . . Managers and staff are not clearly held accountable to ensure that ES&H programs are appropriately developed and are implemented in a formal and rigorous manner.” In its April 1997 report on BNL, DOE’s Office of Environment, Safety and Health made similar observations, concluding that there is confusion in DOE headquarters about roles, responsibilities, and authorities, especially in connection with multiprogram laboratories. The report cited a lack of clarity about the responsibility for ensuring the protection of workers and the environment in the operation of BNL. DOE’s management structure problems are long-standing: In its September 1997 report, DOE’s Laboratory Operations Board cited inefficiencies that resulted from DOE’s complicated management structure in both headquarters and the field and recommended that DOE undertake a “major effort” to rationalize and simplify its headquarters and field management structure to create more effective line management. In testimony before the Congress on October 9, 1997, DOE’s Inspector General cited confusion in DOE’s management structure and recommended that DOE establish more direct lines of accountability for managing the national laboratories. A May 1995 DOE internal paper, prepared as part of the Department’s Strategic Alignment Initiative, concluded that the lack of clear roles and responsibilities between headquarters and field units reduces authority, creates confusion and overlapping guidance, and reduces the linkage between performance and accountability. We reported on unclear roles and responsibilities between headquarters and field offices in our 1993 report on DOE management issues. In that report, we cited examples from DOE officials on accountability confusion caused by DOE’s management structure. The DOE Office of Oversight’s report on BNL also noted a recent headquarters policy change that could further prevent field offices, such as the Brookhaven Group, from providing effective oversight of their contractors. The Office said that DOE should reconsider its direction, under contract reform, to reduce the oversight of contractors’ environment, safety and health performance. The report also noted that while DOE’s new policy is to rely more on “performance metrics,” such an approach does not serve as an effective mechanism to monitor the contractor’s day-to-day environment, safety and health performance. DOE headquarters, the Chicago Operations Office, and the Brookhaven Group all share responsibility for ensuring that the evaluation criteria used in AUI’s contract reflect agreed-upon departmental priorities.
DOE’s performance measures for AUI did not reflect the priority that DOE espouses for ES&H, a condition that further limited the ability of its Brookhaven Group to hold the contractor accountable for high standards of ES&H performance. Specifically, in the 1996 contract, only 7.5 percent of DOE’s performance evaluation criteria addressed BNL’s ES&H activities. For its 1994 and 1995 annual appraisals of laboratory activities, ES&H criteria were not specifically identified, but were part of the “Environmental Compliance” and “Reactor Safety” rating elements, and were relatively minor aspects of each year’s evaluation. DOE consistently rated AUI’s performance on these ES&H-related issues either “Good” or “Excellent.” “Outstanding” was the highest available score. Prior to 1996, AUI was not rated on public trust issues. For its 1996 performance contract, an element called “Communications and Trust” was added, along with “Environment, Safety and Health.” The communications and trust element was given a 7.5 percent weight in the AUI evaluation criteria. AUI rated itself “Excellent” in both categories, but these scores were overridden by DOE to reflect “marginal” performance. DOE’s Office of Oversight report noted that measurable ES&H performance elements are not incorporated into BNL managers’ annual performance appraisals, nor are ES&H roles clearly delineated. The report also noted that some senior BNL line managers are focusing almost exclusively on scientific programs and are not being held accountable for ES&H. When we asked to examine the appraisals for BNL’s senior manager responsible for making ES&H decisions, we were advised that these appraisals were not formally documented. DOE acknowledges its management structure weaknesses. After the tritium leak was discovered in January, the Secretary eliminated the Chicago Operations Office from the reporting chain, having the Brookhaven Group report directly to headquarters. Also, DOE headquarters was heavily involved in technical decisions surrounding the tritium remediation activities and in responding to public concerns. In July 1997, DOE completed its action plan for addressing issues relating to the tritium leak. Its planned steps include better describing environment, safety and health roles and responsibilities in DOE headquarters and field offices, establishing a corporate budget process for ES&H, and strengthening the Office of Energy Research’s focus on ES&H as part of its lead responsibility to oversee BNL. DOE’s action plan also has measures for changing the ES&H “culture” at BNL and expanding community outreach. The plan proposes several other initiatives, such as a Headquarters-Brookhaven Management Council, chaired by the Director of the Office of Energy Research, to better coordinate activities at the laboratory and to ensure that DOE has a site-wide perspective on ES&H funding at the laboratory and other facilities. In commenting on a draft of this report, DOE provided additional details on its action plan and other corrective actions it has taken. See appendix I for DOE’s letter. The Secretary of Energy took full responsibility for his decision to terminate DOE’s contract with AUI as BNL’s contractor. Although the Secretary has said that he received much technical and legal advice on his decision, he stressed that he ultimately terminated AUI for its lax environmental monitoring efforts and its breach of the trust and confidence of the Long Island community surrounding BNL.
Figure 3 shows the chronology of events leading to the termination of AUI’s contract. (The figure traces events from January 1997, when elevated tritium levels were found and publicly announced, through May 1997, when the Secretary announced the termination of the contract with AUI, effective November 1997 or when a new contractor assumed responsibilities.) The Secretary became involved in discussions of AUI with his senior staff as soon as he assumed office in mid-March 1997. By this time, DOE had already shifted responsibility for remediating the tritium leak from the Chicago Operations Office and its Brookhaven Group to DOE’s Assistant Secretary for Environment, Safety and Health, and officials were discussing the future of AUI. The Secretary told us that widely publicized criticism of AUI and DOE by elected officials did not influence his decision to terminate AUI’s contract. Rather, he said he was moved by a growing frustration with AUI’s technical competence when dealing with the tritium incident and with its public-relations consequences. All of the senior DOE participants we interviewed said that while the tritium leak itself posed no serious health hazard, the public’s perception of the way AUI managed the problem undermined the community’s confidence in the laboratory. The Assistant Secretary for ES&H dispatched her Office of Oversight to examine the tritium situation in late January 1997. The results of this examination were a major influence on the Secretary’s decision to terminate AUI’s contract. The Office’s Interim Report, released on February 14, 1997, concluded that BNL “did not rigorously analyze the potential for releases” from the spent-fuel pool and “was somewhat overconfident in the control of effluent” from the reactor. Many decisions were made “within lower levels of the BNL organization,” and “senior managers were not sufficiently involved in the decision processes and may not have had all the information necessary to make good decisions about the priority of . . . monitoring [the reactor’s spent-fuel pool].” The Interim Report noted that both BNL’s internal communications and communications among BNL, the Chicago Operations Office, and the Brookhaven Group “were not as effective as they should have been.” Senior managers were not sufficiently involved in decisions and lacked necessary information, while both BNL and DOE showed “weaknesses” in their approach to such issues as management, planning, and priority setting. The Office of Oversight issued its second report on BNL in April 1997.
This report discussed the underlying causes of the tritium contamination. A major influence on the termination decision was the loss of the Long Island community’s trust in BNL. Following the Interim Report’s release, the Suffolk County Legislature held a public hearing on February 20, 1997, that further attracted press and public attention to the tritium contamination issue. The Assistant Secretary for ES&H told the hearing that, ultimately, BNL leadership was responsible for the tritium-leak problems, although DOE itself had “made mistakes.” Several Long Island residents expressed outrage at the way BNL had handled and publicized the incident. The Assistant Secretary for ES&H and the Director of the Office of Nuclear Energy, Science and Technology both told us that they were increasingly frustrated by AUI’s unresponsive dealings with the public, a complaint later emphasized by the Secretary. Even before the Energy Secretary was sworn in on March 13, 1997, senior DOE officials were raising the possibility that AUI’s contract might be terminated as a result of the tritium leak and its consequences. From late January 1997 on, the principal senior staff associated with the termination decision—the Assistant Secretary for Environment, Safety and Health; the Director of the Office of Energy Research; and the Director of the Office of Nuclear Energy, Science and Technology—had all concluded that AUI’s leadership was unable to deal effectively with the complaints and demands for decisive action from the local community. The DOE General Counsel’s Office prepared a 10-page “options paper” during April, although no signatures or dates appear on the copy provided to us. This memorandum, which DOE officials say fairly reflects the topics discussed by the Secretary and his senior staff, posed three general actions with several variations. The three main options were to (1) recompete the contract before its 1999 expiration date; (2) terminate the contract wholly or partially and select a new contractor; and (3) leave AUI in place but aggressively oversee its management. According to the Secretary’s senior advisors, DOE had the choice between terminating the contract for “cause” or for “convenience” and decided on the latter to avoid a possible legal challenge by AUI over performance criteria. Until fiscal year 1996, AUI’s annual performance appraisals had consistently reflected high ratings for its management of BNL, and its standards and conduct of environment, safety, and health matters, although rated lower, were “Good” or “Excellent.” And as late as April 1997, DOE had concluded that although “continued attention is needed,” current “DOE and BNL approaches to tritium contamination source resolution and remediation have been aggressive and appropriate.” But on Thursday, April 24, 1997, the Secretary held a final meeting with his senior staff to discuss their options for dealing with the AUI contract. They considered termination and its possible timing, noting that by postponing the actual termination for 6 months, DOE could avoid paying BNL employees severance pay. In commenting on a draft of this report, DOE said that by giving less than 6 months’ notice, DOE might be obligated to pay BNL employees severance pay even in the almost certain event that they experienced no break in their employment at BNL when a new contract was awarded. The group reached no conclusion, and a day or two later, the Secretary decided on his own to terminate the contract.
On Thursday, May 1, 1997, the Secretary arrived at BNL and met with senior scientists, telling them about his decision to terminate AUI’s contract and assuring them that he was not dissatisfied with their work but with the management of the laboratory. The Secretary said he based the decision on internal oversight reports and the unacceptable disintegration of the public’s trust in the laboratory’s management. Announcing his decision that day, he said, “I am sending a message to Long Island—and to our facilities nationwide—that I will take appropriate action to rebuild trust and to make environment, safety and health a priority.” On May 16, 1997, DOE informed AUI that it would invoke an “override” provision of their contract and rate BNL’s performance for fiscal year 1996 as “marginal” for operations. The Brookhaven Group’s manager, who is the Contract Officer, attributed the lower rating to “significant events” that caused him to “look beyond mere mechanical application” of the annual rating procedure. Specific complaints included BNL’s failure to “establish clear environmental, safety and health priorities . . .” and “honor commitment to install groundwater monitoring wells around the High Flux Beam Reactor . . . within agreed-to time . . . .” Commenting on the termination, an AUI official stated: “The Department’s approval of the interim management team three days prior to its precipitous termination action led me to conclude that our corrective actions were appropriate and effective and that we were making substantial progress in improving Safety Management and the relationships with the community.” Brookhaven officials consistently assigned low priority to the possibility of tritium contamination, despite public concern that the laboratory’s operations might pollute Long Island’s sole-source aquifer. BNL officials also gave inadequate attention to honoring local environmental regulations. DOE’s resident oversight office, the Brookhaven Group, had direct responsibility for the laboratory’s ES&H performance but failed to hold BNL officials accountable for meeting all regulatory commitments. Senior DOE leadership also failed by not creating an effective management and accountability system that would ensure that all offices of DOE and its contractors met their ES&H responsibilities. DOE’s planned actions for correcting oversight and management problems at BNL are promising steps that address many of the laboratory’s current conditions. One of the most important planned actions is to clarify roles and responsibilities of all the organizations with accountability over BNL—especially the Office of Energy Research, the site’s “landlord.” Our concern is that the role and responsibility weaknesses raised by DOE and summarized in this report reflect fundamental problems that have long characterized the Department’s administration of all its national laboratories, not just BNL. Despite many calls for improvement by internal and external groups, DOE leadership has so far been unable to develop an effective structure that can hold its laboratory contractors accountable for meeting all important departmental goals and objectives. One hope for clarifying DOE’s roles and responsibilities may be found in the Government Performance and Results Act of 1993 (Results Act), which offers DOE the opportunity to raise these issues to a strategic level.
DOE’s September 1997 Strategic Plan proposes success measures to “clarify ES&H roles and responsibilities” and to “annually monitor and report on ES&H expenditures and improve related internal controls.” DOE’s Strategic Plan is an integral part of the activities required to support the Results Act. GAO has been evaluating agencies’ strategic plans and has been working with the Congress to help ensure that plans meet the Results Act requirements. We provided a draft of this report to DOE and Associated Universities, Inc., for review and comment. DOE generally agreed with our summary of the events surrounding the tritium leak. DOE also commented that we accurately stated that a major reason for the termination of Associated Universities’ contract was the Long Island community’s loss of confidence in Associated Universities. However, DOE said that we failed to discuss the other factors that contributed to the loss of public confidence in relation to the Secretary’s decision to terminate the contract. DOE cites, for example, that past groundwater contamination by the laboratory was already a substantial environmental and community relations issue and that our report should have acknowledged this as a factor in the senior managers’ recommendations to the Secretary on the issue of terminating the contract. We believe that our report adequately reflects that the community’s concerns about the laboratory’s past environmental contamination were raised in the community’s conversations with the Secretary. Specifically, our report states that the Secretary ultimately terminated Associated Universities for its lax environmental monitoring efforts and its breach of the trust and confidence of the Long Island community. Also, as suggested by DOE, we clarified our report by including references to DOE’s final Office of Oversight report. DOE also described in more detail specific corrective actions it took after identifying its tritium leak and the broader steps it intends to take to improve management and oversight. Furthermore, DOE provided more details on its action plan, which was developed to address problems at both BNL and DOE. We added language in the report directing the reader’s attention to these discussions. Associated Universities generally agreed with our summary of the events surrounding the tritium leak. Associated Universities also pointed out that from February 1997 until the time of the Secretary’s decision and beyond, DOE senior managers were responsible for the decisions made at BNL, not the BNL staff or Associated Universities. We made changes in the report to reflect this point. Associated Universities further stated its belief that, in matters affecting Associated Universities, the Secretary was poorly advised by his senior managers and that attempts to reach the Secretary to discuss his decision to terminate Associated Universities’ contract were unsuccessful. Associated Universities took exception to the draft report’s statement that BNL officials gave inadequate attention to honoring local environmental regulations. We did not intend to imply that Associated Universities failed to honor all local environmental regulations. However, as our report discusses, BNL and DOE staff agreed with Suffolk County to install monitoring wells but delayed their installation in favor of higher priority projects. Senior laboratory and DOE officials told us they were unaware of the agreement made by their staff to install these wells and the wells were not funded until much later. 
Both the laboratory and DOE were involved in several of the discussions about the decision to install monitoring wells, and we believe both must share the responsibility. Associated Universities also provided clarifying and technical comments, which we have incorporated as appropriate. Appendixes I and II include the full text of DOE’s and Associated Universities’ respective comments and our response. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will send copies to the Secretary of Energy, the Director of the Brookhaven National Laboratory, and the Director, Office of Management and Budget. We will make copies available to other interested parties on request. Our review was performed from June through October 1997 in accordance with generally accepted government auditing standards. See appendix III for a description of our scope and methodology. If you or your staff have any questions about this report, please call me at (202) 512-3841. Major contributors to this report are listed in appendix IV. The following are GAO’s comments on the Department of Energy’s letter dated October 30, 1997. 1. We believe our report accurately reflects the reasons for the Secretary’s decisions. Our report discusses the community’s concerns about the laboratory’s past environmental contamination and points out that these concerns were raised in the community’s conversations with the Secretary. Specifically, our report states that the Secretary ultimately terminated Associated Universities for its lax environmental monitoring efforts and its breach of the trust and confidence of the Long Island community. 2. We have made changes to the report as appropriate in response to DOE’s comments. 3. We believe our wording accurately reflects the conditions discussed. DOE’s own investigation of the tritium leak sharply criticized the management structure and the associated unclear accountability throughout the Department’s chain of command. 4. The source of this statement is the transcript for the public hearing held by the Suffolk County Legislature on February 20, 1997, pp. 58-59. 5. The source of this comment is the Integrated Safety Management Evaluation of the Brookhaven National Laboratory, Office of Oversight, Office of Environment, Safety and Health, U.S. Dept. of Energy (Apr. 1997); “Summary Assessment” of the “Status of Actions to Remediate the HFBR Tritium Plume,” p. 13. 6. While we appreciate the reasons behind the termination of this particular contract, weaknesses in DOE’s management structure persist. Terminating a contract, while “sending a signal” that “contractors will be held accountable,” does not correct the Department’s unclear management structure. The following are GAO’s comments on the Associated Universities letter dated October 27, 1997. 1. We have made changes to the report, as appropriate, in response to AUI’s comments. 2. We did not intend to imply that Associated Universities failed to honor all local environmental regulations. However, as our report discusses, BNL and DOE staff agreed with Suffolk County to install monitoring wells but delayed their installation in favor of higher priority projects. 3. We believe our wording accurately reflects the events discussed. We did not evaluate the laboratory’s compliance with respect to other underground tanks. 4. We believe our wording accurately reflects the events discussed.
EPA officials have advised us that while the tritium contamination poses little or no threat today, its long-term consequences are not certain. 5. We believe our wording accurately reflects the events discussed. BNL’s January 20, 1989, memorandum rejecting the county’s position does not indicate DOE’s involvement. 6. We believe our wording accurately reflects the events discussed. The “broad agreement” mentioned by AUI was made in 1987. The paragraph in our report describes events that occurred in 1994. 7. As we stated in our report, the “Excellent” rating mentioned by DOE prior to February 1997 referred to AUI’s self-assessment. To identify the events and decisions leading up to the discovery of the tritium leak at Brookhaven National Laboratory (BNL) and the causes of these events, we began our work by reviewing three major studies completed by the Department of Energy (DOE) and BNL. These included the DOE Office of Oversight’s February 1997 interim report on the tritium recovery efforts at the laboratory, the Office’s April 1997 final report on BNL, and the laboratory’s April 1997 report on environment, safety, and health decision-making. To improve our understanding of the matters discussed in these reports, we (1) interviewed the authors and staff of each study, (2) obtained and reviewed documents and studies discussed in the reports, and (3) discussed the results of the studies with officials from the numerous organizations involved in the tritium situation. For example, within DOE we interviewed Office of Environment, Safety and Health officials who had evaluated the tritium recovery effort and safety management processes at the laboratory; the Chicago Operations Office manager and staff who were responsible for overseeing activities of DOE’s local Brookhaven office (the Brookhaven Group) during the early 1990s; and officials of DOE’s Brookhaven Group who administered DOE’s contract with AUI and who reviewed the laboratory’s reactor, ES&H, and groundwater monitoring programs. At Associated Universities, Inc. (AUI), we interviewed the president, the former and the current laboratory director, and the vice president responsible for ES&H activities. We supplemented the information obtained during these meetings by interviewing the BNL associate director and staff responsible for operating the High Flux Beam Reactor and its spent-fuel pool and for implementing groundwater monitoring and other ES&H programs at the site. We also interviewed officials from other organizations that regulate aspects of the laboratory’s environmental efforts or its compliance with local environmental laws. These included officials from the Region II office of the U.S. Environmental Protection Agency, the Suffolk County Department of Health Services, and the state of New York’s Office of the Attorney General. To determine the reasons used by DOE to terminate its contract with AUI, we reviewed the Department’s press release and the public statements made by DOE’s Secretary and other officials concerning the termination decision. We then interviewed the Secretary of Energy to obtain his perspective on the decision and the options that he considered to improve the laboratory’s performance. We also interviewed DOE’s Assistant Secretary for ES&H, the Director of the Office of Energy Research, and the Director of the Office of Nuclear Energy, Science and Technology. These were the senior departmental managers responsible for laboratory activities.
We also interviewed the Department’s Deputy Assistant Secretary for Procurement and Assistance Administration and DOE’s manager of the Brookhaven Group to determine the information that these officials provided to the Secretary concerning AUI’s performance and the options available to address the tritium situation. We supplemented this information by reviewing DOE’s evaluations of AUI’s performance prepared for fiscal years 1991 through 1996 and a DOE memorandum that summarized the options presented to the Secretary for dealing with AUI. Throughout our work, we verified the accuracy of key information by obtaining supporting documentation and by questioning apparent inconsistencies or gaps in the information presented. However, as agreed with the Committee’s staff, we did not use investigative techniques or authorities to verify that officials we interviewed provided us with all documents relevant to the tritium leak and the termination of the AUI contract.

Gary Boss, Project Director
Michael E. Gilbert, Project Manager
Robert P. Lilly, Deputy Project Manager
William Lanouette, Senior Evaluator
Duane Fitzgerald, Technical Advisor
Jackie Goff, Senior Attorney
Pursuant to a congressional request, GAO reviewed the events surrounding the leak of the radioactive element tritium from a research reactor at the Brookhaven National Laboratory and the resulting termination of Associated Universities, Inc., as the laboratory's contractor. GAO noted that: (1) because Brookhaven employees did not aggressively monitor its reactor's spent-fuel pool for leaks, years passed before tritium contamination was discovered in the aquifer near the spent-fuel pool; (2) reliance on incomplete tests of the water level in the spent-fuel pool and on sample data from monitoring wells scattered about the site led Brookhaven and Department of Energy (DOE) officials to give low priority to a potential tritium leak; (3) even after laboratory and DOE staff agreed with Suffolk County regulatory officials to install monitoring wells near the reactor in 1994, Brookhaven officials postponed their installation in favor of environmental, safety, and health activities they considered more important; (4) once the wells were installed and the high levels of tritium were discovered, the laboratory reported that the spent-fuel pool could have been leaking for as long as 12 years; (5) although the tritium poses little threat to the public, the delay in installing the monitoring wells raised serious concerns in the Long Island community about: (a) the laboratory's ability to take seriously its responsibilities for environment and for human health and safety; and (b) DOE's competence as an overseer of the laboratory's activities; (6) the responsibility for failing to discover Brookhaven's tritium leak has been acknowledged by laboratory managers, and DOE admits it failed to properly oversee the laboratory's operations; (7) DOE's on-site oversight office, the Brookhaven Group, was directly responsible for Brookhaven's performance, but it failed to hold the laboratory accountable for meeting all of its regulatory commitments, especially its agreement to install monitoring wells; (8) senior DOE leadership also shares responsibility because they failed to put in place an effective system that encourages all parts of DOE to work together to ensure that contractors meet their responsibilities for environmental, safety and health issues; (9) DOE's latest strategic plan, submitted in support of the Government Performance and Results Act of 1993, offers an opportunity to focus attention on the need to address DOE's management structure and accountability problems from a strategic perspective; and (10) the Secretary of Energy's decision to terminate Associated Universities' 50 years as the laboratory's contractor was based, according to DOE's official statements, on the laboratory's loss of the public's trust and DOE's own investigation, which concluded that the laboratory had not kept pace with contemporary expectations for the protection of the environment and human health and safety.
The Forest Service, a component of the USDA, is responsible for maintaining the health, diversity, and productivity of the nation’s forests and grasslands to meet the needs of present and future generations. This mission is carried out through several programs, the largest being the National Forest System. Through the National Forest System, the Forest Service manages about 192 million acres, comprising about 8.5 percent of the total surface area of the United States. On these lands, the Forest Service, among other things, supports recreation, sells timber, provides rangeland for grazing, and maintains and protects watersheds, wilderness, fish, and wildlife. In addition, the Forest Service provides financial and program support for state and private forests and undertakes research activities. The Forest Service, headed by a chief, conducts its activities through 9 regional offices, 6 research offices, 1 state and private forestry area office, the Forest Products Laboratory, and the International Institute of Tropical Forestry. In addition, the National Forest System has 155 national forest offices and more than 600 ranger district offices. The Chief of the Forest Service manages from the national office, headquartered in Washington, D.C., and provides national-level policy and direction to the field offices. The Forest Service has approximately 30,000 employees and a budget of over $5 billion to carry out its mission. The Forest Service Budget and Finance Deputy Chief/CFO is responsible for the financial accountability of funds appropriated by the Congress for Forest Service programs and reports to the Forest Service Chief. The Chief Financial Officers Act of 1990 calls for CFO Act agencies, such as USDA, to have financial management systems, including internal control, that provide complete, reliable, consistent, and timely information. The Government Management Reform Act of 1994 (GMRA) requires the CFO Act agencies to prepare annual financial statements and have them audited. The Federal Financial Management Improvement Act of 1996 (FFMIA) builds on the foundation laid by these acts by emphasizing the need for agencies to have systems that routinely generate timely, accurate, and useful information. Specifically, FFMIA requires that the auditor report on whether the agencies’ financial management systems substantially comply with (1) federal financial management systems requirements, (2) applicable federal accounting standards, and (3) the U.S. Government Standard General Ledger (SGL) at the transaction level. As authorized by GMRA, the Office of Management and Budget (OMB) is responsible for identifying components of the designated CFO Act agencies that are required to have audited financial statements. OMB requires that the Forest Service, a major component of USDA, have audited financial statements. Since its first financial statement audit for the fiscal year ended September 30, 1991, the Forest Service has faced numerous serious accounting and financial reporting weaknesses that have prevented it from receiving a positive audit opinion. These are shown in table 1. In the past, we have reported and testified that the Forest Service’s (1) unreliable financial data hampers the agency’s and the Congress’ decision-making ability, (2) lack of accountability exposes the agency to mismanagement and misuse of its assets, and (3) autonomous field structure hampers efforts to achieve financial accountability.
In January 1999, due to the longstanding serious accounting and financial reporting problems, we designated Forest Service financial management as a high-risk area. We continued to designate financial management at the Forest Service as high risk in our 2003 report. Since 1997, the USDA Office of Inspector General (IG) and independent auditors have continued to report instances of noncompliance with certain federal financial accounting and information system requirements and internal control weaknesses related to Forest Service financial computer systems. The Forest Service, a component of USDA, uses and depends on many financial management systems and services provided by USDA, including the USDA National Finance Center (NFC). Therefore, efforts to improve controls over certain financial management computer systems and internal controls over accounting processes must be made in cooperation with USDA and NFC. For example, the Forest Service uses the USDA Foundation Financial Information System (FFIS) as its standard accounting system. In addition, NFC maintains and controls entry of many Forest Service transactions into FFIS. NFC also reports expenditures and collections it processes on the Forest Service’s behalf to Treasury. FFIS also depends on and receives data from feeder systems used by the Forest Service to record its transactions. Many of the Forest Service’s longstanding problems with regard to its accounting and information systems result from the outdated technology of the financial feeder systems that transfer accounting data to FFIS. To address each of our objectives, we analyzed prior IG, consultant, and independent auditor reports, including the audit report on the Forest Service’s fiscal year 2002 financial statements, which described several financial management weaknesses and their effect on the Forest Service’s ability to properly account for assets worth billions of dollars entrusted to its care. Further, we examined the Forest Service’s financial management policies, procedures, and processes, including completed, ongoing, and planned activities and related implementation schedules, to determine the Forest Service’s progress, plans, and milestones for addressing financial management problems. We attended a Forest Service Budget and Finance planning conference and a financial statement training session conducted by the USDA CFO to gain a further understanding of Forest Service efforts to improve its financial statement compilation processes and overcome other financial management challenges. We analyzed reported financial management problems against the corrective actions taken to determine the remaining challenges. Further, we discussed the remaining challenges and the status of improvement efforts with officials from USDA and the Forest Service Office of the Chief Financial Officer, the USDA IG, and independent contractors working for the Forest Service. We also visited and interviewed financial management staff at five Forest Service field locations. We visited the Intermountain Regional Office, the largest of the National Forest regions, because it processes a wide variety of financial accounting transactions. We also visited the Southern Regional Office, National Forest of North Carolina Supervisor’s Office, Mt. Pisgah District Ranger Office, and North Carolina Research Station, each representing a different level of the financial management field organization.
At each location, we interviewed staff and performed walk-throughs to obtain an understanding of accounting processes and procedures for certain accounts material to the financial statements, such as accounts receivable; property, plant, and equipment; other liabilities; and certain collections/revenues, such as timber sales. We performed our fieldwork from July 2002 through March 2003 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the Chief of the Forest Service or his designee. The Chief of the Forest Service provided us with written comments, which are discussed in the “Agency Comments and Our Evaluation” section and reprinted in appendix I.

The Forest Service has made significant progress toward achieving financial accountability. For the first time since its initial financial statement audit, which covered fiscal year 1991, the Forest Service received an unqualified or “clean” opinion on its fiscal year 2002 financial statements. To achieve this milestone, the Forest Service’s top management dedicated considerable resources and focused staff efforts to address accounting and reporting deficiencies that had prevented a favorable opinion in the past. Historically, the Forest Service’s financial management systems have not generated timely and accurate financial statements for its annual audit. In addition, the Forest Service has had long-standing material weaknesses with regard to its two major assets--fund balance with Treasury and property, plant, and equipment. In the past, such weaknesses prevented the IG from validating these two line items on both the Forest Service and the USDA departmentwide financial statements. In fiscal year 2002, the Forest Service reorganized the Budget and Finance Deputy Chief/CFO area and focused staff efforts to address reporting and accounting deficiencies identified in the fiscal year 2001 financial statement audit, with the goal that the fiscal year 2002 financial statements would pass audit tests. To assist in these efforts, the Forest Service hired senior financial management officials, consultants, and contractors and formed a financial reports team and several reconciliation “strike” teams to improve (1) the financial statement compilation process and (2) reconciliations of its major accounts, including fund balance with Treasury and property, plant, and equipment. During fiscal year 2002, the financial reports team completed a number of efforts to improve the compilation process. For example, the team held a series of financial statement workshops for national office and field staff, updated the methodology for preparing the fiscal year 2002 financial statements, and provided the necessary information to complete the audit, such as account analyses and supporting documentation for sample transactions selected for testing. Six reconciliation strike teams, consisting of contractors with expertise in reconciliation procedures and experienced Forest Service staff, performed financial statement account reconciliations and reviews to help ensure the accuracy and timeliness of recorded accounting data and to ensure that subsidiary ledgers were reconciled to general ledger accounts. The strike teams analyzed account data, identifying accounting errors and documenting adjustments to key asset, liability, and budgetary accounts in order to achieve accurate account balances.
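The substance of such a reconciliation is simple to illustrate. The following Python sketch is purely illustrative: the account numbers and amounts are hypothetical and are not drawn from FFIS or any Forest Service feeder system. It shows the kind of check a strike team performs, totaling the subsidiary (feeder-system) detail for each account, comparing the total with the general ledger balance, and flagging any difference as a reconciling item to research and, if warranted, adjust.

```python
import pandas as pd

# Hypothetical general ledger balances and subsidiary (feeder-system) detail
# for the same accounts; account numbers and amounts are invented for illustration.
general_ledger = pd.DataFrame({
    "account": ["1750", "1010"],
    "gl_balance": [1_550_000.00, 9_800_000.00],
})
subsidiary_detail = pd.DataFrame({
    "account": ["1750", "1750", "1750", "1010"],
    "amount": [1_150_000.00, 350_000.00, 25_000.00, 9_800_000.00],
})

# Total the detail records by account and compare them with the general ledger.
detail_totals = subsidiary_detail.groupby("account", as_index=False)["amount"].sum()
recon = general_ledger.merge(detail_totals, on="account", how="outer").fillna(0.0)
recon["difference"] = recon["gl_balance"] - recon["amount"]

# Any nonzero difference is a reconciling item to be researched and, if it
# reflects an error, documented as an adjustment to the affected account.
print(recon[recon["difference"].abs() > 0.005])
```

In this example the detail for account 1750 totals $25,000 less than the general ledger balance, so that account would be flagged for research; account 1010 agrees and would not be flagged.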
The fund balance with Treasury team focused on reconciling material fiscal year 2002 and prior-year cash transactions. The property, plant, and equipment reconciliation team analyzed transaction data to identify inaccurate records and reconciled the general ledger to its supporting detailed records. In addition, the property, plant, and equipment strike team, in cooperation with the USDA Office of the Chief Financial Officer, the USDA IG, and consultants, worked to ensure that property documentation supported property records, inventories were complete, and property was valued correctly. Further, the property, plant, and equipment reconciliation team worked with USDA on modifications and enhancements to certain property feeder systems. For example, in September 2002, USDA completed an automated interface between the Infrastructure Real Property Subsidiary System (INFRA) and FFIS. INFRA was revised to improve security by implementing controls such as user access restriction and password protection. Also, access to key data elements in the Personal Property System (PROP) and the Equipment Management Information System (EMIS) was restricted by September 2002 in order to address security weaknesses. At the same time, certain automated error checks were added to EMIS to help ensure data integrity. While the primary focus of the reports and reconciliation teams was to help attain a clean fiscal year 2002 audit opinion, the teams have been institutionalized to work toward sustainable report compilation and reconciliation processes. Through these established account reconciliations and analyses, the teams are able to identify many of the underlying causes of inaccurate data and out-of-balance conditions. Specifically, according to the Forest Service CFO management, many of the problems are caused by improper recording of transactions, FFIS system problems, faulty interfaces with integrated feeder systems, lack of consistent formal policies and procedures, lack of staff training, and manual accounting processes prone to human error. By understanding the root causes, the Forest Service has resolved some of the problems identified. For example, the strike teams coordinated with USDA to correct several programming errors in FFIS that were causing inappropriate accounting. For instance, the fund balance with Treasury team found that fund transfers between Forest Service units for equipment usage, which are noncash transactions, were incorrectly recorded and reported to Treasury as cash collections. As a result, the Forest Service’s fund balance account at Treasury was being overstated by these amounts. During fiscal year 2002, the Forest Service CFO management also issued new policies and procedures or revised existing ones to help ensure the quality and integrity of the financial data in FFIS and the feeder systems. To communicate these changes, the Forest Service CFO issued over 25 CFO bulletins to accounting staff as needs for accounting and reporting controls were identified. For example, the CFO issued several bulletins that provided guidance on the proper recording of transactions, such as the types of transaction codes to use when entering data into FFIS. The CFO also issued bulletins (1) requiring analysis of delinquent bills to determine their collectability and (2) clarifying documentation requirements for personal and real property transactions. Further, Forest Service management continued to emphasize the importance of financial accountability to its line managers in the field.
In April 2002, the Forest Service CFO implemented a set of financial performance indicators to monitor the field staff’s progress in maintaining their accounts, including progress in clearing suspense account items, collecting receivables, and complying with CFO accounting guidance. Achieving financial accountability involves more than obtaining a clean audit opinion by producing reliable one-time year-end numbers for financial statement purposes. The Forest Service still must overcome many challenges to sustain this outcome and to reach the end goal of routinely having timely, accurate, and useful financial information. In its December 2002 report on the Forest Service’s fiscal year 2002 financial statements, the auditor, KPMG Peat Marwick LLP (KPMG), continued to identify serious material internal control weaknesses and FFMIA noncompliance issues, primarily related to weaknesses in controls over financial management computer systems, that could adversely affect the Forest Service’s ability to record, process, summarize, and report financial data in a timely manner. The auditor attributed many of the deficiencies identified to lack of adequately trained staff; lack of manual internal control procedures, such as supervisory reviews; and poor automated controls, such as user access controls, system edits, and system interfaces, within FFIS and certain feeder systems that transfer data to FFIS. As discussed in table 2, the auditor made several recommendations to address these conditions. We support these recommendations and are not making any new recommendations in these areas. In addition, the IG, Forest Service contractors, and we have reported long-standing problems regarding the Forest Service’s financial management systems and its financial management organization. Many of the legacy feeder systems that transfer data to FFIS are based on antiquated technology and must be enhanced or replaced. The agency also faces the challenge of implementing a financial management field organization that supports effective and efficient day-to-day financial operations. Unless the Forest Service addresses these issues and moves to sustainable financial management processes, it will have to continue to undertake extraordinary, costly efforts, outside of its normal business processes, to sustain clean audit opinions. Further, management’s ability to routinely obtain reliable financial information to effectively manage operations, monitor revenue and spending levels, and make informed decisions about future funding needs will continue to be hampered. Our Standards for Internal Control in the Federal Government requires that agencies implement a strong internal control system that provides the framework for the accomplishment of management objectives, accurate financial reporting, and compliance with laws and regulations. It contains the specific internal control standards to be followed. These standards define internal controls as the policies, procedures, techniques, and mechanisms that enforce management’s directives. They help ensure that actions are taken to address risks and are an integral part of an entity’s accountability for stewardship of government resources. The lack of good internal controls puts an agency at risk of mismanagement, waste, fraud, and abuse. Further, without strong internal controls, an agency is unable to generate the consistent, reliable financial information needed to maintain ongoing accountability over its assets.
In its fiscal year 2002 audit report on the Forest Service’s financial statements, the auditor continued to report serious internal control weaknesses with regard to the Forest Service’s two major asset accounts--fund balance with Treasury and property, plant, and equipment. Also, KPMG reported material deficiencies related to certain estimated liabilities, payroll processes, general controls, and certain application software computer controls. The following table provides a brief description of each of the reported deficiencies and recommendations for improvement. Further, the auditor reported that the Forest Service’s systems did not substantially comply with the three requirements of FFMIA--federal financial management systems requirements, applicable federal accounting standards, and the U.S. Government Standard General Ledger at the transaction level. One example of noncompliance with federal financial management systems requirements was that the Forest Service did not have the required certifications and accreditations of security controls performed in a timely manner on its procurement and property systems. Further, the Forest Service did not record revenue for certain collections, such as map sales and camp site reservation fees, when they were collected, as required by federal accounting standards. Instead, collections and fees were recorded in a suspense account, and revenue was recognized when the money was used for other operational needs rather than when the revenue was actually earned. This practice could result in revenues and related costs being misstated on the Forest Service’s financial reports. Weaknesses in the Forest Service’s financial management systems continue to hamper its ability to achieve sustainable financial transaction processing and reporting. In the past, the IG and we have reported long-standing problems with the feeder systems that process and transfer financial information into FFIS. Several of the feeder systems that generate data used to support the financial statements predate FFIS and have antiquated technology. Because significant differences existed between the data in the FFIS general ledger and its supporting detail in the feeder systems, financial statements produced by FFIS could not be relied upon. For example, the Forest Service uses several feeder systems to support its multibillion-dollar property, plant, and equipment line item in its financial statements, including (1) the Infrastructure Real Property Subsidiary System (INFRA), (2) the Personal Property System (PROP), and (3) the Equipment Management Information System (EMIS). These feeder systems also rely, in some cases, on data transferred from other lower level (subsidiary) feeder systems. In prior years, material internal control weaknesses in the compilation of the property, plant, and equipment balance contributed to a disclaimer of an opinion on the Forest Service’s financial statements. In preparation for the fiscal year 2002 audit, the Forest Service engaged a consultant to perform extensive procedures to arrive at an opening (October 1, 2001) property, plant, and equipment balance using statistical sampling of property records. The existing data was examined for erroneous and duplicate records through a variety of means, including checks for mathematical accuracy and comparisons with physical records and inventories.
During this process, the consultant discovered that missing or faulty interfaces between these feeder systems and FFIS resulted in erroneous postings to the property, plant, and equipment account. Although the Forest Service has made certain improvements to its property feeder systems during fiscal year 2002, more needs to be done to improve the quality and integrity of financial data in FFIS and the feeder systems. In its fiscal year 2002 report on the Forest Service’s information technology, the auditor reported certain weaknesses in internal controls related to the feeder systems. For example, the auditor found duplicate and dropped records after data was transferred between PROP, the Purchase Order Normal Tracking and Inventory System, and the Purchase Order System. The auditor also reported that system data validation and error detection controls were ineffective in EMIS. Further, the auditor reported weaknesses related to the Automated Timber Sales Accounting System (ASTA). Specifically, there were no controls built into ASTA to prevent duplicate transactions from being recorded. As a result, field unit staff had to manually review the data to identify any transactions that were erroneously entered more than once. We visited five Forest Service field offices and interviewed financial management staff about the accounting processes and systems used, to obtain a “field” perspective on financial management problems and the status of improvement efforts. At the field offices we visited, the financial management staff told us that system issues affect their operations. For example, one field office uses the Timber Information Management (TIM) system, a front-end system used to record the initial information and produce bills for timber sales and wood product permits. Since the system does not interface with FFIS, users have to manually enter the timber sale deposits and permit sales into FFIS. The lack of an automated interface between the systems increases workload as well as the risk of input errors. Problems with the financial management systems continue to hamper the Forest Service’s ability to move to sustainable processes. Until the Forest Service resolves its systems problems, the financial statements produced by FFIS cannot be relied upon without significant manual intervention to reconcile differences between FFIS and the feeder systems. Resolving these differences consumes personnel and other resources and limits the Forest Service’s ability to have reliable financial information on an ongoing basis for day-to-day management. Among the other challenges that the Forest Service faces is establishing an efficient and effective organization to accomplish financial management activities. The highly decentralized organizational structure of the Forest Service’s financial management presents significant challenges in achieving financial accountability. Under the current organization, financial activities are performed and recorded at the Forest Service national office, nine regional forest offices, six research stations, and USDA NFC, as well as at hundreds of forest and district ranger offices where many transactions originate. The decentralized financial management organization presents a significant challenge because the Forest Service’s national office financial management team is tasked with ensuring that staff at hundreds of field locations are routinely processing accounting transactions accurately and consistently, in accordance with management directives.
Since February 1998, we have reported that the Forest Service’s autonomous and decentralized organizational structure could hinder management’s ability to achieve financial accountability. In March 1998, an independent contractor, hired by the Forest Service to assess the agency’s financial management and organization, also raised the issue of the agency’s autonomous organizational structure. The contractor reported that the Forest Service lacked a consistent structure for financial management practices and that each field unit was operating independently. In response to these concerns, the Forest Service conducted a Financial Management Field Operation Assessment (FOA), which was completed in March 2001. As part of the assessment, the FOA project team evaluated the current level of accountability for financial management and made six recommendations to strengthen lines of responsibility and accountability. Specifically, the team recommended that the Forest Service (1) ensure that appropriate delegation of authority is in place, (2) finalize performance measures for financial management, (3) appoint field directors as responsible financial accountability officers for their respective units, (4) appoint deputy chiefs in the national office as responsible financial accountability officers for their units, (5) provide training and develop core competencies, and (6) establish policies and guidelines addressing the development, implementation, and financing arrangements for shared services agreements related to financial activities. The Forest Service has taken several actions to address the FOA recommendations related to the autonomous field structure to improve accountability for financial management in the field and throughout the organization. For example, the agency restructured its national office financial management team to create functional lines of accountability for Budget and Finance management, under the leadership of the deputy CFO, who reports directly to the Chief of the Forest Service. The Forest Service also appointed field directors (regional foresters, research station directors, etc.) to serve as the responsible financial accountability officers for their units. Further, beginning in 2001, the Forest Service began to restructure its regional offices to mirror the national office’s financial management structure. Currently, six of the nine regional offices have consolidated budget and finance functions, under the direction of a financial director who is responsible for financial management activities in the region. Another regional office is in the process of restructuring its financial management organization. The two remaining regional offices have no definite plans to change their financial management structure. While this is a good first step in resolving the autonomy of the Forest Service field offices, the Forest Service has not determined how best to structure the regions and related suboffices to create an efficient and effective organization to accomplish financial management activities. At the five field offices we visited, the financial management field staff told us that, although progress is being made, more needs to be done to move to sustainable financial transaction processing and reporting in the field. For example, staff reported that they need more training on FFIS and updated policy and procedure manuals. 
They also stated that the national office needs to improve communication with the field to obtain a better understanding of field business processes and to solicit more input from the field staff in developing accounting and reporting policies and procedures. The Forest Service CFO management acknowledges that creating an effective and efficient organizational structure is critical to establishing sustainable processes and addressing many of the financial management issues and challenges that the Forest Service faces, including improving internal controls over its accounting functions, such as adequate supervisory review, and over other areas of weakness noted by the auditors; providing training programs and on-the-job training opportunities for accounting field staff; and providing adequate oversight to ensure accurate and consistent processing of accounting transactions. In 1999, we designated financial management at the Forest Service to be high risk because of serious financial and accounting weaknesses that had been identified but not corrected in the agency’s financial statements for a number of years. We continued to designate financial management at the Forest Service as high risk in our 2003 report. In order to be removed from the high-risk list, the Forest Service, at a minimum, will need to demonstrate sustained accountability over its assets on an ongoing basis. While the conditions discussed above present a major challenge to achieving financial accountability, the Forest Service has several efforts underway or planned that, if implemented, should help to resolve many of its financial management problems and to move toward sustainable financial management business processes. Such efforts are designed to address internal control and noncompliance issues identified in the fiscal year 2002 audit report as well as feeder system and organizational issues. To assist in its efforts, the Forest Service CFO management is developing a financial management strategic plan intended to provide direction for continued improvement efforts and a mechanism to monitor and evaluate performance. To be effective, this plan should be comprehensive--providing a detailed road map of the steps, resources, and time frames for achieving the end goal of sustainable financial management. To address the fiscal year 2002 internal control and FFMIA audit findings, the fund balance with Treasury reconciliation team has documented its reconciliation procedures and is working with NFC to develop a fund balance with Treasury reconciliation process to assist in the timely research and resolution of reconciling items related to fund balance with Treasury activities that NFC processes on the Forest Service’s behalf. According to Forest Service CFO management, the reconciliation process should be in place by August 2003. The property, plant, and equipment reconciliation team has started a project to update existing policies and procedures and plans to issue revised property, plant, and equipment manuals during fiscal year 2003. The property, plant, and equipment team is also continuing to analyze property data files and to reconcile data in property feeder systems to data in FFIS monthly. In January 2003, CFO management developed and implemented an automated system to track and monitor the status of issues identified by the reconciliation teams to help ensure timely resolution.
They also hired a training coordinator to develop standardized training programs and two additional staff to update all financial policy and procedure manuals. The Forest Service is also continuing to work with USDA to enhance or replace the feeder systems in an effort to resolve data transfer problems between feeder systems and FFIS. For example, it is currently exploring an option for replacing the Forest Service’s three property feeder systems with a single USDA-wide property system. A decision on the system will be made by December 2003. The Forest Service expects to begin implementing the system in fiscal year 2004. Also, the Forest Service is scheduled to pilot the Integrated Acquisition System (IAS) by fiscal year 2004. IAS is a procurement system that will replace the current purchase order system and will link to FFIS. IAS will support three major procurement processes: requisitioning, purchasing, and contracting. In addition to the efforts mentioned above, the Forest Service is evaluating options for a more efficient financial management organization. In November 2002, it formed the Financial Management Efficiency Team to assess financial management roles and responsibilities and evaluate models for an efficient financial management organization. In January 2003, the team submitted a draft proposal for financial management roles and responsibilities throughout the organization and is scheduled to submit its recommendation for a financial management organization in June 2003. According to CFO management, the team is expected to make a detailed recommendation for a consolidated accounting and fund control organization either at each regional office or within multiregional shared services centers located at selected regional offices. The Forest Service has several strategic plans that include many of the financial management improvement efforts. For example, the Forest Service prepares agencywide strategic plans and annual performance plans as required by the Government Performance and Results Act. Also, the Forest Service’s Budget and Finance Deputy Chief units prepare annual project plans. However, the agencywide strategic and performance plans are broad in scope and focus on high-level goals and objectives. The annual project plans are narrowly focused on specific short-term projects. These plans are not an adequate substitute for a comprehensive financial management implementation strategy because they do not integrate all the improvement efforts and do not include the critical elements needed to effectively manage an overall strategy that will succeed in achieving and sustaining financial accountability. Forest Service CFO management is developing a financial management strategic plan intended to provide direction for continued improvement efforts and a mechanism to monitor and evaluate performance. This plan is designed as a working tool, evolving over 3 to 5 years, which will be reviewed and updated annually. In January 2003, the plan was introduced at the Forest Service’s Budget and Finance planning conference. According to Forest Service CFO management, the initial plan will be completed by June 30, 2003. To be effective, the Forest Service’s plan should combine all the financial management improvement efforts into an overall comprehensive financial management implementation strategy. Such a strategy is a critical tool for the Forest Service, serving as a road map to help in resolving financial management problems.
An effective plan includes long-term and short-term plans with clearly defined goals and objectives and specific corrective actions, target dates, and resources necessary to implement those actions. A comprehensive plan also prioritizes projects and assigns accountability by identifying the offices and staff responsible for carrying out the corrective actions. Without such a plan, it will be difficult for the Forest Service to fix accountability for its many efforts and effectively monitor progress against its end goals. The Forest Service has demonstrated strong leadership and commitment to reach its goal of obtaining an unqualified opinion on its fiscal year 2002 financial statements. At the same time, many of the financial management improvement efforts implemented to date are outside of normal business processes and focus mainly on obtaining reliable year-end numbers for financial statement purposes. The Forest Service still must overcome several major challenges before it can move to sustainable processes that can routinely provide accurate, relevant, and timely information to support program management and accountability. The Forest Service is at a critical juncture. If the Forest Service is to achieve and sustain financial accountability, it must fundamentally improve its underlying internal controls, including financial management computer system controls, and its financial management operations. The Forest Service has various efforts underway or planned that, if successfully carried through, will be important steps toward addressing the financial management challenges it faces. However, to date, several problems identified by the IG, KPMG, and us remain. Some of the Forest Service’s problems are deep-seated and therefore will require sustained leadership and commitment of significant resources and time to resolve. The number and significance of the issues still facing the Forest Service emphasize the need for a comprehensive strategy to manage the various initiatives underway or planned. To help ensure sustained commitment and timely implementation of financial management improvement efforts, we recommend that the Chief of the Forest Service direct the Chief Financial Officer to develop a comprehensive financial management strategic plan that clearly defines long-term and short-term financial management goals; specifies corrective actions to address financial management challenges, including internal control weaknesses, FFMIA compliance deficiencies, system problems, and organization issues; includes target dates and resources necessary to implement corrective actions; identifies the responsible parties for carrying out corrective actions; and prioritizes and links the various improvement initiatives underway and planned, including USDA financial management systems enhancement efforts. In written comments on a draft of this report, the Forest Service concurred with our recommendations to develop a comprehensive financial management strategic plan that defines financial management goals, specifies corrective actions, identifies target dates and resources needed, identifies responsible parties, prioritizes and links improvement initiatives, and provides details on financial management systems enhancements. The Forest Service’s response (see appendix I) stated that preparation of a financial management strategic plan is in process. As agreed with your office, unless you publicly announce its contents earlier, we will not distribute this report for 30 days.
At that time, copies of this report will be sent to the congressional committees with jurisdiction over the Forest Service and its activities; the Secretary of Agriculture; and the Director of the Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-6906. Key contributors to this report were Alana Stanfield, Suzanne Murphy, Martin Eble, and Lisa Willett.
Since 1996, we have periodically reported on Forest Service financial management problems that we, the U.S. Department of Agriculture's (USDA) Office of the Inspector General, and other independent auditors have identified. We have designated Forest Service financial management as a high-risk area since 1999. Because of these longstanding financial management deficiencies, the House Committee on Resources' Subcommittee on Forests and Forest Health asked GAO to report on the Forest Service's progress in correcting its financial management problems and on remaining challenges and actions underway to address those challenges. The Forest Service has made significant progress toward achieving financial accountability, receiving its first "clean" or unqualified audit opinion on its financial statements for fiscal year 2002. This was attained because top management dedicated considerable resources to address accounting and reporting deficiencies. We consider this a positive step; however, sustaining this outcome and achieving financial accountability will require more than obtaining year-end numbers for financial statement purposes. The Forest Service continues to face several major challenges, many of which resulted in unfavorable audit opinions in the past. Specifically, the Forest Service's fiscal year 2002 financial statement audit report disclosed material internal control weaknesses related to its two major asset accounts--fund balance with the U.S. Department of the Treasury, and property, plant, and equipment--as well as for certain estimated liabilities, payroll processes, computer security controls, and software application controls related to its procurement and property systems. Further, the Forest Service has not addressed the challenges of replacing or enhancing legacy feeder systems and implementing a financial management field operation that supports efficient and effective day-to-day financial operations and routinely produces reliable and timely financial information. The Forest Service has corrective actions underway or planned that are intended to resolve these problems, including a financial management strategic plan. If this plan is to serve as a "road map" toward financial accountability, the Forest Service needs to ensure that its plan is comprehensive, integrating and prioritizing the various corrective action initiatives underway and planned.
Juvenile justice is primarily the domain of state and local authorities. Thus, juvenile courts’ jurisdiction and procedures can vary widely throughout the United States. For instance, depending upon the state and the alleged offense, the juvenile courts’ jurisdiction may end at age 18, 17, 16, or even younger. Referrals of youth to juvenile justice authorities can come from various sources, including police officers, parents, schools, and social service agencies. Police officers account for 41 percent of the referrals, according to 1989 data from the Department of Justice’s Office of Juvenile Justice and Delinquency Prevention (OJJDP). Generally, after an alleged status offender is referred to juvenile authorities, screening or intake staff (e.g., a juvenile probation officer) decide whether the case should be handled formally or informally. Juveniles can be temporarily placed in detention centers at some point between referral and case disposition by the court. If the intake decision is to proceed formally, a petition is drafted and filed to provide notice of the offenses that will be pursued. The petition charges the youth with a status-offense violation and identifies the youth and those other persons who should be informed of the proceedings. These proceedings include an adjudication hearing and possibly a disposition hearing. At the adjudication hearing, the juvenile court judge reviews evidence and determines if the youth has committed a status offense. At a concurrent or subsequent disposition hearing, the judge determines an appropriate action or treatment plan for the status offender. The juvenile court judge’s disposition options include dismissal of the case, probation, fine or restitution, community service, and placement. Placement refers to any “out-of-home” disposition, which usually takes place in residential facilities. These facilities provide 24-hour care to juveniles. The following are types of residential facilities:

Detention centers: secure residential facilities.

Group homes: nonsecure facilities that are intended to provide a residential environment in which to meet the long-term counseling needs of troubled youth.

Shelters: nonsecure facilities that are intended to provide overnight or short-term housing and crisis intervention counseling to troubled youth.

Figure 1 shows the number of petitioned juvenile cases processed by the juvenile courts in 1991, according to OJJDP data. Cases handled informally usually do not involve a petition or an adjudication hearing. These informal (nonpetitioned) cases may be dismissed; possible reasons for dismissal include lack of evidence or the youth’s receiving a warning or counseling. Even when cases are handled informally, juveniles can be given probation or even placed. As shown in figure 1, 2 percent (or 500) of all petitioned nonadjudicated cases in 1991 resulted in juveniles’ being placed. According to OJJDP, in many jurisdictions, most status-offense cases are handled informally. In many communities, county attorneys, family crisis units, or social service agencies—rather than the juvenile courts—have assumed responsibility for screening and diverting alleged status offenders from the juvenile justice system. Even though juvenile justice is primarily the responsibility of state and local authorities, Congress has taken an increased interest in juvenile justice issues during the past two decades. Most significantly, the Juvenile Justice and Delinquency Prevention Act of 1974, as amended (42 U.S.C.
5601 et seq.), established a formula grant program for states to improve their juvenile justice systems. States receive formula grant funds if they comply with certain requirements. One of these requirements was that status offenders should not be held in secure detention facilities, such as jails, police lockups, juvenile detention centers, or training schools. In 1980, Congress amended the law to allow states to detain status offenders under certain conditions and still receive their grant funds. According to OJJDP regulations, these status offenders must be provided certain procedural protections. Some child advocacy groups have raised concerns about the lack of appropriate placement services for females in the juvenile justice system. For example, in September 1992, the National Network for Runaway and Homeless Youth Services advocated reviewing gender bias within the states’ juvenile justice systems. In addition, some studies have indicated that females were more likely to be detained for status offenses than males. To aid us in defining gender bias and in designing models or approaches to address the objectives, we reviewed relevant literature identified in bibliographies provided to us by the National Center for Juvenile Justice (NCJJ) and OJJDP. Regarding the first objective, we used NCJJ’s national estimates of status-offender data for calendar years 1986 through 1991 to develop gender-specific probabilities of detentions, adjudications, and placements for status offenders by offense category. However, these data did not contain sufficient information relevant to judges’ decisions (e.g., prior offense history and source of referral for the offense) for us to assess gender bias. To examine gender bias, we did further analysis using data from several states that had additional variables beyond those used for NCJJ’s national estimates. We developed six models to study the outcome of intake decisions in six states and 19 models to further study detentions, adjudications, and placements in seven states. We used a class of models commonly used in criminological research to analyze these types of outcomes. We used NCJJ’s state-specific data files to conduct regression analyses for seven states—Arizona, California, Florida, Missouri, Nebraska, South Carolina, and Utah. Data limitations precluded us from developing models for status offender intake decisions in Nebraska, placements in Arizona, and detentions in Utah. Further, we could not address possible gender bias elsewhere in the juvenile justice system because the necessary data did not exist. For example, the data did not include youths who were handled informally—picked up, counseled, and/or released by the police or by county juvenile department intake officials. To compare the availability of facilities and services, we visited a total of 15 facilities located in 9 counties—generally 2 counties (a rural county and an urban county) within each of 4 selected states (Florida, Kentucky, Maryland, and Texas). We mailed a survey to a national sample of county probation department officials to obtain (1) opinions on differences in the juvenile justice systems’ processing of status offenders and (2) perspectives on the availability of facilities and services for status offenders. By using a national sample, we were able to project the results to our study population of 1,249 chief juvenile justice probation officers. Appendix I presents more details about our objectives, scope, and methodology, including a discussion of how we selected states for analysis with respect to our second objective.
Appendix V contains a copy of the survey and the survey’s results. We did our work from March 1993 through August 1994 in accordance with generally accepted government auditing standards. Since no federal agency has responsibility for the issues discussed in this report, we did not obtain official comments on a draft of this report. However, we did discuss our results with NCJJ and OJJDP officials and, where appropriate, incorporated their comments. Our analyses of 6 years of national data indicated that there were only relatively small differences in the percentages of female and male status offenders detained, adjudicated, and placed. With six exceptions, our logistic regression analyses of intake decisions, detentions, adjudications, and placements in seven states generally did not indicate any significant gender-based differences in the processing of female and male status offenders. In addition, our national survey of county probation officers and site visits did not identify any specific gender differences in juvenile justice systems. According to NCJJ national data, a total of 500,620 status-offender cases were petitioned to juvenile courts during calendar years 1986 through 1991. Of the total petitioned status-offender cases, 41.3 percent (206,756 cases) involved females and 58.7 percent (293,864 cases) involved males. In terms of gender distinctions, two specific offense categories had noticeable differences in the numbers: females were involved in 61.9 percent of the running away offenses and males were involved in 74.3 percent of the liquor offenses. Table 1 shows that petitioned female status offenders had about the same probability, or percent chance, as petitioned male status offenders of being detained, adjudicated, or placed out-of-home during 1986 through 1991, for 60 percent of the outcomes. For example, the probabilities for female and male truants who were detained, adjudicated, or placed were within 2 percentage points of each other. The exceptions were in the offense categories of running away and liquor violations. For the offense categories of liquor violation, running away, truancy, and ungovernability, our data analysis showed that the probabilities of either female or male status offenders’ being detained before disposition by the juvenile courts had declined from calendar years 1986 to 1991. For example, the probability of ungovernable female status offenders’ being detained decreased from about 19 percent to 8 percent, and the decrease for males was from 19 percent to 9 percent (see app. II, table II.3). Regarding running away, our analyses showed that males had higher probabilities than females of being detained, adjudicated, or placed. Further, males with liquor offenses had higher probabilities of being adjudicated or placed than females. Appendix II provides more detailed analyses of NCJJ national data. The national estimates did not enable us to determine whether gender bias occurred in the outcomes because these data did not contain variables that are likely to be relevant to judges’ decisions (e.g., prior offense histories). Accordingly, we developed statistical models to measure gender bias using data sets that contained appropriate variables. To analyze gender bias, we developed logistic regression models of the intake decisions for six of the seven states and the detention, adjudication, and placement decisions for seven states. Overall, the 25 models involved applications of the logistic regression procedure. 
That is, each state’s models contained variables that measured characteristics that may be associated with the juvenile judicial system outcomes and estimated how the characteristics influenced outcomes. These characteristics included the source of referral to the juvenile court, location (e.g., metropolitan or rural area) of the court, age and race of the offender, type of offense, and offender’s prior offense history. We used these models to test for gender bias. For the intake decisions, we analyzed all cases referred to the intake staff; for the detention and adjudication decisions, we analyzed only petitioned cases; for the placement decisions, we analyzed cases of adjudicated status offenders. Table 2 shows the “gender-bias quotients,” which were the resulting estimates of gender bias from the models that we developed. As the gender-bias quotient approaches 1.0, the amount of estimated gender bias decreases. No specific criteria exist as to the extent that the quotient would have to deviate from 1.0 to indicate gender bias. In our judgment, however, a deviation from 1.0 of more than .2 would indicate the presence of gender bias. Our results indicated that (1) in 5 of the 6 intake models, females were about as likely as males to be petitioned to juvenile court and (2) in 14 of the other 19 models, no gender bias was demonstrated in the juvenile justice systems’ outcomes for status offenders. In the Florida intake model, females were more likely to be petitioned to juvenile court than males because the juvenile justice system treated females’ characteristics, e.g., type of offense, differently due to their gender. In the other five models, we found some indication of gender bias in Arizona’s, Florida’s, and Nebraska’s detention decisions and Florida’s adjudication and placement decisions. These models indicated that females were less likely to be detained, adjudicated, or placed than males because the juvenile justice system treated females’ characteristics, e.g., referrals by the police, differently due to their gender. Our conclusions about gender bias are limited to aspects of the juvenile justice process for which we had data. See appendix III for a detailed explanation of the models and the methodology. In measuring gender bias, we combined the effects of the individual variables to estimate the overall probabilities of intake decisions, detention, adjudication, and placement. By combining these effects to estimate gender bias, some variables may have had offsetting effects, regardless of whether the models showed gender bias. For example, in the Missouri intake results, which did not indicate gender bias, law enforcement and school referrals for females lowered their probability of being petitioned, but urban courts increased the probability of being petitioned. These offsetting situations occurred relatively infrequently. Our analysis showed that certain factors, such as offenders’ prior offense history and source of referral, affected the status offenders’ outcomes. For example, as would be expected, offenders’ prior offense history generally affected their detention outcomes. As the number of prior offenses increased, so did the probability that the status offenders would be detained regardless of whether they were females or males. See appendix III for a discussion of the influence of such characteristics on the intake, detention, adjudication, and placement outcomes. 
Table 3 shows that most of the probation officers who responded to our survey did not perceive any differences in the way females and males with similar status-offense histories were processed. More specifically, of the responding probation officers, we estimated that 71.6 percent did not report any differences in the referral/arrest process, 79.1 percent did not report any differences in the intake process, and 70.5 percent did not report any differences in either treatment by the court or the length and type of disposition. Regarding the detention process, 50.1 percent of the chief probation officers did not report any gender differences. However, another 41.8 percent of the chief probation officers reported “no basis” for answering this part of the question, thought the question not applicable, and/or did not answer the question. Generally, both our national survey respondents and the juvenile justice officials and facility representatives we interviewed in four states told us there were not any significant differences in the facilities and services available to female and male status offenders. However, both groups emphasized that they believed that more services were needed for status offenders, irrespective of gender. As table 4 shows, 44.4 percent of the chief probation officers who responded to our survey said that treatment options (facilities and services) were about equally available to detained female and male status offenders. However, more than one-third of the respondents—37.8 percent—reported “no basis” for answering this question, thought the question inapplicable, or did not answer the question. Therefore, about 70 percent of those officials who answered the question said that the services and facilities were about equal for detained female and male status offenders. When the respondents reported a difference, the difference was generally related to the perception that there were more facilities available for males than females. Almost 16 percent of the respondents reported that facilities and services were either “somewhat more” or “much more” available for males than for females. In contrast, only 2 percent of the respondents said that facilities and services were either “somewhat more” or “much more” available for females than males. Further, many respondents indicated that the availability of facilities and services for status offenders perhaps would be more accurately described as being equally unavailable for females and males. For example, some respondents said that female and male status offenders had no treatment programs or facilities due to limited funding and resources. In addition, other respondents said that the existing services were inadequate to meet the needs of both genders. Four other respondents to our survey indicated that, even within an overall environment of limited resources for both genders, female status offenders had fewer services than males. As table 5 shows, we visited a total of 15 facilities—10 co-educational facilities and 5 serving only females or only males. Except for some health services not applicable to males (such as prenatal care), we generally did not find gender-based distinctions in the availability of counseling, educational, and medical services for females and males at each of the 10 co-educational facilities we visited. Officials at the other five facilities said that their programs were not gender-based and could be provided to either females or males. 
OJJDP officials pointed out that providing similar services for both females and males may be equitable but may not result in meeting the specific needs of one gender. At the 10 co-educational facilities (4 secure detention centers and 6 shelters), we did not find gender-based distinctions in counseling services offered for female and male status offenders. Generally, the four secure detention facilities did not routinely provide counseling to females or males. Facility officials told us that youth who requested counseling or who displayed suicidal tendencies were referred to community health-care providers. The officials added that their facilities basically were temporary holding centers for youth awaiting juvenile court processing and were not designed to provide treatment services. According to these officials, while many of the resident youth may need counseling and mental health services, the centers were not the appropriate facilities for providing these services. At the six shelters, the resident female and male youth were provided weekly counseling services (individual, group, or both). Individual counseling, available at all six shelters, ranged from 2 hours to 6 hours per week. Group counseling, available at five of the shelters, ranged from 4 hours to 14 hours per week. The other five facilities (four group homes and one nonresidential program), which served either females or males, also provided individual counseling (ranging from 1 hour to 4 hours per week) and group counseling (1 hour to 5 hours per week). At the 10 co-educational facilities, we did not find gender-based distinctions in the availability of educational services for female and male status offenders. Youth at these 10 facilities attended public schools or on-site schools, with 1 exception. The four group homes, each serving either only females or only males, sent youth to local public schools, an on-site alternative school, or alternative schools operated by the state and the local public school district. The 15th facility (nonresidential program) was an alternative school and, therefore, provided education on-site. According to service providers at all 15 facilities we visited, females and males received needed medical services, either at the respective facility or from local community health-care providers. Generally, we did not find gender-based distinctions in the availability (from either on-site or community sources) of medical care for females and males at the 10 co-educational facilities, except for services, such as gynecological services and prenatal care, which were not applicable to males. Admission physicals were the only gender-based difference we noted. At two of the female-only group homes, health examinations included testing (which could be refused at one of the homes) for sexually transmitted diseases, whereas, at similar male-only facilities operated by the same organizations, such testing was not done unless requested by the males. Only 5 of the 10 co-educational facilities, 4 detention facilities and 1 shelter, had on-site medical personnel. Each of these five facilities had a doctor on-site at least 1 day per week. Also, each of the four detention centers had a nurse on-site at least 5 days per week, and the shelter had a nurse on-site 3 days per week. Some of the on-site service providers told us that their facilities were often overcrowded and in need of additional medical staff. 
At two detention facilities, for example, officials told us that the on-site nurse could not fully treat all of the females and males on each day’s sick list. According to the officials, the nurse at this facility had to select which patients to treat. The other five co-educational facilities (five shelters) did not have doctors or nurses on site. Residents of these facilities relied on parents, guardians, or, if necessary, facility staff to provide access to community health-care services. The remaining five facilities—three group homes serving only females, one group home for only males, and the nonresidential program for females—generally did not have on-site medical personnel and, thus, relied on community health-care providers. Some officials at the shelters and group homes that did not have on-site medical personnel told us that such resources were needed for medical services. For instance, one official explained that counselors had to use their already limited counseling time to dispense medication and transport youth to doctors’ offices. Juvenile court judges, detention officials, and service providers in the nine counties we visited said that more facilities and services were needed for both female and male status offenders. Some of the juvenile justice representatives and professional staff said that early intervention services were needed for first-time offenders to divert them from further involvement with the juvenile justice system. For example, some judges said that while not all status offenders become delinquent offenders, the majority of the juvenile delinquents appearing in their courts had a previous history of status offenses. Most of the juvenile justice officials and service providers we interviewed told us that status offenders did not need gender-specific treatment or services, except for gynecological services and prenatal care for females. In fact, representatives from the female-only and male-only facilities said that their programs could be replicated to provide the same counseling and mental health services to status offenders of the opposite sex. Other officials added that gender did not play a role in determining a youth’s individual treatment needs because each youth had unique needs. Further, some service providers said that facilities should serve both females and males because the two genders would have to communicate and interact on a daily basis, such as they would in real-life situations. Some service providers pointed out that advantages exist to having single-gender facilities because distractions or anxieties could be created when both genders participate in the same counseling and treatment programs. For example, a service provider at a female-only facility told us that many of the females had experienced some form of abuse by males. Thus, according to this provider, a female-only program was more conducive to helping the females work through their feelings and build self-esteem. Appendix IV provides more details about our visits to the selected facilities in the four case-study states. We are sending copies of this report to the Attorney General; the Administrator, Office of Juvenile Justice and Delinquency Prevention; the Director, Office of Management and Budget; and other interested parties. Copies will also be made available to others upon request. Major contributors to this report are listed in appendix VI. If you have any questions about this report, please contact me on (202) 512-8777. The 1992 reauthorization (P.L. 
102-586) of the Juvenile Justice and Delinquency Prevention Act of 1974 (P.L. 93-415) mandated that we study gender-bias issues in state juvenile justice systems. Specifically, we agreed with the Committees to compare the outcomes of intake decisions and frequency of detentions, adjudications, and out-of-home placements of female and male status offenders and compare the availability of facilities and services for female and male status offenders in selected jurisdictions. In addressing these objectives, we reviewed relevant literature. Regarding the first objective, we analyzed the frequency of detentions, adjudications, and out-of-home placements of petitioned status offenders by gender at the national level, and we made comparisons within selected states. Further, we analyzed intake decisions within selected states. Regarding the second objective, we visited a total of 15 facilities in 4 states. Finally, we obtained additional perspectives on these juvenile justice issues by mailing a survey to a national sample of county juvenile justice probation department officials. To develop an understanding of gender-bias issues associated with state juvenile justice systems, we reviewed relevant literature identified in bibliographies provided us by the National Center for Juvenile Justice (NCJJ) and the Department of Justice’s Office of Juvenile Justice and Delinquency Prevention (OJJDP). Our review of the literature aided us in defining gender bias and in designing models to conduct our analyses of intake decisions, detentions, adjudications, and out-of-home placements in selected states. To develop national statistics comparing the frequency that female and male status offenders were detained, adjudicated, and placed, we used juvenile court data collected annually by NCJJ. Each year, NCJJ collects juvenile court case-level data from various states and jurisdictions and assigns weights to the data, which permits projecting the data to produce national estimates of cases disposed by all state juvenile justice systems. OJJDP publishes the weighted data in its annual report entitled Juvenile Court Statistics. Using NCJJ’s data files (the National Juvenile Court Data Archive), we developed statistics for a 6-year period from calendar years 1986 to 1991. More specifically, we developed national estimates of the gender-specific probabilities of detentions, adjudications, and out-of-home placements for petitioned status offenders by offense categories for the 6-year period and annually. Our comparative analyses of NCJJ data have some significant limitations. For example, the NCJJ data did not represent the universe of status offenders. Rather, the data included only those status offenders who were petitioned to or otherwise handled more formally by the juvenile courts. Thus, the data did not include status offenders who were picked up, counseled, and/or released by the police. Nor did the data cover those juveniles who received informal dispositions from county juvenile department officials during intake screening. For example, intake officials may counsel and release the juveniles or divert them to social service agencies. Another significant limitation of our national-level analyses is that any differences in the resulting frequency and probability statistics (comparing female and male status offenders in reference to detentions, adjudications, and placements) cannot be used to draw interpretations or conclusions about either the presence or the absence of gender bias. 
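Because the national figures are projections from weighted case-level records, the gender-specific probabilities described above amount to ratios of weighted case counts. The following sketch illustrates that calculation with a toy data set; the column names, weights, and use of the pandas library are invented for illustration and do not reflect the actual structure of the National Juvenile Court Data Archive.

```python
import pandas as pd

# Toy weighted case-level records; the column names and weights are invented
# and do not reflect the actual layout of the National Juvenile Court Data Archive.
cases = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M"],
    "offense":  ["runaway", "runaway", "liquor", "runaway", "liquor", "liquor"],
    "detained": [1, 0, 0, 1, 0, 1],
    "weight":   [120.5, 98.0, 210.3, 75.2, 64.9, 150.0],
})

# National estimate of the probability of detention, by gender and offense:
# weighted detained cases divided by weighted petitioned cases.
grouped = (cases.assign(detained_wt=cases["detained"] * cases["weight"])
                .groupby(["gender", "offense"])[["detained_wt", "weight"]]
                .sum())
grouped["p_detained"] = grouped["detained_wt"] / grouped["weight"]
print(grouped["p_detained"])
```

The same ratio-of-weighted-counts logic applies to the adjudication and placement probabilities, with the denominator restricted to the cases eligible for each stage.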
For the purposes of our review, we defined gender bias as differences in juvenile justice systems’ outcomes (intake decisions, detentions, adjudications, and placements) of female and male status offenders who had similar characteristics, such as age, status offense, and offense history. Thus, because NCJJ’s national data files contained insufficient information on prior offense histories and other variables relevant to judges’ decisions in the cases, we could not use our national-level analyses to draw interpretations or conclusions about gender bias. Despite these limitations, the national-level frequency and probability statistics provide a useful overview regarding petitioned status offenders. While NCJJ’s national data files did not contain sufficient information for directly analyzing gender-bias issues, some of the Center’s state-specific files did have a wider range of variables (including prior offense histories) to permit such analyses. For example, in addition to gender and type of status offense, some of the variables relevant to our analyses were: the age of the youth at the time of referral to the juvenile justice system, the outcome or finding of the adjudicatory hearing, and whether the youth had any previous referrals and/or adjudications. Thus, to conduct more detailed analyses of intake decisions, detentions, adjudications, and placements, we selected the following 7 states from the total of 25 states that provide data to NCJJ: Arizona, California, Florida, Missouri, Nebraska, South Carolina, and Utah. In addition to geographical coverage, we considered the following factors in selecting these seven states. The states’ juvenile justice systems reflected a diverse range of processes for handling youthful offenders. The states’ data files contained a sufficient number of relevant variables to permit construction of models to test the respective state’s juvenile justice system for indications of gender bias in the handling of similarly situated female and male status offenders. For each of the seven states selected, we obtained a copy of NCJJ’s computerized data files for calendar years 1990 and 1991, the most recent years for which consistent data were available. Then, using the 1990 and 1991 data files for all status offenders, we constructed logistic regression models for intake decisions. We used a class of models commonly used in criminological research to analyze these types of outcomes. For petitioned status offenders, we constructed logistic regression models to test for gender-based differences (if any) in three other aspects of juvenile justice processing. These models contained variables to measure offenders’ characteristics. First, we tested how the characteristics affected the probabilities associated with female and male status offenders’ being detained before adjudication. Second, we tested how the characteristics affected the probabilities, by gender, of being formally adjudicated as a status offender. Third, we tested how the characteristics affected the probabilities of females’ and males’ receiving placement as a final disposition. However, we could not address possible gender bias elsewhere in the juvenile justice system because data did not exist. For example, the data did not include youths who were handled informally—that is, picked up, counseled, and/or released by the police or by county juvenile department intake officials. Appendix III presents the results of our regression analyses of intake decisions, detentions, adjudications, and placements. 
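The four sets of models just described were fit to nested samples: intake models used all referred cases, detention and adjudication models used only petitioned cases, and placement models used only adjudicated cases. A minimal sketch of that subsetting follows; the column names are illustrative assumptions about how a state’s case-level file might be coded, not the actual NCJJ field names.

```python
import pandas as pd

def analysis_samples(cases: pd.DataFrame) -> dict:
    """Split one state's case-level file into the samples used for each model.

    Assumes 0/1 indicator columns named 'petitioned' and 'adjudicated';
    these names are illustrative, not the actual NCJJ field names."""
    petitioned = cases[cases["petitioned"] == 1]
    adjudicated = petitioned[petitioned["adjudicated"] == 1]
    return {
        "intake":       cases,        # all cases referred to the intake office
        "detention":    petitioned,   # modeled for petitioned cases only
        "adjudication": petitioned,   # modeled for petitioned cases only
        "placement":    adjudicated,  # modeled for adjudicated cases only
    }
```

The same nesting explains why the case counts reported later, in table III.1, shrink at each successive stage of processing.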
To gain an understanding of the juvenile justice systems in the seven states in our analyses, we interviewed state officials in various jurisdictions within those states, including judges, prosecutors, and juvenile justice specialists. Those interviews covered many topics, including the referral process; the prosecution, adjudication, and disposition of juveniles; the juvenile justice systems in various jurisdictions; workload; and state laws as they related to the processing of juvenile offenders. To develop comparative information about the availability of facilities and services for female and male status offenders, we visited a total of nine counties—generally two counties (a rural county and an urban county) within each of four states (Florida, Kentucky, Maryland, and Texas). In judgmentally selecting these states, our primary consideration was that we wanted to visit juvenile justice jurisdictions that reflected various approaches for handling status offenders and/or provided a variety of facilities and services, including some facilities serving only females, some serving only males, and some serving both genders. Thus, in selecting states to visit, we first solicited suggestions from juvenile justice professionals with national or multijurisdiction experience. These professionals included, for example, OJJDP officials, as well as representatives of advocacy groups, such as the Coalition for Juvenile Justice and the National Girls’ Caucus. Following are more specifics regarding our reasons for selecting each state. Florida had begun a process of privatizing services to status offenders by contracting with the Florida Network of Youth and Family Services, Inc., which operated residential shelters and nonresidential treatment and counseling sites throughout the state. Also, according to OJJDP officials, Florida had a female-specific program (the Practical and Cultural Education Center for Girls) that had received national attention. Kentucky, in 1986, had enacted legislation providing for informal processing of juveniles involved in less-serious offenses. These juveniles may enter into diversion agreements, which impose conditions such as community service, counseling, curfew, and restitution. Maryland tries to divert status offenders from the juvenile justice system into nonresidential counseling programs operated by youth service bureaus, which are private, not-for-profit organizations under contract with the state’s Department of Juvenile Services. Also, according to OJJDP officials, Maryland was one of only a handful of states that began planning for gender-specific services for juvenile offenders before such planning was required by federal legislation. Texas is a populous state with a relatively large number of juveniles. According to 1990 census data, 3 of the 10 most populous U.S. cities are in Texas. We visited Dallas and San Antonio, which we selected on the basis of our available staff. Generally, in deciding which counties to visit in each of the four states, a primary criterion we used was the relative volume of status offenders referred to and/or detained by the local juvenile justice systems. We obtained referral and detention information by reviewing (1) periodic reports that county juvenile justice officials submit to the respective state’s office of the governor and (2) each of the states’ current 3-year plans submitted in conjunction with applications for formula-grant funding under the Juvenile Justice and Delinquency Prevention Act. 
Using these data sources and considering suggestions of state juvenile justice specialists, we selected one urban and one rural county to visit in each of the four states, except in Texas, where we selected two urban counties—Dallas County and Bexar County. We selected two urban counties in Texas because we wanted to contrast different approaches for dealing with status offenders. For example, Dallas County had a separate juvenile probation facility (the Letot Center) specifically designated for only status offenders, while Bexar County had no such separately designated facilities. Also, each county had one of the nation’s 10 most populous cities—Dallas and San Antonio. Generally, in each of the selected counties, we interviewed local juvenile justice officials (judges, law-enforcement officers, detention facility officials, and others) to obtain overview perspectives on the availability of facilities and services for status offenders. Also, we visited facilities that the state and local officials identified as having services or being placement options for status offenders. In total, we visited 15 facilities—4 detention facilities, 6 shelters, 4 group homes, and 1 nonresidential program. At the facilities, we obtained information about the capacity, or number of beds available; genders served by offense category; extent of overcrowding, if applicable; and average lengths of stay. Also, we toured the facilities to obtain information on available counseling, educational, and medical services—that is, the services most relevant to the principal needs of status offenders. In addition, we interviewed the service providers (the professional staff responsible for providing counseling, educational, and medical services) at each of the facilities to obtain views on the treatment needs of status offenders, including views on the need for gender-specific services. We did not verify the information facility officials gave to us, nor did we try to evaluate or compare services provided. The results of our visits cannot be projected to other counties and facilities within the respective states, and comparisons should not be made between states. We conducted a mail survey of county probation department officials nationwide to obtain their views on issues concerning gender bias. At our request, NCJJ gave us a list of all juvenile probation departments in the United States. NCJJ identified 1,410 officials whose titles indicated that they were the main officials in juvenile probation departments. Titles on the list included “chief probation officer,” “court services director,” and “court administrator.” We referred to all such individuals as “chief probation officers.” The list of 1,410 officials was developed by eliminating duplicates in counties listing more than one individual as the chief probation officer. NCJJ then selected a random sample of 500 such officials for our sample. Although we sent our survey to the individual listed, some questionnaires were actually completed by other individuals in their offices (see app. V). The survey was designed to (1) identify differences in relationship to gender in the juvenile justice system’s processing of status offenders and (2) obtain perspectives on the availability of facilities and services for status offenders. By using a national sample, we were able to project the results of our study to a population of 1,249 chief probation officers. 
We designed and pretested the survey in March and April 1994 and mailed it to the 500 randomly selected officials in May 1994. As needed, we made some follow-up inquiries by mail and/or telephone to help ensure an adequate response rate. We determined that 57 questionnaires had been sent to offices that did not handle status offenders; therefore, we eliminated these offices from our sample and adjusted the universe accordingly. Our resulting study population was 1,249 chief probation officers, and our valid sample consisted of 443 such individuals. We received a total of 349 usable responses out of the 443 surveys mailed, for a response rate of 79 percent. All such samples are subject to sampling error. All percentage estimates noted in this report are within plus or minus 5 percentage points, using a 95-percent confidence interval, with the following exceptions, which either exceed the 5-percent range or are calculated using a Poisson distribution because of the small number of responses. All sampling errors reported here use the 95-percent confidence interval.
Estimate referred to as “About 70 percent” on pages 3 and 15: 71.4 percent, sampling error is 5.8 percent.
Estimate of 2.6 percent in table 3: confidence interval for the percentage is from 1.5 percent to 4.2 percent.
Estimate of 6.6 percent in table 3: confidence interval for the percentage is from 5.0 percent to 8.6 percent.
Estimate of 6.9 percent in table 4: confidence interval for the percentage is from 5.2 percent to 8.9 percent.
Estimate of 1.4 percent in table 4: confidence interval for the percentage is from .7 percent to 2.8 percent.
Estimate of .6 percent in table 4: confidence interval for the percentage is from .2 percent to 1.8 percent.
Estimate of 2 percent on page 16: confidence interval for the percentage is from 1.1 percent to 3.5 percent.
In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce nonsampling errors. For example, variations in the wording of questions, the sources of information available to the respondents, or the types of people who do not respond can lead to somewhat different results. We included steps in both the data collection and data analysis stages for the purpose of minimizing such nonsampling errors. For example, we pretested the survey on members of the target population. All returned surveys were manually edited, double-keyed, and verified for accurate data entry, and all computer analyses were checked by a second independent analyst. According to the National Center for Juvenile Justice (NCJJ) data, 500,620 status-offense cases were petitioned to juvenile courts in the United States during the 6-year period from 1986 to 1991. As mentioned in appendix I, because NCJJ’s national data files contained insufficient information on prior histories and other variables relevant to judges’ decisions in the cases, our national-level analyses cannot be used to draw interpretations or conclusions about either the presence or the absence of gender bias. Of the total petitioned status-offense cases, 206,756 cases (41.3 percent) involved females and 293,864 cases (58.7 percent) involved males. These proportions were fairly consistent across the 6 years. (See tables II.1 and II.2.) In terms of gender distinctions, two specific offense categories with noticeable differences in the frequency (number) of female and male status-offense cases petitioned to juvenile court were running away and liquor offense. 
Running away appeared to be a predominantly female category. For the 6-year period shown in tables II.1 and II.2, females were involved in 61.9 percent of the total 83,000 petitioned running away cases, and males were involved in the other 38.1 percent. In contrast, liquor offense appeared to be a predominantly male category. Of the total 156,317 petitioned liquor offense cases during 1986 through 1991, males were involved in 74.3 percent of the cases, and females were involved in the other 25.7 percent. During 1986 through 1991, of the total 500,620 status offense cases petitioned to juvenile courts, 10.7 percent (53,748 cases) involved secure detention of the alleged offender before disposition. Of the total detention cases, 43.4 percent (23,326 cases) involved females and 56.6 percent (30,422 cases) involved males. Table II.3 presents the results of our probability analyses regarding the 53,748 cases involving secure detention during 1986 through 1991. Generally, the probabilities, or percent chances, for females and males within each respective offense category were similar. For example, during the 6-year period shown, a female status offender petitioned for a liquor offense had a 4.96-percent chance of being detained, compared with a 6.37-percent chance for a male offender. For most offenses, the probability of being detained decreased for both males and females between 1986 and 1991. For example, the probability of female runaways’ being detained decreased from about 33 percent in 1986 to about 13 percent in 1991; for males, the percentage dropped from 38 percent to 23 percent. During 1986 through 1991, of the total 500,620 status-offense cases petitioned to juvenile courts, 62.0 percent (310,363 cases) were formally adjudicated as status offenders. In these 310,363 cases, the adjudicatory hearings resulted in formal findings or determinations of status-offense conduct. Of the 310,363 adjudicated cases, 40.3 percent (124,923 cases) involved females and 59.7 percent (185,440 cases) involved males. Table II.4 presents the results of our probability analyses regarding the 310,363 adjudicated cases during 1986 through 1991. Generally, the adjudication probabilities for females and males within each respective offense category were comparatively similar. For example, during the 6-year period shown, a female status offender petitioned for a liquor offense had a 57.34-percent chance of being adjudicated, compared with a 59.69-percent chance for a male offender. During 1986 through 1991, of the total 310,363 adjudicated status-offense cases in the United States, 18.3 percent (56,725 cases) resulted in out-of-home placement dispositions for the offenders. Of these 56,725 cases, 42.4 percent (24,077 cases) involved females and 57.6 percent (32,648 cases) involved males. Table II.5 presents the results of our probability analyses regarding the 56,725 out-of-home disposition cases during 1986 through 1991. Here again, the probabilities, or percentage chances, for females and males within each respective offense category were comparatively similar. For example, during the 6-year period shown, a petitioned female status offender adjudicated in the running away category had a 31.25-percent chance of receiving an out-of-home disposition, compared with a 34.68-percent chance for a petitioned male. This appendix describes our research to measure gender bias in the case processing of status offenders in four juvenile justice system outcomes. 
These outcomes were: (1) the intake decision to petition status offenders to juvenile court versus the decision to handle them informally; (2) the decision to detain petitioned status offenders securely prior to an adjudicatory hearing; (3) the outcome of an adjudicatory hearing; and (4) the decision to place adjudicated status offenders out-of-home in secure or nonsecure placements. We analyzed 1990 and 1991 juvenile court data from up to seven states or counties within selected states for each of the four outcomes. We measured gender bias in these four outcomes as the discrepancy or gap between females’ actual outcomes and the outcomes that they would have received had they been treated as males were treated. More specifically, we used juvenile court case-level data to estimate gender-specific logistic regression equations of the relationships between each of the four outcomes and case characteristics. That is, for female and male status offenders, we estimated separate regressions for whether (1) a case was petitioned at intake, (2) a case petitioned at intake was detained, (3) a petitioned case was adjudicated, and (4) an adjudicated case was placed out-of-home. We included as independent or explanatory variables in our regressions three types of case characteristics. These characteristics were: (1) offense-related characteristics, such as current offense and prior offense history; (2) justice-system variables, such as the source of referral to the juvenile court, the location of the court, and, for the adjudication and placement outcomes, whether the case was detained during its processing; and (3) offender characteristics, such as age and race. The variables in the final models were selected from a broader set of variables using appropriate statistical techniques. The broader set of variables was identified from the literature on gender bias, but it was limited to those variables actually available in a given state’s database. We estimated the separate logistic regressions by gender, to derive gender-specific estimates of the juvenile justice systems’ treatment of females’ and males’ characteristics. We took these estimates of the systems’ treatment of males’ characteristics and applied them to females’ average characteristics to predict females’ outcomes if their characteristics were treated equal to males’. We defined as gender bias the gap between these two sets of outcomes—i.e., those models predicted for females versus those that we estimated would have occurred had females been treated as males. In general, we found that females received outcomes that were similar to the ones they would have received if their average characteristics had been treated like males’ characteristics. In only 6 of the 25 models, across the 4 outcomes in the 7 states that we analyzed, did we find outcomes that we characterize as evidence of gender bias. Across states, but within case-processing outcomes, we found some similarities and some differences in the variables that were associated with the outcomes. For example, prior offense history tended to be strongly and positively associated with each of the four outcomes across the states (that had variables measuring prior offense history). However, the effects of other characteristics on particular outcomes were not consistent across states. 
For example, whether a case was referred to the courts by law-enforcement agencies was positively associated with the likelihood of detention in Arizona, California, and Nebraska; but it had no effect on the likelihood of detention in Florida, Missouri, and South Carolina. The models alone do not explain why these differences occur. For example, the difference between states may be due to differences in police procedures, police practices, or laws. We found similarities and differences across the states in the characteristics of females and males who were processed by their juvenile justice systems. Across states, males tended to have more prior contacts with the juvenile justice system than females, and males also tended to be slightly more likely to be referred to intake by law-enforcement agencies than females. Also across the states, we found gender differences in the types of offenses for which status offenders were referred to juvenile courts. There tended not to be differences between males and females on the basis of age and race. Finally, within and across states and outcomes, we found some gender differences in the courts’ treatment of individual characteristics. Specifically, we found cases in which variables had opposite effects on the likelihood of an outcome for females and for males. For example, in California, females referred to the court by law-enforcement agencies were less likely to be petitioned to juvenile court than females referred by other sources; however, males referred to the courts by law-enforcement agencies were more likely to be petitioned than males referred by other sources. In general, however, the direction of the effects of variables was consistent between the females’ and males’ equations. That is, the same variables that increased or decreased the likelihood of a particular outcome for females also tended to increase or decrease the likelihood of that particular outcome for males. In addition, we had cases in which a variable influenced an outcome for one gender, but not the other gender. We analyzed calendar year 1990 and 1991 juvenile court case-level data for up to seven states for each of four case-processing outcomes. The outcomes were (1) whether a case was petitioned by intake staff, such as juvenile probation officers, to juvenile court for more formal handling or hearing by a judge; (2) whether a case petitioned to juvenile court was detained before its formal hearing; (3) whether petitioned cases were adjudicated as status offenders; and (4) whether adjudicated cases were placed out-of-home. Table III.1 reports the number of cases used in the analysis for each stage. Table III.2 reports the proportion of female cases in each stage. The number of cases referred in table III.1 represents the total sample of cases coming into the juvenile justice systems in each state, that is, cases referred from law-enforcement officers, schools, family, social service agencies, and other sources. From the cases referred, a subset is petitioned at intake to juvenile court (the number petitioned). Of those petitioned, a subset is detained (the number detained), and a subset is adjudicated as status offenders (the number adjudicated). Finally, of those cases adjudicated, a subset is placed out-of-home (the number placed). The data in table III.1 show that the number of cases referred to the respective juvenile justice systems ranged from almost 41,000 in Missouri to about 8,700 in the 5 California counties. 
The number of cases processed at each of the other stages—detention, adjudication, and placement—also varied across the states. Table III.2 shows the proportion of females at each stage for each state. These proportions varied by outcomes and states. For example, in Utah, about 30 percent of the cases referred to the juvenile courts were females, whereas, in South Carolina, about 49 percent of the cases referred were females. Similar ranges and variability across the states occurred in other stages of processing. In table III.3, we report the gender-specific aggregate probabilities for each of our four decision points by state. The following probabilities were defined:
The probability of being petitioned at intake equals the number of cases petitioned to juvenile court divided by the number referred to the intake office.
The probability of secure detention equals the number of petitioned cases detained securely divided by the number of petitioned cases.
The probability of adjudication equals the number of petitioned cases adjudicated as status offenders divided by the number of petitioned cases.
The probability of placement equals the number of adjudicated cases receiving an out-of-home placement divided by the number of adjudicated cases.
As in table III.1, table III.3 shows that there was a wide variability across states in the probabilities at each stage. There were also gender differences within states in the probabilities at particular stages. For example, the probability of being petitioned at intake to juvenile court for females ranged from about 11 percent in Arizona, California, Florida, and Missouri to about 42 percent in South Carolina. Within states, there were gender differences in the probability of being (1) petitioned at intake, in Arizona and Utah; (2) detained, in Arizona and California; (3) adjudicated, in Nebraska; and (4) placed, in Arizona and Nebraska. Alone, differences in these aggregate probabilities did not reveal gender bias. The probabilities did not account for gender-specific differences in the distribution of case characteristics that were associated with each of the outcomes. The presence or absence of gender differences in the probabilities may mask gender differences in case characteristics or gender differences in the manner in which the respective juvenile justice systems treated the characteristics. Gender differences in the treatment of characteristics could lead to gender bias in outcomes. For example, the absence of a large gender difference in the probability that cases were petitioned at intake to the court in Missouri (.1195 for males as compared to .1150 for females) could mask gender bias or gender differences in treatment. If, for example, intake offices in Missouri were more likely to petition male liquor offenders than female liquor offenders, but female liquor-law violators comprised a larger portion of the sample of female cases, then the aggregate probabilities of being petitioned at intake may mask the difference in treatment on similar characteristics. We measured gender bias as the gap or discrepancy between females’ outcomes as determined by their average characteristics and females’ outcomes under the assumption that their average characteristics were treated the same as males. We devised a measure—the gender-bias quotient—to summarize the degree to which these two sets of outcomes differed. The gender-bias quotients were developed from the results of the gender-specific regressions of each of the four outcomes. 
In general, we estimated separate models for females and males using the case characteristics as predictors or independent variables. Upon estimating the regressions, we produced parameter estimates of the influence on a dependent variable of each of the independent variables. For each outcome, we had two sets of parameter estimates, one for females and one for males. We used the parameter estimates and the case characteristics for females and males to construct the gender-bias quotients. To do so, we calculated two sets of predicted average probabilities for females. The first set of predicted probabilities we called the “model probabilities.” These were the predicted average probabilities for females for each outcome, e.g., the probability of being petitioned to juvenile court. The model probabilities were calculated using the mean or average characteristics of females in the sample. To compute the model probabilities, we multiplied the female parameter estimates for each independent variable by the respective means of the independent variables for females. We summed across these products and transformed the result into a probability to produce the model probabilities. The second set of probabilities we calculated was the “equal treatment” probabilities. We followed a procedure similar to the one above. However, in this case, we multiplied the parameter estimates for males by the average characteristics of females, summed the products, and transformed the result into the “equal treatment” probabilities. The ratio of the equal treatment to the model probabilities yielded the gender-bias quotient. The gender-bias quotient measures the extent to which females’ outcomes diverge from the outcomes they would have received if their characteristics had been treated as males’ characteristics were treated. The gender-bias quotient is an aggregate measure in that it is produced by summing across the effects of different variables. It is possible, therefore, that the aggregate gender-bias quotients may show little or no gender bias, but that there may be gender differences in treatment on particular variables. The results of our regression analysis enabled us to identify situations where there were differences in treatment on particular variables but no aggregate gender bias, as measured by the gender-bias quotients. Further, the method we used to construct the gender-bias quotients takes into account two sets of influences on each of the four case-processing outcomes. The first influence is the differences in the average characteristics of female and male status offenders across all cases. The second influence is the differences in how females’ and males’ characteristics were treated. Discrepancies between the two sets of predicted probabilities that comprise the gender-bias quotients arising from the first set of influences are not indicators of gender bias; those discrepancies arising from the second set are indicators. The distinction between these influences stems from the fact that the outcomes we reviewed—petitioned at intake, detention, adjudication, and placement—may be determined by a number of variables, such as current offense, prior offense history, age, and race. If some variables had larger influences on these outcomes than others and the variables with larger influences were correlated with gender, then there would be gender differences in these outcomes. Such differences would not be characterized as gender bias, however, because they are explained by the gender differences in the distribution of case characteristics. 
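The quotient construction described above (multiply the parameter estimates by the female means, sum the products, transform the result to a probability, and take the ratio) can be written compactly as a few lines of arithmetic. The sketch below uses invented coefficient and mean values purely to show the calculation; it is not drawn from any of the estimated state models, and the use of the numpy library is an illustrative choice.

```python
import numpy as np

def inverse_logit(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical parameter estimates (intercept first) from the separate female
# and male logistic regressions for one outcome, e.g., detention. These values
# are invented for illustration and are not taken from any state model.
beta_female = np.array([-2.10, 0.45, 0.30, 0.08])
beta_male   = np.array([-1.90, 0.55, 0.25, 0.05])

# Hypothetical mean characteristics of the female cases (1.0 for the intercept).
x_bar_female = np.array([1.0, 0.6, 0.4, 15.2])

p1 = inverse_logit(beta_female @ x_bar_female)  # model probability, p(1)
p2 = inverse_logit(beta_male @ x_bar_female)    # equal treatment probability, p(2)

print(f"p(1) = {p1:.3f}, p(2) = {p2:.3f}, gender-bias quotient = {p2 / p1:.2f}")
```

In this invented example the quotient works out to about .87, which lies within the .2 band around 1 used in this report and therefore would not be characterized as gender bias.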
Failure to control for gender differences in case characteristics may lead to the incorrect inference that there is gender bias in the outcomes, when, in fact, what has been observed is gender differences in the distribution of variables associated with outcomes. On the other hand, estimated differences in the way the juvenile justice system evaluates females’ and males’ characteristics, apart from the distribution of these characteristics across cases, would indicate gender bias. That is, differences in the magnitude or direction of the influence of variables between females and males, regardless of the distribution of these variables between females and males, indicate that there is gender bias. For example, suppose, regardless of gender, that the probability of being detained before adjudication increases with the number of prior contacts with the juvenile justice system. Everything else being equal, if a larger proportion of the sample of males had prior contacts, or if males had more prior contacts on the average than did females, then one would expect the probability of detention to be higher for males than females. This type of result would not indicate gender bias. However, if males had as many prior contacts with the juvenile justice system as females, but males with prior contacts were more likely than females with prior contacts to be detained, all else being equal, then gender differences in the probability of detention arising from this situation would indicate gender bias. The methodology we employed enabled us to distinguish between these two sources of influences on the outcomes we analyzed. We were able to (1) evaluate the extent to which the distribution of characteristics differed between females and males and (2) measure whether there were gender differences in the juvenile justice systems’ treatment of these characteristics. To assess gender bias we estimated separate regressions for females and males for each of the four decision outcomes in the seven states. We fit the regressions on a state-by-state basis using variables that measured case characteristics in each state’s data set. We imposed as few restrictions as possible on our representations of each state’s juvenile justice system; in other words, each state’s regressions may have had a different number of variables. The four dependent variables in our analysis—whether a case was petitioned to juvenile court, detained, adjudicated as a status offense, or placed out-of-home—were dichotomous. Our ultimate interest was in the gender-specific probabilities of status offenders’ being petitioned at intake, detained, adjudicated, and placed. This posed two problems. First, the dichotomous dependent variables violated the assumptions underlying the classical, or linear, regression model. Specifically, the errors were heteroskedastic. Second, we wanted to use the regression results to predict aggregate, gender-specific probabilities for our outcomes, rather than simply predict the outcomes in individual cases. The problems posed by the nature of the dependent variables and the need to estimate probabilities were solved by using a logistic specification for the regressions. This specification is commonly chosen by criminologists who analyze data containing dichotomous outcomes, such as whether a case was convicted. Using a logistic specification to estimate the parameters, we took the following three steps to estimate parameters and calculate predicted probabilities. 
First, by state, we estimated the separate regressions for each of the four dependent variables. We included specific variables in the regressions by assessing the adequacy of the models both in terms of the individual variables and from the point of view of the overall fit of the model to the data. In general, we sought to build the most parsimonious models consistent with the data, but we also attempted to include theoretically relevant variables—such as the type of status offense—where possible. Second, we used the gender-specific parameter estimates to calculate the predicted probabilities of each outcome for females and for males, using the formula p = e^logit(p)/(1 + e^logit(p)). In this formula, “p” is the predicted probability of an outcome, e.g., the probability of being petitioned at intake; “e” refers to the operation of exponentiating; and “logit(p)” is the estimated logit of the probability of the particular outcome. The logit was evaluated at the mean levels of the variables in the regression equation. These probabilities, the model probabilities, indicated how females and males, respectively, were treated by the courts on their average characteristics. Third, we used the parameter estimates from the males’ equations to estimate outcomes for females if they were treated in the same way as males. These probabilities were labeled “p(2)” or the “equal treatment probabilities” for females. We computed these probabilities by multiplying the means of the females’ variables by the parameter estimates from the males’ equations. We used these products to predict the equal treatment probabilities for females. Finally, we took the ratio of the two sets of probabilities—“equal treatment” to “predicted,” or p(2) to p(1)—to estimate the gender-bias quotient. As the gender-bias quotients approach 1, the amount of gender bias diminishes. Gender-bias quotients greater than 1 indicate that females were less likely to receive a particular outcome than if their characteristics were treated as males’ characteristics. Gender-bias quotients less than 1 indicate the reverse, that females were more likely to receive an outcome than if their characteristics were treated equally to males’ characteristics. For example, a hypothetical gender-bias quotient of .7 for detentions in a state would suggest that females were more likely to be detained in that state than males with similar characteristics; a quotient of 1.3, on the other hand, would indicate that females were less likely to be detained than males with similar characteristics. The general form of our logistic regressions was as follows. If we denote any one of the dichotomous dependent variables, for example, detention by D, then the probability of detention, conditioned on a vector of case characteristics X and a vector of effects B, is given by p(1) = P(D = 1 | X, B) = [1 + exp(-B'X)]^-1. The case characteristics included in the models were variables that measured offense history, current offense, etc., as described before. The entire set of variables used in building the models is reviewed below. Because we estimated separate models for females and males, the parameters indicate the gender-specific treatments of each gender’s characteristics by the courts. To estimate the separate logistic regressions, we used maximum likelihood techniques and obtained the estimated effects for females and males. From these, and the mean values of the independent variables, we calculated the estimated probabilities. Continuing with the example, we calculated p(1) as the probability of detention for females. 
We then calculated a second probability of detention, the equal treatment probability, or p(2): p(2) = [1 + exp(-Bm'Xf)]^-1, where Bm is the vector of parameter estimates from the males’ equation and Xf is the vector of mean characteristics of the female cases. From the parameter estimates, odds ratios can be calculated. The odds ratios can be interpreted in a relatively straightforward manner. The odds ratio is an estimate of how much more likely, or unlikely, it is for the outcome of interest to be present among those having a particular characteristic than those not having that characteristic. For example, an odds ratio of 4 for a variable indicating whether a status offender had prior dispositions would be interpreted to indicate that status offenders with prior dispositions are four times as likely as those without prior dispositions to have the outcome (e.g., detention) of interest. Finally, the method we used to estimate gender bias was based on methods developed by economists to measure discrimination in labor markets. Their method, called the “residual difference,” measures discrimination, or bias, in terms of the differences between the two sets of outcomes after the effects of all relevant variables have been accounted for. In the residual difference method, a bias or discrimination is the residual that cannot be explained by the variables in the model. The strength of the method lies in its ability to account for bias in terms of the differences in treatments on characteristics. The major weakness of the method lies in using an incomplete or incorrect set of variables to estimate the regressions. Depending on how they are correlated with the outcome variables, omitted variables or incorrectly included variables could reduce or increase the “residual difference.” Thus, misspecified models could lead to incorrect inferences about bias. We fit state-specific models for each decision, using the relevant variables available in the states’ data sets. We used our knowledge of each state’s system to supplement our model-building. In general, we used five common categories or classes of independent variables to build our models. These categories were (1) the current offense; (2) prior offense history; (3) juvenile justice system contact, such as the source of referral for the current offense and detention prior to adjudication; (4) personal attributes, such as age and race; and (5) the location and geographic characteristics of the court. We defined our four dependent variables as follows:
Intake decision: A dichotomous variable to indicate whether a status offense case referred to a juvenile court’s intake office was petitioned to the court for formal processing.
Detention: A dichotomous variable to indicate whether a petitioned status offender was detained securely before adjudication.
Adjudication: A dichotomous variable to indicate the outcome of an adjudicatory hearing, specifically, that a case was adjudicated as a status offender.
Placement: A dichotomous variable to indicate whether an adjudicated status offender was given an out-of-home placement.
The specific variables that fell within the categories of our independent variables were as follows:
Current offense: We used a set of indicator (dichotomous or dummy) variables to indicate whether the current offense, i.e., the referral offense, was for running away, truancy, ungovernability, liquor-law violations, or other status offenses. 
Prior offense history: We used a number of measures of prior offense history, including the number of prior juvenile court referrals for any offense over the life of the juvenile, the number of prior status-offense dispositions during the 2 years before the current referral, the number of prior delinquency offense dispositions during the 2 years prior to the current referral, and the number of prior delinquency adjudications over the life of the juvenile. Not all measures were available for each state. Source of referral: We used a set of dummy variables to indicate the source of referral. The variables for sources of referral included the law-enforcement agency, school, family, and other sources. We varied the reference category by state. Age at referral: We used the age of the status offender at the time that the case was referred. Race of the offender: We used two dummy variables to indicate whether a status offender (1) was black or (2) belonged to another race or ethnic group. Metropolitan status of the court of venue: Except in California, we measured the metropolitan status of the court by a dummy variable to indicate whether a court was located in a county belonging to a metropolitan statistical area or a primary metropolitan statistical area. We also measured the population density per square mile of the county containing the court. Detention status: For the adjudication and placement decisions, we used a dummy variable to indicate whether a case was detained securely. We fit the models to the data on a state-by-state and outcome-by-outcome basis. We developed models containing a set of independent variables that fit the data better than other combinations of independent variables in a state’s data set. Across states, our models did not necessarily contain the same subset of variables. As a result, we were not able to directly compare the size of the effects of different variables across states, although we did attempt to identify which variables in each state’s models had the biggest effects and to make general comparisons about the effects of variables. For the petitioned-at-intake decision, we used the sample of all status-offender cases referred to the intake office in a state. We did not estimate a model of the intake decision for Nebraska because data on cases handled informally were not reported for the state’s two largest counties. For the detention and adjudication outcomes, we used the sample of all cases handled formally or petitioned to the juvenile court. We did not estimate a detention model for Utah because its data set did not contain measures of detention. For the detention and adjudication outcomes, we also measured the current status offense as the referral offense. In estimating the placement outcomes, we restricted our analysis to those status-offense cases adjudicated as status offenses. For the placement outcomes, we measured the offense as the disposed offense. We did not estimate placement models for status offenders in Arizona because there were too few cases. Our findings on gender bias are summarized in table III.4. A discussion of our results pertaining to the analysis of the differences in the effects of individual parameters and of offsetting effects follows the discussion of the gender-bias quotients.
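Before turning to table III.4, the sketch below illustrates one way the dichotomous dependent variables and dummy-coded case characteristics just described could be constructed from a raw case record. The raw column names and category labels are assumptions for illustration only and do not reflect the layout of any state's actual data set.

```python
# Illustrative encoding of the dependent and independent variables described
# above; raw column names and category labels are assumed, not the states'
# actual data sets.
import pandas as pd

def encode_case_variables(raw: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame(index=raw.index)
    # Dichotomous dependent variables
    out["petitioned"] = (raw["intake_decision"] == "petitioned").astype(int)
    out["detained"] = (raw["preadjudication_detention"] == "secure").astype(int)
    out["adjudicated"] = (raw["adjudication_outcome"] == "status_offender").astype(int)
    out["placed"] = (raw["disposition"] == "out_of_home").astype(int)
    # Indicator (dummy) variables for the current (referral) offense
    for offense in ["runaway", "truancy", "ungovernability", "liquor"]:
        out[offense] = (raw["referral_offense"] == offense).astype(int)
    # Source of referral, leaving one category out as the reference group
    for source in ["law_enforcement", "school", "family"]:
        out["ref_" + source] = (raw["referral_source"] == source).astype(int)
    # Personal attributes and court characteristics
    out["age"] = raw["age_at_referral"]
    out["black"] = (raw["race"] == "black").astype(int)
    out["other_race"] = (~raw["race"].isin(["white", "black"])).astype(int)
    out["metro"] = raw["county_in_msa"].astype(int)
    return out
```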
Table III.4 shows, by gender for each state, three results for each of the dependent variables: (1) the “model probability” of having been petitioned at intake, detained securely before adjudication, adjudicated as a status offender, and placed out-of-home, or p(1), for females and males; (2) the “equal treatment probability” of the same outcomes for females, or p(2); and (3) the gender-bias quotient, or the ratio of the probability for females if treated like males to the probability for females as predicted by the model—i.e., p(2) to p(1). In analyzing the gender-bias quotients, we were interested in whether the aggregate outcomes for females differed from what they would have been if their characteristics were treated equally to males. If there were differences, as indicated by gender-bias quotients that deviated from 1, then we wanted to determine which variables in the models explained the differences, as previously discussed. Of secondary concern were those cases in which the gender-bias quotients were not different from 1, but there were differences in the treatment of specific characteristics between females and males. In most of the outcomes we analyzed across the seven states, there was little evidence of widespread gender bias. In other words, for most of the outcomes, the gender-bias quotients were near 1. This was the case in five of the six petitioned-at-intake decision models, four of the six detention models, six of the seven adjudication models, and five of the six placement models. Across a diverse set of states, which represented different types of juvenile justice systems, females and males thus tended to receive similar treatment. The exceptions to this general finding occurred in the following decision points: (1) in petitioning-at-intake decisions, females in Florida were estimated to be more likely to be petitioned to juvenile court than if they were treated equally to males; (2) in detention decisions, females in Arizona, Florida, and Nebraska were estimated to be less likely to be detained than males; (3) in the adjudication decision, females in Florida were estimated to be less likely to be adjudicated than males in that state; and (4) in the placement decisions, females in Florida were estimated to be less likely to be placed than males in Florida. In addition, while only Florida’s placement outcome deviated by more than .2 from a gender-bias quotient of 1, in two other states, Nebraska and South Carolina, the gender-bias quotients for the placement decisions were .80. Similarly, in two other states, Missouri and Utah, the gender-bias quotients were less than 1 and near .8. Overall, in four of the six states where placement data were available, the gender-bias quotients for the placement decisions were less than 1. Only the result for Florida was consistent with our definition of gender bias; in these other four states, there appeared to be a slightly higher likelihood of placing females out-of-home as compared to similarly situated males, but the magnitude of the effect in any of these four states was not large enough to lead us to conclude that there was significant gender bias. The odds ratios from the parameters of the regression models provided some insight into the reasons for gender bias in the cases identified above.
In the petitioning decisions in Florida in which females were more likely to be petitioned to juvenile court than in their equal-treatment outcomes, the gender differences in treatment arose around female runaways and in the location of the juvenile courts. Female runaways were more likely to be petitioned to court than male runaways; however, female runaways were less likely to be petitioned than female truants or liquor-law violators. In addition, female runaways comprised a larger portion of female cases than male runaways did of male cases. Females in metropolitan areas were about a third more likely to be petitioned than their male counterparts. Thus, the higher aggregate likelihood of females to be petitioned to juvenile court appeared to be due largely to differences in treatment of female runaways, who also happened to comprise a larger share of all female status offenders. In the detention decisions in which females were less likely to be detained than if they were treated like males, the gender differences appeared to arise from different sources in each state: the source of referral and the type of status offense (in the Arizona case), a variety of variables (in the Florida case), and the type of status offenders petitioned to the court (in the Nebraska case). In Arizona, petitioned females who were referred to the court by law-enforcement officers were one-tenth as likely to be detained as their male counterparts. In addition, male status offenders referred by law-enforcement officers comprised a larger proportion of the sample of all male status-offender cases than occurred among all female status-offender cases. Finally, female runaways were more likely to be detained than male runaways. In Florida’s detention outcomes, gender differences in treatment of characteristics occurred in a number of variables. Female runaways and liquor-law violators were less likely to be detained than males referred to juvenile court for these offenses, and females processed in metropolitan areas also were less likely to be detained than males. In Nebraska’s detention outcomes, the gender bias arose because of gender differences in the treatment of particular types of status offenders. In particular, females picked up for truancy, liquor, and other offenses were estimated to be less than half as likely to be detained as male truants. On the basis of their other characteristics, females and males were treated about equally. In Florida’s adjudication decision in which females were less likely to be adjudicated than their equal-treatment outcomes, the type of status offense was related to the gender bias. Specifically, female runaways were about one-third as likely to be adjudicated as male runaways, and females petitioned for liquor offenses were about one-fifth as likely to be adjudicated as males petitioned for liquor offenses. Finally, in Florida’s placement outcome, which had the gender-bias quotient that deviated the farthest from 1, and in which females were less likely to be placed than their equal-treatment outcomes, the type of status offense also seemed to be associated with the gender bias. Specifically, females adjudicated for liquor offenses, truancy, or ungovernability all were less likely to be placed than comparable males with these offenses. In addition, females adjudicated for liquor violations, truancy, and ungovernability were less likely to be placed than females adjudicated for running away.
Finally, females’ prior offense histories were not treated as severely as males, that is, females with prior offenses were not as likely to be placed as males with prior offenses. The lower likelihood of placement for females in Florida does not necessarily mean that females were better off or that males were treated more harshly than females. To determine this, it would be necessary to determine the range of treatment options associated with various placements. For example, a concern expressed in our site visits related to the treatment options available or unavailable when status offenders were placed out-of-home. Placements may be used for a variety of purposes, including providing services and protecting females from becoming victims of abuse. This latter concern may be reflected by the fact that in Florida female runaways were more likely to be placed than other types of female status offenders. In addition to using the results of the regressions to explain the occurrences of gender bias, we analyzed the regressions to identify the variables that were associated with each of the outcomes. Although differences in the way variables were measured and in the way states processed status offenders prevented us from making direct comparisons between the states on each set of models, we did assess the magnitude of the effects of the variables to identify similarities and differences. Across the six states where intake data were available, no single variable had consistent effects on the decision to petition status offenders at intake, although prior contact with juvenile court generally increased the likelihood that a case would be detained. In four of the six states, the type of status offense for which females and males were referred to the courts did have a strong association with the likelihood that the cases were petitioned to the juvenile courts. Specifically, in California, Florida, and Utah, liquor-law violators and truants were estimated to be more likely, whether they were female or male, to be petitioned to the courts than other types of status offenders. In Arizona, this was true only for truants; moreover, black males were more likely to be petitioned to court than black females. In California and Missouri, blacks of either gender were more likely to be petitioned than persons of other races. Finally, in Arizona, California, Missouri, and South Carolina, the source of referral influenced the likelihood that a case was estimated to be petitioned at intake. In particular, in South Carolina, cases referred to intake by family members were estimated to be more likely to be petitioned for both females and males than were cases referred to intake by other sources. No variables had consistent effects across all seven states. However, when the measures of prior offense history—whether measured as prior referrals, adjudications, or delinquencies—were available in a states’ data set, the prior offense history tended to be positively associated with the likelihood of detention for both females and males. The only exception occurred in the effect of prior status-offense dispositions on the Arizona detention probabilities. For males in this case, the number of prior status-offense dispositions during the 2 years before the current offense decreased the probability of detention. Other variables that had large positive effects on the probability of detention included the source of referral and the particular types of status offenses. 
Specifically, cases referred by law-enforcement agencies were estimated to be more likely to be detained for both females and males in Arizona, California, and Nebraska. In Arizona and California, the gender more likely to be detained given that a case was referred by law-enforcement officials differed. In Arizona, males referred by the police were about 14 times more likely to be detained than females referred by the police. In California, females referred by the police were about 9 times more likely to be detained than males referred by the police. In South Carolina, females referred to the court by family members or by schools were estimated to be more likely to be detained than males referred by those sources, and females referred by family members and schools were more likely to be detained than females referred by other sources. Female runaways were estimated to be more likely to be detained than other types of status offenders in Arizona, Florida, and Nebraska. On the other hand, in Florida, male runaways were more likely to be detained than were female runaways. Demographic variables, such as age and race, did not exhibit consistent effects on detention outcomes across states. However, in three states, race was associated with the likelihood of detention, and the effects of race varied with gender. Specifically, in Arizona, black females were more likely to be detained than black males; conversely, in Florida and California, black males were more likely to be detained than black females. In Nebraska, blacks—female or male—were more likely than whites to be detained. Adjudication outcomes for females and males tended to be affected most by three variables: detention, source of referral, and type of status offense. In general, detention before adjudication lowered the estimated probability of adjudication. The estimated direction of the effects of law-enforcement agencies as a source of referral tended to change between the detention and adjudication decisions. Law-enforcement referrals were estimated as more likely to be detained but less likely to result in cases’ being adjudicated as status offenders. Further, this change in the direction of effects between detention and adjudication also occurred for status offenders who were referred for running away. Runaways, in general, were estimated as less likely to be adjudicated than liquor-law violators; this was despite the fact that runaways were estimated to be more likely to be detained than liquor-law violators. These opposing effects between the two stages of the process may indicate that the juvenile courts use detention and adjudication in different ways. It is possible that detention may be viewed as analogous to a disposition for status offenders. The court may view detention as a sufficient treatment, given that a youth was warned or counseled about his behavior, and the court may not view additional sanctions as necessary. The effects of running away may also be explained in this manner. Runaways may be more likely to be detained to give officials time to contact the family and return the juvenile. These cases then may be less likely to be adjudicated because the juveniles would have been returned to their families. Other variables included in the models did not exhibit similar general trends across the states. For example, a prior offense history increased the probability of adjudication in three states, and metropolitan status decreased the probability of adjudication in three states. 
The effects of age and race were not consistent for females and males. These other variables may not have had a statistically significant effect on the adjudication outcome, or they may have had a statistically significant effect for females or males but not both, or the direction of the effects may have varied across states. In addition, the size of the effects of these variables was small, raising doubts about their overall impact on adjudication outcomes. The variations in the patterns for these variables attest to the differences in the states’ processes. With the exception of a prior offense history and the type of offense, the relationships between the independent variables and the placement outcomes were similarly difficult to characterize between the female and male equations and across the six states’ models. A prior offense history for example, was positively associated with the likelihood of placement for both females and males in four of the six states where placement data were available, while in a fifth state, a prior offense history was positively related to placement for males but not statistically significant for females. Other variables, such as the source of referral and type of status offense, were associated with the likelihood of placement, but the particular source of referral and type of offense that affected placement varied across states. For example, in Missouri, cases referred to the court by the law-enforcement agencies and schools were less likely to be placed regardless of their gender than cases referred by other sources. However, in Nebraska, males referred by family members were less likely to be placed than males referred by other sources, while females referred by family members were more likely to be placed than females referred by other sources. Alternatively, in South Carolina, males whose cases were referred by schools and family members were more likely to be placed, while females whose cases were referred by these sources were less likely to be placed than females referred by other sources. The placement outcomes for runaways were similar to those of the adjudication of runaways. In the states in which the type of status offense was associated with placement, runaways were less likely to be placed than other types of status offenses. Otherwise, the status offenses more or less likely to be associated with placement outcomes varied across the states. Finally, when the case processing outcomes within states were analyzed, there were a few variables that had consistent effects across outcomes within states. For example, in South Carolina, the source of referral influenced each outcome—particularly when the source was school or family. Cases referred by family members, regardless of gender, were more likely to be petitioned to court than cases referred by other sources. At the detention stage, the effects of source of referral varied with gender. Males referred by schools and family were less likely than females referred by these sources to be detained. Conversely, at the placement stage, males referred by schools and family were more likely to be placed than females referred by these sources. Second, in some states, the type of status offense influenced the outcomes, but the effects differed. For example, in California, truants—regardless of gender—were more likely to be petitioned at intake than liquor-law violators or runaways. 
However, at the adjudication and placement stages, male runaways were less likely to be adjudicated or placed than female runaways; but both female and male truants were about equally less likely to be adjudicated or placed than other types of status offenders. In general, the effects of specific variables differed across states and stages of processing. These differences may be due to differences among the states in the structure or objectives of juvenile courts. Generally, we found no significant gender-based differences in the counseling, educational, and medical services provided to females and males at the 15 facilities we visited, although the extent of such services varied by type of facility. However, a majority of the juvenile justice officials and all of the service providers in the counties we visited said that more facilities and services were needed for status offenders, both females and males. Table IV.1 presents background data about each of the 15 facilities we visited—4 secure detention centers, 6 shelters, 4 group homes, and 1 nonresidential program. We did not make any comparisons or evaluations regarding the quality of services for these facilities. State and county (urban or rural) Duval County (urban) Alachua County (rural) Fayette County (urban) Jefferson County (urban) Group home, residential, nonsecure To provide an alternative to incarceration or institutionalization for troubled girls by offering them academics, independent life skills training, counseling, and goal setting. (The Center also accepted dependent, pregnant, or parenting girls.) To provide for the safety, care, and custody of juveniles from the time they are detained until their cases are processed through the juvenile court. (According to facility officials, under Florida law, status offenders who are the subject of a judicial order requiring detention can be placed in a juvenile detention center.) To provide shelter and counseling to runaway and homeless youth. To provide temporary shelter and counseling to runaways to help them and their families resolve their conflicts. To provide a temporary, out-of-home placement alternative for children when secure detention is not appropriate. (The Coleman House also provided services to dependent, abused, and neglected children.) To provide for the care and custody of youth pending their release by the juvenile court. (According to facility officials, under Kentucky law, status offenders who violate a court order can be placed in a secure juvenile detention or holding facility.) To provide court-ordered residential placement, which includes counseling and educational services, and promote a positive change in the girls’ negative behaviors. (Ungovernable behavior was the most common status offense referral.) To provide court-ordered residential placement, which includes counseling and educational services, and promote a positive change in the boys’ negative behaviors. (Truancy was the most common status offense referral.) (continued) State and county (urban or rural) Johnson County (rural) Prince Georges County (urban) St. Mary’s County (rural) Bexar County (urban) Dallas County (urban) Shelter, residential, nonsecure To provide for the care and custody of youths pending their release by the juvenile court. (According to facility officials, under Kentucky law, status offenders who violate a court order can be placed in a secure juvenile detention or holding facility.) 
To provide short-term residential shelter, including assessment, counseling, and educational services, for runaway, homeless, or abused youth. To provide a protective, temporary living arrangement and counseling to runaways to help them resolve the problems in their homes. To provide a secure, temporary facility for juveniles waiting to appear in court or until placements can be arranged. To provide specialized clinical services, including individual and group counseling, for females. (The Program also provided services to dependent females.) To provide substance abuse treatment and rehabilitation services to medically indigent youth. To divert status offenders from juvenile detention, reunite them with their families whenever possible, and prevent them from committing more serious offenses and progressing further into the juvenile justice system. The four secure detention centers held females and males for short terms in physically restrictive environments pending juvenile court action. Staff at the four detention facilities told us that the majority of the youth held were males. In addition, the detention officials reported that most females and males detained at the detention facilities were delinquent offenders, not status offenders. Staff at three of the four detention facilities reported having problems with overcrowding caused by too many referrals of female and male youth. For status offenders held over 24 hours, the detention facilities’ staff reported that the average length of stay ranged from 7 days to 30 days. Also in the four secure detention facilities, female and male status offenders could be placed in the same living areas with the more serious offenders. These serious offenders included delinquents who may have committed homicide, sexual assault, robbery, or aggravated assault. Staff told us the youth placed in the facilities were separated primarily by gender because the detention facilities were generally overcrowded or had limited bed space. After gender, one detention facility considered the youths’ physical sizes and ages in making placement decisions within the female-only and the male-only living areas. For example, the younger, smaller males were not placed in the same living area with the older, larger males. Staff at another detention center told us, however, that they had no flexibility beyond gender in placing females because the facility had only one living area for females, whereas there were six living areas for males.Staff said that since most of the referrals received at the facility were delinquent males, only one living area was set aside for females. Of the 11 nonsecure facilities, the 6 shelters provided short-term care to females and males. Staff at the six shelters told us the majority of youth served were status offenders. At five of the six shelters, staff reported serving more females than males. Staff at the remaining shelter reported serving, on average, an equal number of female and male youth. Staff at two shelters also said that the shelters sometimes experienced overcrowding caused by too many female and male referrals, especially during the months that the local schools were in session. According to staff at the six shelters, the reported average lengths of stay for female and male status offenders ranged from 4 days to 45 days. Gender was the primary factor in determining living arrangements at the six co-educational shelters. 
Female and male status offenders were not commingled with serious juvenile offenders because the shelters served only status offenders, less serious delinquent offenders, and dependents. Of the other five nonsecure facilities, the four group homes provided long-term care with access to community resources and programs. Three of the four group homes served only females, and one served only males. Staff at the two group homes in Texas told us the majority of the females served were status offenders and/or dependents. The staff at the male-only group home and the female-only group home in Kentucky said the facilities served more delinquent offenders than status offenders. The staff at the four group homes also said their facilities were not overcrowded because youth were not accepted unless a bed was available. According to these staff, the average length of stay for female and male status offenders ranged from 182 days (about 6 months) to 274 days (about 9 months). The one nonsecure, nonresidential program for status offenders that we visited was the Practical and Cultural Education Center for Girls, located in Jacksonville, FL. The Center’s program, which has been nationally recognized, was not overcrowded because a female student was accepted only if classroom space was available. A waiting list was maintained to place females as space became available. According to officials of this program, the average length of attendance for females in the program was 243 days (about 8 months). At each of the 15 facilities visited, we obtained gender-specific information about counseling, educational, and medical services, that is, the services most relevant to the principal needs of status offenders. The results of our visits are summarized below and in table IV.2. Female and male status offenders did not routinely receive individual or group counseling at the four secure detention facilities. These facilities, however, could obtain counseling services from community resources if staff or resident youth (including female and male status offenders) requested such services. Juvenile court judges could also order the facilities to provide counseling. For example, professional staff at Florida’s Duval County Juvenile Detention Center told us that juvenile court judges sometimes ordered the detention center to undertake social assessments and provide counseling services to female and male status offenders placed in the facility. The detention center officials said that status offenders were transported to community health-care providers to receive these services. All six shelters, the four group homes, and the nonresidential program provided a variety of on-site counseling services to individuals, groups, or both. Female and male status offenders, however, were provided the same types and amounts of counseling within the co-educational facilities in which they were placed, according to officials at the facilities. Counseling topics could cover physical and sexual abuse, as well as substance abuse issues. Individual counseling ranged from 2 hours to 6 hours per week at the shelters, 1 hour to 4 hours per week at the group homes, and 5 hours per week at the nonresidential program. Group counseling ranged from 4 hours to 14 hours per week at the shelters, 3 hours to 5 hours per week at the group homes, and 1 hour per week at the nonresidential program. All of these facilities had arrangements with community health-care providers to supply additional counseling when needed. 
Some facility staff told us that female and male status offenders needed family counseling, but such service was difficult to maintain or provide. For example, two of the shelters offer family counseling, but the programs reportedly were poorly attended. Staff from two of the group homes said that they could not offer family counseling because court-ordered placements resulted in youth coming from all areas of the state. These officials explained that family counseling was impractical because the parents would have been unable to attend the sessions since they did not live close to the respective facility. Nineteen of the 34 juvenile justice officials and 6 of the 15 service providers we interviewed emphasized that family counseling is essential because female and male status offenders were running from some form of abuse or neglect at home. According to these officials, family counseling could help correct poor parenting skills, which is a contributor to abuse and neglect. Staff at one of the group homes we visited told us that limited resources were used most effectively only when the whole family was included in the treatment plan. According to the staff, a group-home facility could build a youth’s self-esteem and correct negative behaviors, but frequently the youth may be released from the group home and returned to the environment that caused the negative behaviors. The staff said that in these situations, where the family issues had not been addressed, the youth was likely to revert to negative behaviors. Staff at other facilities told us that parents and guardians did not always give female and male status offenders the support needed to address and solve problems. For example, an official at one shelter said they were unable to return a pregnant runaway to her home because her single-parent mother was using drugs and had just been evicted from their apartment. The 15 facilities we visited provided a variety of educational services. At three of the four secure detention facilities, status offenders generally attend on-site schools staffed by licensed teachers. The other secure facility, Kentucky’s Big Sandy Regional Detention Center, did not have an on-site school. A representative from the detention center told us resident youth are provided educational services when the juvenile court judges order the public schools to transport the youth to their classes. At the six (co-educational) shelters, we found no differences in the educational services provided to female and male status offenders. Status offenders at four of the six shelters either attended the local public schools or received daily or part-time instruction at the respective facility. These youth generally did not attend the local schools if they had dropped out of school, were studying for their general equivalency diploma, or did not reside in the county where the shelter was located. At the fifth shelter, all female and male status offenders attended an on-site school staffed by licensed teachers. At the sixth shelter, all female and male status offenders attended local schools. Female and male status offenders also received similar educational services at the four gender-specific group homes. For example, we visited one male-only group home and one female-only group home in Kentucky that were operated by the same organization. Both of these group homes sent the youth to county alternative schools. The two remaining group homes (each serving females only) sent resident youth to the local public schools. 
Education was a main component of the services offered status offenders at the nonresidential program we visited in Florida. Licensed teachers provided basic instruction, which enabled the youth to earn high-school credits that would aid them in returning to the public schools or obtaining a general equivalency diploma. Classes were conducted on the campus of the local community college, which gave the youth access to other educational services as well. According to service providers at the 15 facilities we visited, females and males were receiving needed medical services, either provided through arrangements by parents or guardians or from local community health-care providers. For example, pregnant females admitted to some facilities received prenatal care. Facility staff at one shelter told us that a male had been referred to a local dermatologist for severe acne. In addition, staff at several facilities reported that many of the females and males had to be referred to community dentists because the youth had never received dental care before arriving at the facilities. Also, females and males reportedly were given health screenings and/or physical examinations before or after admission. The health screenings included a list of questions to determine each youth’s immediate health needs. The physical examinations typically involved a nurse’ taking each youth’s temperature and blood pressure and checking for any signs of physical distress. All four of the secure detention facilities and one of the six shelters had on-site medical personnel. These personnel ranged from a nurse, who was available from 3 days to 7 days per week, to a doctor, who was available from 1 day to 5 days per week. Although these five facilities had on-site medical personnel to address minor medical problems or dispense prescription medication, some service providers at these facilities told us that their facilities were often overcrowded and needed additional medical staff. For example, at two secure detention facilities, the on-site nurse could not fully treat all of the youth on each day’s sick list and, thus, had to select patients. The remaining five shelters, four group homes, and the nonresidential facility did not have on-site medical services. Four of the 10 officials at these facilities told us that such resources were needed. For instance, one official explained that counselors were having to use their already limited counseling time to dispense medication and transport youth to doctors’ offices. Regular on-site individual and group counseling were not provided. Youth who displayed suicidal tendencies or requested counseling services were referred to community health-care providers. Youth attended an on-site school, which had six classrooms and eight licensed teachers provided by the county. Daily co-educational classes followed a basic curriculum that included language arts, mathematics, science, and social studies. The center also provided drug education and general equivalency diploma preparation. On-site medical services included a nurse, available 5 days a week, and a doctor, available 2 days a week. Youth were given a physical within 3 days after admittance to the facility. The most common health problems were colds and sore throats. Regular on-site individual and group counseling were not provided. Youth who displayed suicidal tendencies or requested counseling services were referred to community health-care providers. 
Youth attended the on-site school, which had one classroom and one licensed teacher provided by the county. Daily co-educational classes followed a basic curriculum. The center also provided remedial education. Classes could be canceled when the center was overcrowded. On-site medical services included a nurse, available 5 days a week, and a doctor, available 5 days a week. Youth were given a physical within 3 days after admittance to the facility. The most common health problems were colds and sexually transmitted diseases. Regular on-site individual and group counseling were not provided. Youth who displayed suicidal tendencies or requested counseling services were referred to community health-care providers. The center did not have a school. Some youth attended the local public school when the juvenile court judges ordered the schools to transport the youth to their classes. On-site medical services included a nurse, available 5 days a week, and a doctor, available 1 day a week. Youth were given a physical 7 days after admittance to the facility. The most common health problems were headaches. (continued) Individual and group counseling were not regularly provided unless the youth displayed suicidal tendencies or requested the services. The facility had one counselor on staff to meet these requests. Youth attended the on-site school, which had six classrooms and six teachers provided by the county. Daily classes followed a basic curriculum. The center also provided independent-living skills instruction and remedial education. Classes could be canceled when the center was overcrowded. On-site medical services included a nurse, available 7 days a week, and a doctor, available 3 days a week. Youth were given a physical 7 days after admittance to the facility. The most common health problems involved sexually transmitted diseases. Youth received approximately 3 hours of individual counseling and 4 hours to 5 hours of group counseling each week. Family counseling was provided on a voluntary basis. The counselor-to- resident ratio was 1 to 10. The shelter did not have an on-site school. Youth attended the local public schools or an alternative school operated by the county. The shelter provided independent-living skills instruction and health education. The shelter did not have on-site medical services. Parents or guardians were responsible for providing any needed medical services. The most common health problem was asthma. Youth received approximately 3 hours of individual counseling and 7 hours of group counseling each week. Family counseling was provided on a voluntary basis. The counselor-to- resident ratio was 1 to 8. Youth enrolled in the local public schools continued to attend their regular classes. Youth who were not enrolled in the local schools attended an on-site life skills school that was taught by a retired teacher. The life skills school included remedial instruction, independent-living skills instruction, and health education. The shelter did not have on-site medical services. Parents or guardians were responsible for providing any needed medical services. The most common health problems were colds, sinus infections, and lice. (continued) Youth received approximately 2 hours of individual counseling a week. Group counseling was not provided. The counselor-to- resident ratio was 1 to 7. Youth residing in the county attended local public schools. A teacher conducted on-site remedial instruction 4 hours a week for out-of-town youth or youth who had dropped out of school. 
The shelter also offered independent-living skills instruction and general equivalency diploma preparation. On-site medical services included a nurse, available 3 days a week, and a doctor, available 1 day a week. Youth were given a physical within 3 days after admittance to the facility. The most common health problems were allergies and colds. Youth received approximately 4 hours of individual counseling and 4 hours of group counseling each week. The counselor-to- resident ratio was 1 to 2. Youth either attended the local public schools or received tutoring at the facility. A licensed teacher provided instruction every-other-day to youth not attending the local public schools. Facility staff provided instruction on the days the tutor was not available. The shelter also provided independent-living skills instruction, health education, and parenting classes. The shelter did not have on-site medical services. Parents or guardians were responsible for providing any needed medical care. The most common health problem for both females and males was hepatitis B. The females commonly needed prenatal care or treatment for sexually transmitted diseases. The males commonly needed treatment for colds or dental problems. (continued) Youth received 6 hours of individual counseling and 4 hours of group counseling each week. The counselor-to- resident ratio was 1 to 2. Youth who resided in the county attended the local public schools. A licensed teacher provided instruction 4 hours a day, 3 days a week to youth not enrolled in the local public schools. The shelter also provided independent-living skills instruction, health education, and parenting classes. The shelter did not have on-site medical services. The residents’ health-care needs were met by local health-care providers, including a hospital, women’s clinic, and pharmacy. Females commonly requested gynecological services. Youth received 6 hours of individual counseling and 14 hours of group counseling each week. The counselor-to- resident ratio was 1 to 8. Youth attended the on-site school, which had two classrooms and two licensed teachers. Daily co-educational classes followed a basic curriculum. The shelter also provided independent-living skills instruction, drug and health education, and parenting classes. The shelter did not have on-site medical services. Youth received a physical within 2 days after arriving at the county’s juvenile detention center, which also provided any additional medical care. The most common health problems were asthma, lice, scabies, and sexually transmitted diseases. (continued) Youth received 1 hour of individual counseling and 3 hours of group counseling each week. The counselor-to- resident ratio was 1 to 8. The shelter did not have an on-site school. Youth attended an off-site alternative school operated by the state and the local public school district. Daily co-educational classes followed a basic curriculum, and most of the classes were remedial. The alternative school also provided counseling, health education, and recreational activities. The home did not have on-site medical services. Youth received a physical from community health-care providers within 7 days after admittance to the facility. The most common health problems were severe acne and dental problems. Youth received 4 hours of individual counseling and 3 hours to 5 hours of group counseling each week. The counselor-to- resident ratio was 1 to 4. The group home did not have an on-site school. 
Youth attended an off-site alternative school operated by the state and local public school district. Daily classes followed a basic curriculum, and most of the classes were remedial. The alternative school also provided counseling, health education, and recreational activities. The home did not have on-site medical services. Physical and general medical care were obtained from community health-care providers. The most common health problems were colds, dental problems, sexually transmitted diseases, and urinary tract infections. (continued) Youth received at least 1 hour of individual counseling and 3 hours of group counseling each week. The counselor-to- resident ratio was 1 to 5. Youth attended the local public schools or an on-site alternative school. The alternative school, which followed a basic curriculum, had one licensed teacher provided by the local school district. The home did not have on-site medical services. Physicals and general medical care were obtained from community health-care providers. The most commonly requested treatment needs were dental and gynecological services. Youth received 2 hours of individual counseling each week. No formal group counseling was provided. The counselor-to- resident ratio was 1 to 4. The group home did not have an on-site school. Youth attended the local public schools 5 days a week. The home did not have on-site medical services. Youth had to receive a physical prior to admission and general medical care from community health-care providers. The most common health problem was sprained ankles. (continued) Each youth was assigned an adviser who spent approximately 5 hours a week discussing personal and academic issues and goals. The adviser-to- student ratio was 1 to 10. In addition, a therapist provided each girl with at least 1 hour of counseling (individual, group, or family) each week. The Center is an alternative school that helps girls obtain high school credits or their general equivalency diplomas. The program had seven teachers and seven classrooms. The daily classes followed a basic curriculum, and some of the classes were remedial. The program also offered independent-living skills instruction, drug and health education, and parenting classes. The home did not have on-site medical services. Parents, guardians, or the students themselves were responsible for obtaining any needed medical services. Prenatal care and gynecological services were obtained from community health-care providers with the permission of the parents or guardians. The most common health problem was asthma. Danny R. Burton, Regional Management Representative Teresa R. Russell, Evaluator-in-Charge Christina M. Nicoloff, Senior Evaluator Donna B. Svoboda, Evaluator Frederick T. Lyles, Jr., Evaluator Virginia B. Dandy, Technical Information Specialist The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. 
To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a legislative requirement, GAO reviewed whether gender bias existed in state juvenile justice systems between 1986 and 1991. GAO found that: (1) there was minimal gender bias in state juvenile justice systems during that period; (2) 40 percent of the 500,620 juvenile status-offender cases from 1986 to 1991 involved females, and females and males had similar probabilities of being detained, adjudicated, or placed for a status offense; (3) the offenders' prior offense history and age generally affected the judicial outcomes; (4) although there were few gender-based differences in the availability of counseling, educational, and medical services for females and males, the type and extent of such services varied by facility; (5) females were sometimes given admission physicals and additional access to health care services that were not applicable to males; (6) county probation officers believed that there were no significant differences in the way females and males with similar status-offense histories were treated within their juvenile justice systems; (7) juvenile probation officers reported that treatment options were equally available for detained female and male status offenders and that more facilities were needed for both males and females; (8) GAO could not determine whether there was a disproportionate number of facilities for males; (9) local officials believed that more facilities and early intervention services were needed for status offenders of both sexes; and (10) there were mixed views about whether the needs of status offenders were better met by co-educational or single-gender facilities.
The Immigration Reform and Control Act of 1986 created the Visa Waiver Program as a pilot program. It was initially envisioned as an immigration control and economic promotion program, according to State. Participating countries were selected because their citizens had a demonstrated pattern of compliance with U.S. immigration laws, and the governments of these countries granted reciprocal visa-free travel to U.S. citizens. In 2000, the program became permanent under the Visa Waiver Permanent Program Act. In 2002, we reported on the legislative requirements to which countries must adhere before they are eligible for inclusion in the Visa Waiver Program. In general, these are the requirements: A low nonimmigrant visa refusal rate. To qualify for visa waiver status, a country must maintain a refusal rate of less than 3 percent for its citizens who apply for business and tourism visas. A machine-readable passport program. The country must certify that it issues machine-readable passports to its citizens. As of June 26, 2005, all travelers are required to have a machine-readable passport to enter the United States under this program. Reciprocity. The country must offer visa-free travel for U.S. citizens. Figure 1 shows the number of foreign nationals admitted to the United States under the program in recent years (see app. III for more detailed admissions statistics). Persons entering the United States under the Visa Waiver Program must have a valid passport issued by the participating country and be a national of that country; be seeking entry for 90 days or less as a temporary visitor for business or pleasure; have been determined by CBP at the U.S. port of entry to represent no threat to the welfare, health, safety, or security of the United States; have complied with conditions of any previous admission under the program (for example, individuals must have stayed in the United States for 90 days or less during prior visa waiver visits); if entering by air or sea, possess a round-trip transportation ticket issued by a carrier that has signed an agreement with the U.S. government to participate in the program, and must have arrived in the United States aboard such a carrier; and if entering by land, have proof of financial solvency and a domicile abroad to which they intend to return. Following the events of September 11, 2001, Congress passed additional laws to strengthen border security policies and procedures, and DHS and State instituted other policy changes that have affected a country’s qualifications for participating in the Visa Waiver Program. For example, all passports issued after October 26, 2005, must contain a digital photograph printed in the document; passports issued to visa waiver travelers after October 26, 2006, must be electronic (e-passports). E-passports aim to enhance border security by making it more difficult to misuse the passport to gain entry into the United States. Travelers with passports issued after the deadline that do not meet these requirements must obtain a visa from a U.S. embassy or consulate overseas before departing for the United States. In addition, the May 2002 Enhanced Border Security and Visa Entry Reform Act required that participating countries certify that the theft of their blank passports is reported to the U.S. government in a timely manner. Moreover, the act increased the frequency—from once every 5 years to once every 2 years—of mandated assessments of the effect of each country’s continued participation in the Visa Waiver Program on U.S.
security, law enforcement, and immigration interests. The Visa Waiver Program has many benefits, including facilitating international travel for millions of foreign citizens seeking to visit the United States each year, creating substantial economic benefits to the United States, and allowing State to allocate resources to visa-issuing posts in countries with higher-risk applicant pools. However, there are inherent security and law enforcement risks in the program that pose challenges to the United States. The Visa Waiver Program was created to facilitate international travel without jeopardizing the welfare or security of the United States, according to the program’s legislative history. In fact, visa waiver travelers have represented about one-half of all nonimmigrant admissions to the United States in recent years, as demonstrated by figure 2 below. According to economic and commercial officers at several of the U.S. embassies we visited, visa-free travel to the United States boosts international business travel and tourism, as well as airline revenues, and creates substantial economic benefits to the United States. In its report accompanying the 2000 bill to make the program permanent, the House Committee on the Judiciary acknowledged the program’s importance to the U.S. travel and tourism industry, and the benefit it provides to American citizens by allowing reciprocal visa-free travel to visa waiver countries. As we reported in 2002, any decision to eliminate the program could discourage some business and tourism in the United States. In addition, visa waiver countries could begin requiring visas for U.S. citizens traveling to the 27 participating countries for temporary business or tourism purposes, which would impose a burden of additional cost and time on U.S. travelers to these countries. Moreover, the program allows State to allocate its resources to visa- issuing posts in countries with higher-risk applicant pools. In 2002, we reported that eliminating the program would increase State’s resource requirements. Specifically, if the program were eliminated, we estimated that State’s initial costs at that time to process the additional workload would likely range between $739 million and $1.28 billion and that annual recurring costs would likely range between $522 million and $810 million. For example, millions of visa waiver travelers who have benefited from visa-free travel would need to obtain a visa to travel to the United States if the program did not exist. Furthermore, we reported that U.S. officials, including those from State as well as from some law enforcement agencies, said that eliminating the Visa Waiver Program could have negative implications for U.S. relations with governments of participating countries and could impair their cooperation in efforts to combat terrorism. The Visa Waiver Program can pose risks to U.S. security, law enforcement, and immigration interests because some foreign citizens may exploit the program to enter the United States. In particular, visa waiver travelers are not subject to the same degree of screening as those travelers who must first obtain a visa before arriving in the United States. Furthermore, lost and stolen passports from visa waiver countries could be used by terrorists, criminals, and immigration law violators to gain entry into the United States. Since September 11, 2001, the visa issuance process has taken on greater significance as an antiterrorism tool, as we have previously reported. 
Those travelers who must apply for visas before traveling to the United States receive two levels of screening before entering the country (see fig. 3). However, visa waiver travelers are first screened in person by a CBP inspector once they arrive at the U.S. port of entry, perhaps after having already boarded an international flight bound for the United States with a fraudulent travel document. For visa waiver travelers, CBP primary inspectors observe the applicant, examine his or her passport, collect the applicant’s fingerprints as part of the U.S. Visitor and Immigrant Status Indicator Technology program (US-VISIT), and check his or her name against automated databases and watch lists, which contain information regarding the admissibility of aliens, including known terrorists, criminals, and immigration law violators. However, according to the DHS OIG, primary border inspectors are disadvantaged when screening Visa Waiver Program travelers because they may not know the alien’s language or local fraud trends in the alien’s home country, nor have the time to conduct an extensive interview. In contrast, non-visa-waiver travelers, who must obtain a visa from a U.S. embassy or consulate, undergo an interview by consular officials overseas, who conduct a rigorous screening process when deciding to approve or deny a visa. As we have previously reported, State has taken a number of actions since 2002 to strengthen the visa issuance process as a border security tool. Moreover, consular officers have more time to interview applicants and examine the authenticity of their passports, and may also speak the visa applicant’s native language, according to consular officials. Inadmissible travelers who need visas to enter the United States may attempt to acquire a passport from a Visa Waiver Program country to avoid the visa screening process. One of the Visa Waiver Program Oversight Unit’s primary concerns is the potential exploitation by terrorists, immigration law violators, and other criminals of a visa waiver country’s lost or stolen passports. DHS intelligence analysts, law enforcement officials, and forensic document experts all acknowledge that misuse of lost and stolen passports is the greatest security problem posed by the Visa Waiver Program. Lost and stolen passports from visa waiver countries are highly prized travel documents, according to the Secretary General of Interpol. Moreover, Visa Waiver Program countries that do not consistently report the losses or thefts of their citizens’ passports, or of blank passports, put the United States at greater risk of allowing inadmissible travelers to enter the country. Fraudulent passports from Visa Waiver Program countries have been used illegally by travelers seeking to disguise their true identities or nationalities when attempting to enter the United States. For example, from January through June 2005, DHS reported that it confiscated, at U.S. ports of entry, 298 fraudulent or altered passports issued by Visa Waiver Program countries that travelers were attempting to use for admission into the United States (see table 1). Two more recent cases demonstrate continued attempts to enter the United States with fraudulent passports issued by visa waiver countries: In December 2005, a Pakistani citizen attempted to enter the United States under the program with a counterfeit United Kingdom passport that she had purchased. During secondary inspection, the CBP officer detected flaws in the passport. 
In March 2006, an Albanian citizen attempted to enter the United States using a Belgian passport that he had purchased. The traveler confessed to this upon questioning by CBP officers during secondary inspection. In 2004, the DHS OIG reported that aliens applying for admission to the United States using lost or stolen passports have little reason to fear being caught. The OIG also reported that a lack of training hampered border inspectors' ability to detect passport fraud among Visa Waiver Program travelers, and recommended that CBP inspectors receive additional training in fraudulent document detection. At that time, the 12-week training course for new inspectors at the Federal Law Enforcement Training Center devoted 1 day to passport fraud, according to the OIG. Currently, CBP dedicates 16 hours out of the 16-week basic training program to fraudulent document detection training for new border inspectors, and provides additional courses for inspectors throughout their assignments at ports of entry. Nevertheless, training officials said that fraudulent and counterfeit passports are extremely difficult to detect, even for the most experienced border inspectors—which makes it imperative that lost and stolen passports are reported on a timely basis. Although DHS has intercepted some travelers with fraudulent passports at U.S. ports of entry, DHS officials acknowledged that an undetermined number of inadmissible aliens may have entered the United States using a lost or stolen passport from a visa waiver country. According to State, these aliens may have been inadmissible because they were immigration law violators, criminals, or terrorists. Following are several examples: In July 2005, two aliens successfully entered the United States using lost or stolen Austrian passports. DHS was not notified that these passports had been lost or stolen prior to this date; the aliens were admitted, and there is no record of their departure, according to CBP. In October 2005, CBP referred this case to DHS Immigration and Customs Enforcement for further action. In June 2005, CBP inspectors admitted into the United States two aliens using Italian passports that were from a batch of stolen passports. CBP was notified that this batch was stolen; however, the aliens altered the passport numbers to avoid detection by CBP officers. DHS has no record that these individuals departed the United States. Also in June 2005, three aliens traveling together—all using fraudulent Italian passports—were interviewed at primary inspection. CBP referred one alien, an Albanian citizen, to secondary inspection because she was reluctant to answer the inspector's questions. In secondary inspection, CBP determined that her passport had been altered. CBP admitted the other two aliens, but subsequently determined that their passports were part of the batch of stolen Italian passports cited in the previous example. In July 2004, DHS created the Visa Waiver Program Oversight Unit within the Office of International Enforcement (OIE) to monitor the Visa Waiver Program. Its mission is to oversee Visa Waiver Program activities and monitor countries' adherence to the program's statutory requirements, ensuring that the United States is protected from those who wish to do it harm or violate its laws, including immigration laws. In 2004, DHS reviewed the law enforcement and security risks posed by the continued participation of 25 of the 27 countries in the program. However, we identified problems with the country review process by which DHS assesses these risks.
For example, DHS did not involve key interagency stakeholders in certain portions of the review process, and did not establish transparent protocols for the country assessments—including internal milestones or deadlines for completing the final country assessments, the goals of the site visits, and an explanation of the clearance process. Furthermore, OIE is unable to effectively monitor the immigration, law enforcement, and security risks posed by visa waiver countries on a continuing basis because of insufficient resources. In April 2004, the DHS OIG identified significant areas where DHS needed to strengthen and improve its management of the Visa Waiver Program. For example, the OIG found that it was unclear who was managing the program following the dissolution of the Immigration and Naturalization Service. In addition, the OIG found that a lack of funding, trained personnel, and other issues left DHS unable to comply with the mandated biennial country assessments. In response to these findings, DHS established OIE’s Visa Waiver Program Oversight Unit in July 2004, and named a director to manage the office. Since its establishment, DHS, and particularly OIE, has made strides to address concerns raised by the 2004 OIG review. Specifically, DHS has: conducted site visits in all 27 participating countries; completed comprehensive assessments of 25 participating countries, examining the effect of continued participation in the Visa Waiver Program on U.S. security and law enforcement interests, including the enforcement of immigration laws; identified risks in some of the countries and brought the concerns to the attention of host-country governments in five visa waiver countries; submitted a six-page report to Congress in November 2005 that summarized the findings from the 2004 assessments; and initiated assessments for the remaining two countries in August and September 2005. In addition, in September 2005, DHS and State organized a technical conference in Washington, D.C., with representatives from Visa Waiver Program countries, to discuss the passport requirements for visa waiver travelers, and the October 2005 and 2006 deadlines for digital photographs and e-passports, respectively. Together, these actions demonstrate significant improvements since the April 2004 OIG report. Despite these steps to strengthen and improve the management of the program, we identified several problems with the process DHS uses to assess the risks posed by each of the visa waiver countries’ continued participation in the program—namely the mandated biennial country assessment process. For the 2004 assessments, we found the following: some key stakeholders were excluded from the decision-making process; the reviews lacked clear criteria to make key judgments; there was inconsistent preparation for the in-country site visits for the reviews; and DHS and its interagency partners neither completed the 25 country assessments nor issued the summary report to Congress in a timely manner. OIE has acknowledged such weaknesses and has begun to make adjustments; however, concerns remain. We found that the review process lacked clear protocols, as key stakeholders were left out of the report development process. Specifically, after conducting the site visits and contributing to the reports on the site visits, DHS and the interagency working group did not seek input from all site visit team members while drafting and clearing the final country assessments and subsequent report to Congress. 
For example, DHS's forensic document analysts, who participated in the site visits in 2004, told us that they did not see, clear, or comment on the draft country assessments, despite repeated attempts to obtain copies of them. Thus, these analysts questioned the integrity of the process because they had not seen how their analyses were incorporated into the final assessments. Additionally, State's headquarters officers who cover diplomatic relations in Visa Waiver Program countries, as well as embassy officials in all of the posts we visited, stated that they were not asked to review or provide comments on the draft assessments, nor had they seen the final assessments. CBP officials also stated that they repeatedly requested copies of the country assessments and subsequent report to Congress, but did not receive them. According to State's Bureau of Consular Affairs, DHS did not seek feedback from U.S. embassies and State's regional bureaus on the draft site visit or individual country assessments. Because these assessments contained classified information, OIE officials told us that they were not broadly distributed in draft or final form. Nevertheless, without this information, key stakeholders could not be effective advocates for U.S. concerns. We found that, in assessing each country's participation in the program, DHS did not have clear criteria for determining at what point security concerns would trigger discussions with foreign governments and attempts to resolve those concerns. As previously mentioned, the DHS-led interagency working group identified five countries from its 2004 assessment with significant security concerns, and DHS, in coordination with State, discussed these concerns with government officials. Furthermore, U.S. embassies issued a formal diplomatic demarche to the five governments regarding the concerns in March 2005. However, while the working group also had concerns with a sixth country, it decided not to issue a demarche to this government. According to State, the working group determined that the concerns identified in this country were not directly related to the country's participation in the Visa Waiver Program. However, OIE officials and other working group members stated that they did not use any standard criteria in making this determination. State officials agreed that qualitative and/or quantitative criteria would be useful when making these determinations, although DHS stated that the criteria should be flexible. During our visit to the U.S. embassy in the sixth country, which was not issued a demarche, U.S. officials told us they were unaware that the working group had discussed security concerns in the context of the country assessment. While embassy officials had already been addressing these issues as part of their mission, they said that they likely would have seen greater progress in discussions with foreign government officials if all parties had known that there was a potential link between these security concerns and visa waiver requirements. The site visits associated with each country review were not always well-prepared and lacked a consistent approach, according to the site visit team members. Several team members representing different agencies stated that they did not receive adequate information and guidance prior to conducting the site visits and, thus, were not well-prepared to conduct the visits.
DHS did not brief or train the site visit team members prior to conducting the 2004 reviews, and many said that the goals of the in-country visits were not clear. One team member stated that the site visits were largely "fact-finding trips," as opposed to targeted analyses of law enforcement and security concerns. Moreover, prior to conducting the site visits, DHS sent each country a background questionnaire; however, OIE and team members stated that some countries did not provide responses to the questionnaire prior to the site visit, which would have been useful for preparation. Furthermore, senior U.S. officials in each of the embassies we visited stated that the goals and priorities of the 2004 DHS-led site visit teams were not clear to them. Consular officials at half of the posts we visited also said that the site visit teams arrived on short notice and did not give them adequate time to prepare. As a result, the teams may not have made the most efficient use of their time in-country, and may not have gathered their information on a consistent basis. DHS did not issue, in a timely manner, the summary report to Congress that generally described the overall findings from the 25 country assessments. Although DHS is mandated to conduct the country assessments every 2 years, Congress did not establish a deadline by which the assessments must be completed or the summary report issued. OIE, State, and Justice officials acknowledged that the assessments took too long to complete. The interagency teams conducted site visits as part of the country assessments from May through September 2004, and transmitted the final summary report to Congress more than 1 year later, in November 2005. The report to Congress was a six-page summary that did not include detailed descriptions of the law enforcement and security risks identified during the review process, which were discussed at length in the individual country assessments. According to interagency working group members, DHS did not establish internal milestones or deadlines for completing the final country assessments. OIE officials attributed the lengthiness of the assessment process to the multiple rounds of clearances for each of the 25 assessments and the summary report. While the country assessments were awaiting clearance, there were missed opportunities to capture more recent developments, and the final assessments contained dated information or were incomplete. For example, in May 2005, a post in one visa waiver country was notified that there had been a large-scale, high-profile theft of blank passports. While the U.S. government was aware of this theft, this information was not captured in that country's assessment as it was being cleared. Moreover, the teams collecting information about the visa waiver countries' risks in 2004 used, in some cases, information from 2 years prior; by the time the summary report was issued in November 2005, some of the data was over 3 years old. As a result of this lengthy process, the final report presented to Congress did not necessarily reflect the current law enforcement and security risks posed by each country, or the positive steps that countries had made to address these risks (see fig. 4). OIE officials acknowledged weaknesses in the 2004 reviews, and made some adjustments for the 2005 country assessments for Italy and Portugal, the two remaining countries.
For the 2005 reviews, DHS conducted a 1-day training seminar to explain the objectives of the visits to the site visit teams and to share information about the countries, including findings from prior country assessments. Additionally, the team members met prior to conducting the site visits, and reconvened upon returning to Washington, D.C., to ensure consensus on their report on the site visit. However, the 2005 country review process still lacked consistency and transparency. In particular, DHS has not finalized its operating procedures for site visits. The site visit teams traveled to the remaining two countries in August and September 2005; however, as of June 2006, DHS had neither updated the interagency working group team members on the status of the reviews of Italy and Portugal, nor provided them with a timeline for proceeding with the review. Furthermore, stakeholders continued to express concern about DHS's lack of communication about the process and the findings, and no changes have been made to the review process that would make the final report to Congress timely. Therefore, there are no assurances that the next biennial assessment round will proceed more quickly than the previous round. DHS cannot effectively monitor the law enforcement and security risks posed by visa waiver countries on a consistent, ongoing basis because it has not provided OIE with adequate staffing and resources. Furthermore, we found weaknesses in communication between DHS and overseas posts and other agencies. OIE is limited in its ability to achieve its mission because of insufficient staffing and funding. The office has numerous responsibilities, including conducting the mandated biennial country reviews; monitoring law enforcement, security, and immigration concerns in visa waiver countries on an ongoing basis; working with countries seeking to become members of the Visa Waiver Program; and briefing foreign government representatives from participating visa waiver countries, as well as those countries that are seeking admission into the program, on issues related to program membership. In 2004, the DHS OIG found that OIE's lack of resources directly undercut its ability to assess a security problem inherent in the program—lost and stolen passports. The office received funding to conduct the country reviews in 2004 and 2005; however, OIE officials indicated that a lack of funding and full-time staff has made it extremely difficult to conduct additional overseas fieldwork, as well as to track ongoing issues of concern in the 27 visa waiver countries—a key limitation in DHS's ability to assess and mitigate the program's risks. According to OIE officials, the unit developed a strategic plan to monitor the program, but has been unable to implement its plan with its current staffing. As of June 2006, the office was staffed with two full-time employees, as well as one temporary employee from another DHS component. Moreover, OIE does not have a separate budget, but must request funds (for example, to conduct travel related to the Visa Waiver Program) from the Office of Policy Development. In addition, program officials stated that they have paid for their own office supplies using their personal savings due to funding constraints. Without adequate resources, OIE is unable to monitor and assess participating countries' compliance with the Visa Waiver Program's statutory requirements. DHS has not clearly communicated its mission to stakeholders at overseas posts, nor identified points of contact within U.S.
embassies, so it can communicate directly with field officials positioned to monitor countries' compliance with Visa Waiver Program requirements and report on current events and issues of potential concern. In particular, within DHS's various components, we found that OIE is largely an unknown entity and, therefore, is unable to leverage the expertise of DHS officials overseas. Specifically, only 3 of the 15 DHS field officials with whom we spoke in the six visa waiver countries we visited were aware of the Visa Waiver Program Oversight Unit and its mission. A senior DHS representative at one post showed us that her organizational directory did not contain contact information for OIE. In addition, an official from the Immigration and Customs Enforcement's Office of International Affairs acknowledged that DHS needs a better communication plan for the Visa Waiver Program. He stated that DHS had not prioritized the workload for all its officials overseas, including their role in overseeing the Visa Waiver Program; he also told us that OIE had not yet articulated what information it needed, designated a mechanism to share that information, or gained agency-wide acceptance of procedures for monitoring the compliance of visa waiver countries. In fact, a senior DHS official in Washington, D.C., told us that he may find out about developments—either routine or emergent—in visa waiver countries by "happenstance." Without an outreach strategy, DHS is not able to leverage its existing resources at U.S. embassies in all visa waiver countries. Furthermore, key stakeholders, who are in a position to influence and monitor visa waiver countries' compliance with the program's requirements, were not informed of the major findings of the country assessments. In fact, at the time of our visits, ambassadors or deputy chiefs of mission in each of the six posts told us that they were not fully aware of the extent to which the country assessments discussed law enforcement and security concerns posed by the continued participation of the country in the program. The Deputy Chief of Mission at one post stated that without the appropriate information, such as that contained in the assessments, embassy officials could not be effective agents for the U.S. government with regard to these issues. Bureau of Consular Affairs officials in Washington, D.C., agreed that any concerns identified in the assessments should be brought to the attention of the embassy, so that the posts can address the concerns accordingly. Due to the lack of outreach and clear communication about its mission, OIE is limited in its ability to monitor the day-to-day law enforcement and security concerns posed by the Visa Waiver Program, and the U.S. government is limited in its ability to influence visa waiver countries' progress in meeting requirements. We also found gaps in interagency communication. According to OIE, State plays a significant role in conveying information relevant to the Visa Waiver Program to U.S. embassy officials and their host government counterparts. Therefore, it is important that State and DHS have clear lines of communication. For example, in October 2005, one government expressed willingness to share data on lost and stolen issued passports with the United States, and asked for technical specifications on how to do so. However, at the time of our February 2006 visit, the post in that country had not received direction from headquarters on how this passport information should be shared.
Moreover, OIE officials told us that they were unaware that this country was willing to share this data until we brought it to their attention in early March 2006. As a result, the United States missed opportunities to deter the fraudulent use of passports from this country, which in fact has the highest rate of misuse among all visa waiver countries, according to DHS. Additionally, a senior consular official in another participating country expressed frustration that DHS had not fully explained to embassy officials why visa waiver countries needed to report lost and stolen passport information directly to the United States and Interpol, which maintains a global database of lost and stolen travel documents. Several other senior consular officials also expressed the need for more information about OIE's mission and goals, as well as the desired role for overseas posts. DHS has taken some actions to mitigate the risks of the Visa Waiver Program, such as terminating the use of the German temporary passport for travel under the program. Since 2002, the law has required the timely reporting of passport thefts as a condition of continued participation in the Visa Waiver Program, but DHS has not established and communicated time frames and operating procedures to participating countries. In addition, DHS has sought to expand this requirement to include the reporting of data, to the United States and Interpol, on lost and stolen issued passports; however, participating countries are resisting these requirements, and DHS has not yet issued guidance on what information must be shared, with whom, and within what time frame. Furthermore, U.S. border inspectors are unable to automatically access Interpol's data on reported lost and stolen passports, which makes it more difficult to detect reported lost or stolen passports at U.S. ports of entry. As previously mentioned, during the 2004 assessment process, the working group identified security concerns in several participating countries, and DHS took actions to mitigate some of these risks. For example, DHS determined that several thousand blank German temporary passports had been lost or stolen, and that Germany had not reported some of this information to the United States. In March 2005, at the working group's request, the U.S. embassy in Berlin conveyed these concerns to the German government to seek a solution. In March 2006, DHS determined that sufficient progress had not been made to address the concern over German temporary passports, and, as of May 1, 2006, German temporary passport holders may no longer travel to the United States under the Visa Waiver Program; they must first obtain a visa. DHS has also made some progress in enforcing new passport security measures. For example, DHS has enforced an October 26, 2005, deadline requiring travelers under the Visa Waiver Program to have digital photographs in their passports. Specifically, Italian and French citizens with noncompliant passports issued after October 26, 2005, must first obtain a visa before traveling to the United States because these countries did not meet the deadline. Furthermore, as previously mentioned, by October 26, 2006, visa waiver travelers must have e-passports for travel under the program. E-passports aim to enhance the security of travel documents, making it more difficult for imposters or inadmissible aliens to misuse the passport to gain entry into the United States.
DHS and State officials told us that nearly all 27 participating countries report that they are on schedule to meet this deadline. According to US-VISIT, DHS will deploy machines to read the e-passports at 33 airports by the October 2006 deadline, covering about 98 percent of all visa waiver travelers. While US-VISIT intends to deploy e-passport readers to all ports of entry in the future, it has not articulated clear time frames for doing so. Until it does, DHS will not be able to read the information on the chips embedded in the passports at the remaining ports of entry. A key risk in the Visa Waiver Program is stolen blank passports from visa waiver countries, because detecting these passports at U.S. ports of entry is extremely difficult, according to DHS. Some thefts of blank passports have not been reported to the United States until years after the fact, according to DHS intelligence reports. For example, in 2004, a visa waiver country reported the theft of nearly 300 blank passports to the United States—more than 9 years after the theft occurred. In addition, in 2004, a visa waiver country reported the theft of 270 blank passports more than 8 months after the theft occurred. The 2002 Enhanced Border Security and Visa Entry Reform Act provides that the Secretary of Homeland Security must terminate a country from the Visa Waiver Program if he and the Secretary of State jointly determine that the country is not reporting the theft of its blank passports to the United States on a timely basis. DHS and State have chosen not to terminate from the program countries that have failed to report these incidents. DHS officials told us that the inherent political, economic, and diplomatic implications associated with removing a country from the Visa Waiver Program make it difficult to enforce the statutory requirement in the broadest terms. Moreover, DHS has not established time frames or operating procedures to enforce this requirement. In April 2004, the DHS OIG recommended that the then-Under Secretary for Border and Transportation Security, in coordination with State, develop standard operating procedures for the routine and proactive collection of stolen passport information from host governments for dissemination to U.S. agencies. While the statute requires visa waiver countries to certify that they report information on the theft of their blank passports to the United States on a timely basis, as of June 2006, DHS has not defined what constitutes "timely." Moreover, the United States lacks a centralized mechanism for foreign governments to report all stolen passports. In particular, DHS has not defined to whom in the U.S. government participating countries should report this information. In addition to blank passports, lost or stolen issued passports also pose a risk because they can be altered. In June 2005, DHS issued guidance to participating Visa Waiver Program countries requiring that they certify their intent to report lost and stolen passport data on issued passports by August 2005. However, DHS has not yet issued guidance on what information must be shared, with whom, and within what time frame. Some visa waiver countries have not yet agreed to provide this information to the United States, due in part to concerns over the privacy of their citizens' biographical information.
In addition, several consular officials expressed confusion about the current and impending requirements for sharing this data, and felt they were unable to adequately explain the requirements to their foreign counterparts. In June 2005, the U.S. government also announced its intention to require visa waiver countries to certify their intent to report information on both lost and stolen blank and issued passports to Interpol. In 2002, Interpol developed a database of lost and stolen travel documents to which its member countries may contribute on a voluntary basis. The United States has endorsed Interpol's database, and, since May 2004, State has been contributing U.S. data from lost and stolen blank passports to it. In 2005, State reported to Congress that it also instructed all U.S. embassies and consulates to take every opportunity to persuade host governments to share this data with Interpol. While most visa waiver countries use and contribute to Interpol's database, four do not. Moreover, some countries that do contribute do not do so on a regular basis, according to Interpol officials. Interpol stated that it continues to encourage countries to send this information more systematically. In addition, participating countries have expressed concerns about reporting this information, citing privacy issues; however, Interpol's database on lost and stolen travel documents does not include the passport bearers' biographical information, such as name and date of birth. According to the Secretary General of Interpol, in light of the high value associated with passports from visa waiver countries, it is a priority for his agency to encourage these countries to contribute to the database. Though information from Interpol's database could potentially stop inadmissible travelers from entering the United States, CBP's border inspectors do not have automatic access to the database at primary inspection at U.S. ports of entry—the first line of defense against those who might exploit the Visa Waiver Program to enter the United States. The inspection process at U.S. ports of entry can include two stages—a primary and a secondary inspection. If, during the primary inspection, the inspector suspects that the traveler is inadmissible, either because of a fraudulent passport or for another reason, the inspector refers the traveler to secondary inspection. At secondary inspection, border inspectors can contact officials at the National Targeting Center, who can query Interpol's stolen-travel-document database to determine if the traveler is attempting to enter the United States with a passport that has been previously reported lost or stolen but is not yet on CBP's watch list (see fig. 5). However, Interpol's data on lost and stolen travel documents is not automatically accessible to border inspectors at primary inspection—one reason why it is not currently an effective border screening tool, according to DHS, State, and Justice officials. According to the Secretary General of Interpol, until DHS can automatically query Interpol's data, the United States will not have an effective screening tool for checking passports. According to Interpol officials, the United States is working actively with Interpol on a potential pilot project that would allow for an automatic query of aliens' passport data against Interpol's database at primary inspection at U.S. ports of entry. However, DHS has not yet finalized a plan to do so. In December 2005, Interpol began a similar program at all border stations in Switzerland.
Through this program, Swiss border agents query Interpol's database as soon as travelers appear at a border station. According to the Secretary General of Interpol, in a 2-month period, Switzerland encountered 282 potential instances of travelers attempting to enter the country with a previously reported lost or stolen passport. In addition, during this time frame, Switzerland queried Interpol's database more than all other member countries combined because it was the only country accessing the database automatically. In commenting on a draft of this report, Justice's Interpol-U.S. National Central Bureau stated that from April through June 2006, Justice, CBP's National Targeting Center, and Interpol compared records from certain passengers arriving in the United States against Interpol's lost and stolen travel document database. According to the National Central Bureau, the test's objectives were to simulate an automatic query of passenger records against Interpol's database and to analyze discrepancies between that database and U.S. watch lists. The National Central Bureau stated that, by early August 2006, it and the National Targeting Center will finalize a report on this test to help facilitate a pilot program for real-time, systematic queries of passenger records against Interpol's data at U.S. ports of entry. The Visa Waiver Program aims to facilitate international travel for millions of people each year and promote the effective use of government resources. Effective oversight of the program entails balancing these benefits against the program's potential risks. To find this balance, the U.S. government needs to fully identify the vulnerabilities posed by visa waiver travelers, and be in a position to mitigate them. However, we found weaknesses in the process by which the U.S. government assesses these risks, and DHS's Visa Waiver Program Oversight Unit is not able to manage the program with its current resource levels. Moreover, DHS has not communicated clear reporting requirements for lost and stolen passports—a key risk—nor can it automatically access all stolen passport information when it is most needed—namely, at the primary inspection point at U.S. ports of entry. It is imperative that DHS commit to strengthening its ability to promptly identify and mitigate risks to ensure that the Visa Waiver Program does not jeopardize U.S. security interests. To improve the U.S. government's process for assessing risks in the Visa Waiver Program, we recommend that the Secretary of Homeland Security, in coordination with State and other appropriate agencies, take the following five actions:

Provide additional resources to strengthen OIE's visa waiver monitoring unit.

Finalize clear, consistent, and transparent protocols for the biennial country assessments and provide these protocols to stakeholders at relevant agencies at headquarters and overseas. These protocols should provide timelines for the entire assessment process, including the role of a site visit, an explanation of the clearance process, and deadlines for completion.

Create real-time monitoring arrangements, including the identification of visa-waiver points of contact at U.S. embassies, for all 27 participating countries; and establish protocols, in coordination with appropriate headquarters offices, for direct communication between points of contact at overseas posts and OIE's Visa Waiver Program Oversight Unit.
Require periodic updates from points of contact at posts in countries where there are law enforcement or security concerns relevant to the Visa Waiver Program.

Provide complete copies of the most recent country assessments to relevant stakeholders in headquarters and overseas posts.

To improve the U.S. government's process for mitigating the risks in the Visa Waiver Program, we recommend that the Secretary of Homeland Security, in coordination with State and other appropriate agencies, take the following three actions:

Require that all visa waiver countries provide the United States and Interpol with non-biographical data from lost or stolen issued passports, as well as from blank passports.

Develop and communicate clear standard operating procedures for the reporting of lost and stolen blank and issued passport data, including a definition of timely reporting and to whom in the U.S. government countries should report.

Develop and implement a plan to make Interpol's stolen travel document database automatically available during primary inspection at U.S. ports of entry.

The May 2002 Enhanced Border Security and Visa Entry Reform Act mandated DHS to conduct country assessments of the effect on U.S. law enforcement and security interests of each country's continued participation in the Visa Waiver Program at least every 2 years. Given the lengthy time it took for DHS to issue the November 2005 summary report to Congress, and to ensure future reports contain timely information when issued, Congress should consider establishing a biennial deadline by which DHS must complete the country assessments and report to Congress. DHS, State, and Interpol provided written comments on a draft of this report (see apps. IV, V, and VI). DHS, State, Interpol, and Justice's Interpol-U.S. National Central Bureau provided technical comments, which we incorporated into the report, as appropriate. DHS either agreed with, or stated that it is considering, all of our recommendations. Regarding our matter for congressional consideration, DHS did not appear to support the establishment of a deadline for the biennial report to Congress. Instead, DHS suggested that Congress should require continuous and ongoing evaluation. DHS stated that, with continuous review, it would be able to constantly evaluate U.S. interests and report to Congress on the current 2-year reporting cycle on targeted issues of concern, rather than providing a historical evaluation. We agree that continuous and ongoing evaluation is necessary, and that is why we recommended that DHS create real-time monitoring arrangements and provide additional resources to the Visa Waiver Program Oversight Unit to achieve this goal. Regarding the mandated biennial country assessments, we believe that they can serve a useful purpose if they are completed in a timely fashion. In addition, DHS provided information on actions that it has taken to improve the management of the biennial country assessment process. State agreed that efforts by U.S. embassies and consulates to monitor and assess the Visa Waiver Program would benefit from enhanced communication to and from DHS, and endorsed our recommendation that DHS provide more information to these stakeholders on Visa Waiver Program issues. In addition, State acknowledged the risk of misuse of previously lost or stolen passports, particularly by persons who are not eligible for a visa.
With regard to timely reporting on lost and stolen passports, State welcomed our recommendation calling for clear guidelines and reporting mechanisms to achieve this goal. Interpol provided information about its lost and stolen travel document database and tools that it has developed to allow law enforcement officers to instantly check this database at airports and other border entry points. In addition, Interpol noted that many developing countries lack the resources necessary to implement these tools. Therefore, Interpol urged the United States and other countries to provide funding to facilitate access for all countries to its lost and stolen travel document database. It also provided its views on the risks associated with lost and stolen passports. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will provide copies of this report to the Secretaries of State and Homeland Security, as well as the Attorney General and the Secretary General of Interpol. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII.

To describe the benefits of the Visa Waiver Program, we reviewed relevant documentation, including Office of Inspector General (OIG) reports and our 2002 report on the implications of eliminating the program. We also interviewed political, economic, consular, commercial, and law enforcement officials at U.S. embassies overseas to discuss the advantages of the program for U.S. business and tourism. To describe the risks in the Visa Waiver Program, we examined documentation on the screening process at U.S. ports of entry for travelers from Visa Waiver Program countries. In addition, we analyzed data from Customs and Border Protection (CBP) on interceptions of fraudulent, lost, or stolen passports from participating countries. We also observed fraudulent document detection training of CBP agents at the Federal Law Enforcement Training Center and spoke with training officials regarding the difficulty of detecting fraudulent passports. We also interviewed officials from the Department of Homeland Security's (DHS) National Targeting Center, Intelligence and Analysis Directorate, and Forensic Document Laboratory on the risks posed by Visa Waiver Program travelers. In particular, we analyzed data on the number of nonimmigrants who entered the United States under the Visa Waiver Program from fiscal years 2002 through 2004. While we did not fully assess the reliability of these statistics because we used them for background purposes, we conducted interviews and obtained other corroborating evidence that confirmed the importance of the Visa Waiver Program in terms of the broad numbers of admissions to the United States in recent years. Regarding DHS's data on fraudulent passports, DHS reported that these data are limited to those cases in which the fraudulent document from the Visa Waiver Program country was intercepted at a port of entry, and do not include instances when fraudulent passports were used to enter the United States but were not detected.
While we could not fully assess the reliability of the data, we found them sufficiently reliable to establish that hundreds of fraudulent documents from a broad range of Visa Waiver Program countries were intercepted in 2005. In addition, the number of documents that DHS reports by country is not necessarily indicative of the extent of the problem in that country, as these data only cover instances when fraudulent documents were intercepted. To evaluate the U.S. government's efforts to assess and mitigate these risks, we analyzed the laws governing the program, relevant agency operating procedures, and DHS OIG reports. We also examined 15 of the 25 completed reports assessing the participation of Visa Waiver Program countries. As of June 2006, the remaining 10 assessments were pending classification review by DHS's Office of International Enforcement. These assessments contained, among other things, detailed analyses of an individual country's political, social, and economic conditions; security over its passport and national identity documents; immigration and nationality laws, law enforcement policies and practices, and other matters relevant to law enforcement, immigration, and national security; patterns of passport fraud, visa fraud, and visa abuse; assessments of terrorism by the country's nationals, within or outside the country; and evaluations of the impact of the country's participation in the Visa Waiver Program on U.S. national security and law enforcement. To discuss these assessments and actions taken in response to their findings, we met with officials from several DHS component agencies and offices, the Department of State's Bureau of Consular Affairs and its Europe and Eurasia Bureau, and the International Criminal Police Organization (Interpol) in Lyon, France. In addition, we met with officials from the Department of Justice's U.S. National Central Bureau in Washington, D.C., which facilitates international law enforcement cooperation among the United States and Interpol and its other member countries. We also spoke with U.S. Embassy officials in six Visa Waiver Program countries, as well as foreign government officials in three of these countries. During these visits, we observed visa operations and interviewed embassy management, consular staff, and representatives from law enforcement agencies regarding their roles and responsibilities in overseeing the Visa Waiver Program. We conducted our evaluation from September 2005 through June 2006 in accordance with generally accepted government auditing standards.

1. We understand that DHS's organizational structure changed during the 2004 review process. To avoid confusion regarding the DHS units that had some involvement in this process, we have replaced references to the Office of International Enforcement (OIE) with DHS, as appropriate.

2. Our review focused on the 2004 biennial review process for 25 of the 27 Visa Waiver Program countries. We agree that DHS has taken some steps to improve the 2005 process for Italy and Portugal, whose reviews are still in process, and we discuss these improvements in our report. However, we disagree that DHS has corrected most of the problems associated with the 2004 review process. As we note in our report, as of June 2006, DHS had neither updated the interagency working group team members on the status of the reviews of Italy and Portugal, nor provided them with a timeline for proceeding with the review.
Furthermore, stakeholders continued to express concern about DHS's lack of communication about the process and the findings. Therefore, we recommended that DHS finalize clear, consistent, and transparent protocols for biennial country assessments and provide these protocols to stakeholders at relevant agencies at headquarters and overseas.

3. We did not intend to suggest that the evaluation of U.S. security and law enforcement interests needed to be conducted or finalized during the in-country site visits. Our point is that important events may take place while the country assessments are in the clearance process. We believe that DHS should update the country assessments to reflect these events, such as large-scale thefts of blank passports like the May 2005 theft that we noted in our report, to ensure that Congress has a comprehensive analysis of the current law enforcement and security risks posed by each country.

4. We agree that DHS cannot continue to incorporate data indefinitely into the country assessments. However, as we reported, the teams collecting information about the visa waiver countries' risks in 2004 used, in some cases, information from 2 years prior; by the time the summary report was issued in November 2005, some of the data was more than 3 years old. Indeed, as DHS noted elsewhere, the 2004 country assessments provided a "rearview mirror" and "backward-looking" evaluation. Thus, the assessments may not necessarily have contained the best information available at the time the assessments were finalized. Given the lengthy time it took for DHS to issue the November 2005 summary report to Congress, and to ensure future reports contain timely information when issued, we believe that Congress should consider establishing a biennial deadline by which DHS must complete the country assessments and report to Congress.

5. We agree that continuous and ongoing evaluations of Visa Waiver Program countries are needed and recommended that DHS create real-time monitoring arrangements and provide additional resources to the Visa Waiver Program Oversight Unit to achieve this goal. However, as long as DHS is required to report biennially to Congress, DHS should ensure that future reviews are conducted in a timely fashion. Based on our review of the 2004 country assessment process, the assessments may not necessarily have contained the best information available at the time the assessments were finalized given the lengthy time it took for DHS to finalize its reviews.

6. We agree that it is in the U.S. government's best interest to engage with countries on global concerns identified during the course of the country assessment process. It is not our intention to inhibit this kind of consultation. Furthermore, we acknowledge that a consultative process may involve tradeoffs between timely reporting and complete information gathering and analysis. Our concern is that key stakeholders in headquarters and at overseas posts, as well as members of the in-country site visit teams, expressed concerns about their roles in the 2004 country assessment process, and stated that they had not received enough detail from DHS about the process and the findings. Thus, we recommended that DHS provide transparent protocols to all stakeholders that provide timelines for the entire assessment process, including the role of a site visit, an explanation of the clearance process, and deadlines for completion.
We believe it is important that DHS finalize its standard operating procedures, and share these procedures with relevant stakeholders at headquarters and overseas. As we noted in our report, due to the lack of outreach and clear communication about its mission, OIE is limited in its ability to monitor the day-to-day law enforcement and security concerns posed by the Visa Waiver Program, and the U.S. government is limited in its ability to influence visa waiver countries' progress in meeting requirements.

7. We strongly agree that classified and sensitive information should be protected. However, we also believe that cleared U.S. officials at overseas posts in Visa Waiver Program countries, including ambassadors and deputy chiefs of mission, have a need to know the extent to which law enforcement and security concerns were identified during the mandated biennial reviews, and should receive copies of the final country assessments. Without the appropriate information, such as that contained in the assessments, embassy officials cannot be effective agents for the U.S. government with regard to these issues. We believe that the establishment of a classified sharing system that allows U.S. government agencies to access the country assessments is a positive step.

Jess T. Ford, (202) 512-4128 or [email protected].

In addition to the contact named above, John Brummet, Assistant Director; Kathryn H. Bernet, Joseph C. Brown, Joseph Carney, Richard Hung, Jane S. Kim, Mary Moutsos, and Jena Sinkfield made key contributions to this report.
The Visa Waiver Program enables citizens of 27 countries to travel to the United States for tourism or business for 90 days or less without obtaining a visa. In fiscal year 2004, more than 15 million people entered the country under the program. After the September 11, 2001, terrorist attacks, the risks that aliens would exploit the program to enter the United States became more of a concern. In this report, we (1) describe the Visa Waiver Program's benefits and risks, (2) examine the U.S. government's process for assessing potential risks, and (3) assess actions taken to mitigate these risks. We met with U.S. embassy officials in six program countries, and reviewed relevant laws, procedures, and reports on participating countries. The Visa Waiver Program has many benefits as well as some inherent risks. It facilitates travel for millions of people and eases consular workload, but poses challenges to border inspectors, who, when screening visa waiver travelers, may face language barriers or lack time to conduct in-depth interviews. Furthermore, stolen passports from visa waiver countries are prized travel documents among terrorists, criminals, and immigration law violators, creating an additional risk. While the Department of Homeland Security (DHS) has intercepted many fraudulent documents at U.S. ports of entry, DHS officials acknowledged that an undetermined number of inadmissible aliens may have entered the United States using a stolen or lost passport from a visa waiver country. The U.S. government's process for assessing the risks of the Visa Waiver Program has weaknesses. In 2002, Congress mandated that, every 2 years, DHS review the effect that each country's continued participation in the program has on U.S. law enforcement and security interests, but did not set a reporting deadline. In 2004, DHS established a unit to oversee the program and conduct these reviews. We identified several problems with the 2004 review process, as key stakeholders were not consulted during portions of the process, preparation for the in-country site visits was not consistent, and the final reports were untimely. Furthermore, DHS cannot effectively achieve its mission to monitor and report on ongoing law enforcement and security concerns in visa waiver countries due to insufficient resources. DHS has taken some actions to mitigate the program's risks; however, the U.S. government has faced difficulties in further mitigating these risks. In particular, the department has not established time frames and operating procedures regarding timely stolen passport reporting, a program requirement since 2002. Furthermore, DHS has sought to require the reporting of lost and stolen passport data to the United States and the International Criminal Police Organization (Interpol), but it has not issued clear reporting guidelines to participating countries. While most visa waiver countries use and contribute to Interpol's database, four do not. DHS is not using Interpol's data to its full potential as a border screening tool because DHS does not automatically access the data at primary inspection.
The final regulations establish a new human capital system for DHS that is intended to assure its ability to attract, retain, and reward a workforce that is able to meet its critical mission. Further, the human capital system is to provide for greater flexibility and accountability in the way employees are to be paid, developed, evaluated, afforded due process, and represented by labor organizations while reflecting the principles of merit and fairness embodied in the statutory merit systems principles. As is predictable with any change management initiative, the DHS regulations have raised some concerns among employee groups, unions, and other stakeholders because these groups do not yet have all the details of how the system will be implemented and how it will affect them. We have reported that individuals inevitably worry during any change management initiative because of uncertainty over new policies and procedures. A key practice to address this worry is to involve employees and their representatives to obtain their ideas and gain their ownership of the initiative. Thus, a significant improvement over the proposed regulations is that employee representatives are now to be provided with an opportunity to remain involved. Specifically, they can discuss their views with DHS officials and/or submit written comments as implementing directives are developed, as outlined under the "continuing collaboration" provisions. This collaboration is consistent with DHS's statutory authority to establish a new human capital system, which requires such continuing collaboration. Under the regulations, nothing in the continuing collaboration process is to affect the right of the Secretary to determine the content of implementing directives and to make them effective at any time. In addition, the final regulations state that DHS is to establish procedures for evaluating the implementation of its human capital system. High-performing organizations continually review and revise their human capital management systems based on data-driven lessons learned and changing needs in the environment. Collecting and analyzing data is the fundamental building block for measuring the effectiveness of these systems in support of the mission and goals of the agency. We continue to believe that many of the basic principles underlying the DHS regulations are generally consistent with proven approaches to strategic human capital management. Today, I will provide our observations on the following elements of DHS's human capital system as outlined in the final regulations—pay and performance management, adverse actions and appeals, and labor-management relations. Last year, we testified that the DHS proposal reflects a growing understanding that the federal government needs to fundamentally rethink its current approach to pay and better link pay to individual and organizational performance. To this end, the DHS proposal takes another valuable step toward modern performance management. Among the key provisions is a performance-oriented and market-based pay system. We have observed that a competitive compensation system can help organizations attract and retain a quality workforce. To begin to develop such a system, organizations assess the skills and knowledge they need; compare compensation against other public, private, or nonprofit entities competing for the same talent in a given locality; and classify positions along levels of responsibility.
While one size does not fit all, organizations generally structure their competitive compensation systems to separate base salary—which all employees receive—from other special incentives, such as merit increases, performance awards, or bonuses, which are provided based on performance and contributions to organizational results. According to the final regulations, DHS is to establish occupational clusters and pay bands that replace the current General Schedule (GS) system now in place for much of the civil service. DHS may, after coordination with OPM, establish occupational clusters based on factors such as mission or function, nature of work, qualifications or competencies, career or pay progression patterns, relevant labor-market features, and other characteristics of those occupations or positions. DHS is to document in implementing directives the criteria and rationale for grouping occupations or positions into clusters as well as the definitions for each band’s range of difficulty and responsibility, qualifications, competencies, or other characteristics of the work. As we testified last year, pay banding and movement to broader occupational clusters can both facilitate DHS’s movement to a pay for performance system and help DHS to better define occupations, which can improve the hiring process. We have reported that the current GS system as defined in the Classification Act of 1949 is a key barrier to comprehensive human capital reform and that the creation of broader occupational job clusters and pay bands would aid other agencies as they seek to modernize their personnel systems. Today’s jobs in knowledge-based organizations require a much broader array of tasks that may cross over the narrow and rigid boundaries of job classifications of the GS system. Under the final regulations, DHS is to convert employees from the GS system to the new system without a reduction in their current pay. According to DHS, when employees are converted from the GS system to a pay band, their base pay is to be adjusted to include a percentage of their next within-grade increase, based on the time spent in their current step and the waiting period for the next step. DHS stated that most employees would receive a slight increase in salary upon conversion to a pay band. This approach is consistent with how several of OPM’s personnel demonstration projects converted employees from the GS system. The final DHS regulations include other elements of a modern compensation system. For example, the regulations provide that DHS may, after coordination with OPM, set and adjust the pay ranges for each pay band taking into account mission requirements, labor market conditions, availability of funds, pay adjustments received by other federal employees, and any other relevant factors. In addition, DHS may, after coordination with OPM, establish locality rate supplements for different occupational clusters or for different bands within the same cluster in the same locality pay area. According to DHS, these locality rates would be based on the cost of labor rather than cost of living factors. The regulations state that DHS would use recruitment or retention bonuses if it experiences recruitment or retention problems due to living costs in a particular geographic area. 
Especially when developing a new performance management system, high-performing organizations have found that actively involving employees and key stakeholders, such as unions or other employee associations, helps gain ownership of the system and improves employees’ confidence and belief in the fairness of the system. DHS recognized that the system must be designed and implemented in a transparent and credible manner that involves employees and employee representatives. A new and positive addition to the final regulations is a Homeland Security Compensation Committee that is to provide oversight and transparency to the compensation process. The committee—consisting of 14 members, including four officials of labor organizations—is to develop recommendations and options for the Secretary’s consideration on compensation and performance management matters, including the annual allocation of funds between market and performance pay adjustments. While the DHS regulations contain many elements of a performance-based and market-oriented pay system, there are several issues that we identified last year that DHS will need to continue to address as it moves forward with the implementation of the system. These issues include linking organizational goals to individual performance, using competencies to provide a fuller assessment of performance, making meaningful distinctions in employee performance, and continuing to incorporate adequate safeguards to ensure fairness and guard against abuse. Consistent with leading practice, the DHS performance management system is to align individual performance expectations with the mission, strategic goals, organizational program and policy objectives, annual performance plans, and other measures of performance. DHS’s performance management system can be a vital tool for aligning the organization with desired results and creating a “line of sight” showing how team, unit, and individual performance can contribute to overall organizational results. However, as we testified last year, agencies struggle to create this line of sight. DHS appropriately recognizes that given its vast diversity of work, managers and employees need flexibility in crafting specific performance expectations for their employees. These expectations may take the form of competencies an employee is expected to demonstrate on the job, among other things. However, as DHS develops its implementing directives, the experiences of leading organizations suggest that DHS should reconsider its position to merely allow, rather than require, the use of core competencies that employees must demonstrate as a central feature of its performance management system. Based on our review of others’ efforts and our own experience at GAO, core competencies can help reinforce employee behaviors and actions that support the department’s mission, goals, and values and can provide a consistent message to employees about how they are expected to achieve results. For example, an OPM personnel demonstration project—the Civilian Acquisition Workforce Personnel Demonstration Project—covers various organizational units within the Department of Defense and applies core competencies for all employees, such as teamwork/cooperation, customer relations, leadership/supervision, and communication. 
Similarly, as we testified last year, DHS could use competencies—such as achieving results, change management, cultural sensitivity, teamwork and collaboration, and information sharing—to reinforce employee behaviors and actions that support its mission, goals, and values and to set expectations for individuals’ roles in DHS’s transformation. By including such competencies throughout its performance management system, DHS could create a shared responsibility for organizational success and help assure accountability for change. High-performing organizations seek to create pay, incentive, and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. These organizations make meaningful distinctions between acceptable and outstanding performance of individuals and appropriately reward those who perform at the highest level. The final regulations state that DHS supervisors and managers are to be held accountable for making meaningful distinctions among employees based on performance, fostering and rewarding excellent performance, and addressing poor performance. While DHS states that, as a general matter, pass/fail ratings are incompatible with pay for performance, it is to permit use of pass/fail ratings for employees in the “Entry/Developmental” band or in other pay bands under extraordinary circumstances as determined by the Secretary. DHS is to require the use of at least three summary rating levels for other employee groups. We urge DHS to consider using at least four summary rating levels to allow for greater performance rating and pay differentiation. This approach is in the spirit of the new governmentwide performance-based pay system for the Senior Executive Service (SES), which requires at least four levels to provide a clear and direct link between SES performance and pay as well as to make meaningful distinctions based on relative performance. Cascading this approach to other levels of employees can help DHS recognize and reward employee contributions and achieve the highest levels of individual performance. As DHS develops its implementing directives, it also needs to continue to build safeguards into its performance management system. A concern that employees often express about any pay for performance system is supervisors’ ability to assess performance fairly. Using safeguards, such as having an independent body to conduct reasonableness reviews of performance management decisions, can help to allay these concerns and build a fair, credible, and transparent system. It should be noted that the final regulations no longer provide for a Performance Review Board (PRB) to review ratings in order to promote consistency, provide general oversight of the performance management system, and ensure it is administered in a fair, credible, and transparent manner. According to the final regulations, participating labor organizations expressed concern that the PRBs could delay pay decisions and give the appearance of unwarranted interference in the performance rating process. However, in the final regulations, DHS states that it continues to believe that an oversight mechanism is important to the credibility of the department’s pay for performance system and that the Compensation Committee, in place of PRBs, is to conduct an annual review of performance payout summary data. 
While much remains to be determined about how the Compensation Committee is to operate, we believe that the effective implementation of such a committee is important to assuring that predecisional internal safeguards exist to help achieve consistency and equity, and assure non-discrimination and non-politicization of the performance management process. We have also reported that agencies need to assure reasonable transparency and provide appropriate accountability mechanisms in connection with the results of the performance management process. For DHS, this can include publishing internally the overall results of performance management and individual pay decisions while protecting individual confidentiality and reporting periodically on internal assessments and employee survey results relating to the performance management system. Publishing this information can provide employees with the information they need to better understand the performance management system and to generally compare their individual performance with their peers. We found that several of OPM’s personnel demonstration projects publish information for employees on internal Web sites that include the overall results of performance appraisal and pay decisions, such as the average performance rating, the average pay increase, and the average award for the organization and for each individual unit. DHS’s final regulations are intended to simplify and streamline the employee adverse action process to provide greater flexibility for the department and to minimize delays, while also ensuring due process protections. It is too early to tell what impact, if any, these regulations would have on DHS’s operations and employees or other entities, such as the Merit Systems Protection Board (MSPB). Close monitoring of any unintended consequences, such as on MSPB and its ability to manage cases from DHS and other federal agencies, is warranted. In terms of adverse actions, the regulations modify the current federal system in that the DHS Secretary will have the authority to identify specific offenses for which removal is mandatory. In our previous testimony on the proposed regulations, we expressed some caution about this new authority and pointed out that the process for determining and communicating which types of offenses require mandatory removal should be explicit and transparent. We noted that such a process should include an employee notice and comment period before implementation and collaboration with relevant congressional stakeholders and employee representatives. The final DHS regulations explicitly provide for publishing a list of the mandatory removal offenses in the Federal Register and in DHS’s implementing directives and making these offenses known to employees annually. In last year’s testimony, we also suggested that DHS exercise caution when identifying specific removable offenses and the specific punishment. When developing and implementing the regulations, DHS might learn from the experience of the Internal Revenue Service’s (IRS) implementation of its mandatory removal provisions. We reported that IRS officials believed this provision had a negative impact on employee morale and effectiveness and had a “chilling effect” on IRS frontline enforcement employees who were afraid to take certain appropriate enforcement actions. Careful drafting of each removable offense is critical to ensure that the provision does not have unintended consequences. 
Under the DHS regulations, employees alleged to have committed these mandatory removal offenses are to have the right to a review by a newly created panel. DHS regulations provide for judicial review of the panel’s decisions. Members of this three-person panel are to be appointed by the Secretary for three-year terms. In last year’s testimony, we noted that the independence of the panel that is to hear appeals of mandatory removal actions deserved further consideration. The final regulations address the issue of independence by prescribing additional qualification requirements which emphasize integrity and impartiality and requiring the Secretary to consider any lists of candidates submitted by union representatives for panel positions other than the chair. Employee perception concerning the independence of this panel is critical to the mandatory removal process. Regarding the appeal of adverse actions other than mandatory removals, the DHS regulations generally preserve the employee’s basic right to appeal decisions to an independent body—MSPB—but with procedures different from those applicable to other federal employees. However, in a change from the proposed regulations, in taking actions against employees for performance or conduct issues, DHS is to meet a higher standard of evidence—a “preponderance of evidence” instead of “substantial evidence.” For performance issues, while this higher standard of evidence means that DHS would face a greater burden of proof than most agencies to pursue these actions, DHS managers are not required to provide employees performance improvement periods, as is the case for other federal employees. For conduct issues, DHS would face the same burden of proof as most agencies. The regulations shorten the notification period before an adverse action can become effective and provide an accelerated MSPB adjudication process. In addition, MSPB may no longer modify a penalty for a conduct-based adverse action that is imposed on an employee by DHS unless such penalty was “wholly without justification.” The DHS regulations also stipulate that MSPB can no longer require that parties enter into settlement discussions, although either party may propose doing so. DHS expressed concerns that settlement should be a completely voluntary decision made by parties on their own. However, settling cases has been an important tool in the past at MSPB, and promotion of settlement at this stage should be encouraged. The final regulations continue to support a commitment to the use of Alternative Dispute Resolution (ADR), which we previously noted was a positive development. To resolve disputes in a more efficient, timely, and less adversarial manner, federal agencies have been expanding their human capital programs to include ADR approaches, including the use of ombudsmen as an informal alternative to addressing conflicts. ADR is a tool for supervisors and employees alike to facilitate communication and resolve conflicts. As we have reported, ADR helps lessen the time and the cost burdens associated with the federal redress system and has the advantage of employing techniques that focus on understanding the disputants’ underlying interests over techniques that focus on the validity of their positions. For these and other reasons, we believe that it is important to continue to promote ADR throughout the process. Under the DHS regulations, the scope and method of labor union involvement in human capital issues are to change. 
DHS management is no longer required to engage in collective bargaining and negotiations on as many human capital policies and processes as in the past. For example, certain actions that DHS has determined are critical to the mission and operations of the department, such as deploying staff and introducing new technologies, are now considered management rights and are not subject to collective bargaining and negotiation. DHS, however, is to confer with employees and unions in developing the procedures it will use to take these actions. Other human capital policies and processes that DHS characterizes as “non-operational,” such as selecting, promoting, and disciplining employees, are also not subject to collective bargaining, but DHS must negotiate the procedures it will use to take these actions. Finally, certain other policies and processes, such as how DHS will reimburse employees for any “significant and substantial” adverse impacts resulting from an action, such as a rapid change in deployment, must be negotiated. In addition, DHS is to establish its own internal labor relations board—the Homeland Security Labor Relations Board—to deal with most agencywide labor relations policies and disputes rather than submit them to the Federal Labor Relations Authority. DHS stated that the unique nature of its mission—homeland protection—demands that management have the flexibility to make quick resource decisions without having to negotiate them, and that its own internal board would better understand its mission and, therefore, be better able to address disputes. Labor organizations are to nominate names of individuals to serve on the Board and the regulations established some general qualifications for the board members. However, the Secretary is to retain the authority to both appoint and remove any member. Similar to the mandatory removal panel, employee perception concerning the independence of this board is critical to the resolution of the issues raised over labor relations policies and disputes. These changes have not been without controversy, and four federal employee unions have filed suit alleging that DHS has exceeded its authority under the statute establishing the DHS human capital system. The suit discusses bargaining and negotiability practices, adverse action procedures, and the roles of the Federal Labor Relations Authority and MSPB under the DHS regulations. Our previous work on individual agencies’ human capital systems has not directly addressed the scope of specific issues that should or should not be subject to collective bargaining and negotiations. At a forum we co-hosted exploring the concept of a governmentwide framework for human capital reform, which I will discuss later, participants generally agreed that the ability to organize, bargain collectively, and participate in labor organizations is an important principle to be retained in any framework for reform. It was also suggested at the forum that unions must be both willing and able to actively collaborate and coordinate with management if unions are to be effective representatives of their members and real participants in any human capital reform. With the issuance of the final regulations, DHS faces multiple challenges to the successful implementation of its new human capital system. We identified multiple implementation challenges at last year’s hearing. 
Subsequently, we reported that DHS’s actions to date in designing its human capital system and its stated plans for future work on its system are helping to position the department for successful implementation. Nevertheless, DHS was in the early stages of developing the infrastructure needed for implementing its new system. For more information on these challenges, as well as on related human capital topics, see the “Highlights” pages attached to this statement. We believe that these challenges are still critical to the success of the new human capital system. In many cases, DHS has acknowledged these challenges and made a commitment to address them in regulations. Today I would like to focus on two additional implementation challenges—ensuring sustained and committed leadership and establishing an overall consultation and communication strategy—and then reiterate challenges we previously identified, including providing adequate resources for implementing the new system and involving employees and other stakeholders in implementing the system. As DHS and other agencies across the federal government embark on large-scale organizational change initiatives, such as the new human capital system DHS is implementing, there is a compelling need to elevate, integrate, and institutionalize responsibility for such key functional management initiatives to help ensure their success. A Chief Operating Officer/Chief Management Officer (COO/CMO) or similar position can effectively provide the continuing, focused attention essential to successfully completing these multiyear transformations. Especially for such an endeavor as critical as DHS’s new human capital system, such a position would serve to elevate attention that is essential to overcome an organization’s natural resistance to change, marshal the resources needed to implement change, and build and maintain the organizationwide commitment to new ways of doing business; integrate this new system with various management responsibilities so they are no longer “stovepiped” and fit it into other organizational transformation efforts in a comprehensive, ongoing, and integrated manner; and institutionalize accountability for the system so that the implementation of this critical human capital initiative can be sustained. We have work underway at the request of Congress to assess DHS’s management integration efforts, including the role of existing senior leadership positions as compared to a COO/CMO position, and expect to issue a report on this work in the coming weeks. Another significant challenge for DHS is to assure an effective and ongoing two-way consultation and communication strategy that creates shared expectations about, and reports related progress on, the implementation of the new system. We have reported this is a key practice of a change management initiative. DHS’s final regulations recognize that all parties will need to make a significant investment in communication in order to achieve successful implementation of its new human capital system. According to DHS, its communication strategy will include global e-mails, satellite broadcasts, Web pages, and an internal DHS weekly newsletter. DHS stated that its leaders will be provided tool kits and other aids to facilitate discussions and interactions between management and employees on program changes. Given the attention over the regulations, a critical implementation step is for DHS to carry out an effective communication strategy. 
Communication is not about just “pushing the message out.” Rather, it should facilitate a two-way honest exchange with, and allow for feedback from, employees, customers, and key stakeholders. This communication is central to forming the effective internal and external partnerships that are vital to the success of any organization. Creating opportunities for employees to communicate concerns and experiences allows employees to feel that their experiences are acknowledged by and important to management during the implementation of any change management initiative. Once this feedback is received, it is important to consider and use it to make any appropriate changes to the implementation of the system. In addition, closing the loop by providing information on why key recommendations were not adopted is important. OPM reports that the increased costs of implementing alternative personnel systems should be acknowledged and budgeted for up front. DHS estimates that the overall costs associated with implementing the new DHS system—including the development and implementation of a new pay and performance system, the conversion of current employees to that system, and the creation of its new labor relations board—will be approximately $130 million through fiscal year 2007 (i.e., over a 4-year period) and that less than $100 million will be spent in any 12-month period. We found that, based on the data provided by selected OPM personnel demonstration projects, direct costs associated with salaries and training were among the major cost drivers of implementing their pay for performance systems. Certain costs, such as those for initial training on the new system, are one-time in nature and should not be built into the base of DHS’s budget. Other costs, such as employees’ salaries, are recurring and thus would be built into the base of DHS’s budget for future years. We found that approaches the demonstration projects used to manage salary costs were to consider fiscal conditions and the labor market and to provide a mix of one-time awards and permanent pay increases. For example, rewarding an employee’s performance with an award instead of an equivalent increase to base pay can reduce salary costs in the long run because the agency only has to pay the amount of the award one time, rather than annually. However, one approach that the demonstration projects used to manage costs that is not included in the final regulations is the use of “control points.” We found that the demonstration projects used such a mechanism—sometimes called speed bumps—to manage progression through the bands to help ensure that employees’ performance coincides with their salaries and to prevent all employees from eventually migrating to the top of the band, which would increase costs. According to the DHS regulations, its performance management system is designed to incorporate adequate training and retraining for supervisors, managers, and employees in the implementation and operation of the system. Each of OPM’s personnel demonstration projects trained employees on the performance management system prior to implementation to make employees aware of the new approach, as well as periodically after implementation to refresh employee familiarity with the system. 
The training was designed to help employees understand their applicable competencies and performance standards; develop performance plans; write self-appraisals; become familiar with how performance is evaluated and how pay increases and awards decisions are made; and know the roles and responsibilities of managers, supervisors, and employees in the appraisal and payout processes. We reported in September 2003 that DHS’s and OPM’s effort to design a new human capital system was collaborative and facilitated participation of employees from all levels of the department. We recommended that the Secretary of DHS build on the progress that had been made and ensure that the communication strategy used to support the human capital system maximize opportunities for employee and key stakeholder involvement through the completion of design and implementation of the new system, with special emphasis on seeking the feedback and buy-in of frontline employees. In implementing this system, DHS should continue to recognize the importance of employee and key stakeholder involvement. Leading organizations involve employee unions, as well as involve employees directly, and consider their input in formulating proposals and before finalizing any related decisions. To this end, DHS’s revised regulations have attempted to recognize the importance of employee involvement in implementing the new personnel system. As we discussed earlier, the final DHS regulations provide for continuing collaboration in further development of the implementing directives and participation on the Compensation Committee. The regulations also provide that DHS is to involve employees in evaluations of the human capital system. Specifically, DHS is to provide designated employee representatives with the opportunity to be briefed and a specified timeframe to provide comments on the design and results of program evaluation. Further, employee representatives are to be involved in the identification of the scope, objectives, and methodology to be used in the program evaluation and in the review of draft findings and recommendations. DHS has recently joined some other federal departments and agencies, such as the Department of Defense, GAO, National Aeronautics and Space Administration, and the Federal Aviation Administration, in receiving authorities intended to help them manage their human capital strategically to achieve results. To help advance the discussion concerning how governmentwide human capital reform should proceed, GAO and the National Commission on the Public Service Implementation Initiative hosted a forum in April 2004 on whether there should be a governmentwide framework for human capital reform and, if so, what this framework should include. While there was widespread recognition among the forum participants that a one-size-fits-all approach to human capital management is not appropriate for the challenges and demands government faces, there was equally broad agreement that there should be a governmentwide framework to guide human capital reform. Further, a governmentwide framework should balance the need for consistency across the federal government with the desire for flexibility so that individual agencies can tailor human capital systems to best meet their needs. Striking this balance is not easy to achieve, but is necessary to maintain a governmentwide system that is responsive enough to adapt to agencies’ diverse missions, cultures, and workforces. 
While there were divergent views among the forum participants, there was general agreement on a set of principles, criteria, and processes that would serve as a starting point for further discussion in developing a governmentwide framework for advancing human capital reform, as shown in figure 1. As the momentum accelerates for human capital reform, GAO is continuing to work with others to address issues of mutual interest and concern. For example, to follow up on the April forum, the National Academy of Public Administration and the National Commission on the Public Service Implementation Initiative convened a group of human capital stakeholders to continue the discussion of a governmentwide framework.
Summary Observations
The final regulations that DHS has issued represent a positive step towards a more strategic human capital management approach for both DHS and the overall government, a step we have called for in our recent High-Risk Series. Consistent with our observations last year, DHS’s regulations make progress towards a modern classification and compensation system. DHS’s overall efforts in designing and implementing its human capital system can be particularly instructive for future human capital reform. Nevertheless, regarding the implementation of the DHS system, how it is done, when it is done, and the basis on which it is done can make all the difference in whether it will be successful. That is why it is important to recognize that DHS still has to fill in many of the details on how it will implement these reforms. These details do matter and they need to be disclosed and analyzed in order to fully assess DHS’s proposed reforms. We have made a number of suggestions for improvements the agency should consider in this process. It is equally important for the agency to ensure it has the necessary infrastructure in place to implement the system, not only an effective performance management system, but also the capabilities to effectively use the new human capital authorities and a strategic human capital planning process. This infrastructure should be in place before any new flexibilities are operationalized. DHS appears to be committed to continuing to involve employees, including unions, throughout the implementation process, another critical ingredient for success. Specifically, under DHS’s final regulations, employee representatives or union officials are to have opportunities to participate in developing the implementing directives, as outlined under the “continuing collaboration” provisions; hold four membership seats on the Homeland Security Compensation Committee; and help in evaluations of the human capital system. A continued commitment to a meaningful and ongoing two-way consultation and communication strategy that allows for ongoing feedback from employees, customers, and key stakeholders is central to forming the effective internal and external partnerships that are vital to the success of DHS’s human capital system. It is critically important that these consultation and communication processes be meaningful in order to be both credible and effective. Finally, to help ensure the quality of that involvement, sustained leadership in a position such as a COO/CMO could help to elevate, integrate, and institutionalize responsibility for the success of DHS’s human capital system and other key business transformation initiatives. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. 
I would be pleased to respond to any questions that you may have. For further information, please contact Eileen Larence, Acting Director, Strategic Issues, at (202) 512-6806 or [email protected]. Major contributors to this testimony include Michelle Bracy, K. Scott Derrick, Karin Fangman, Janice Latimer, Jeffrey McDermott, Naved Qureshi, Lisa Shames, and Michael Volpe. At the center of any agency transformation, such as the one envisioned for the Department of Homeland Security (DHS), are the people who will make it happen. Thus, strategic human capital management at DHS can help it marshal, manage, and maintain the people and skills needed to meet its critical mission. Congress provided DHS with significant flexibility to design a modern human capital management system. DHS and the Office of Personnel Management (OPM) have now jointly released the final regulations on DHS’s new human capital system. GAO believes that the regulations contain many of the basic principles that are consistent with proven approaches to strategic human capital management. For example, many elements for a modern compensation system—such as occupational clusters, pay bands, and pay ranges that take into account factors such as labor market conditions—are to be incorporated into DHS’s new system. However, these final regulations are intended to provide an outline and not a detailed, comprehensive presentation of how the new system will be implemented. Thus, DHS has considerable work ahead to define the details of the implementation of its system, and understanding these details is important in assessing the overall system. Last year, with the release of the proposed regulations, GAO observed that many of the basic principles underlying the regulations were consistent with proven approaches to strategic human capital management and deserved serious consideration. However, some parts of the human capital system raised questions for DHS, OPM, and Congress to consider in the areas of pay and performance management, adverse actions and appeals, and labor management relations. GAO also identified multiple implementation challenges for DHS once the final regulations for the new system were issued. The implementation challenges we identified last year are still critical to the success of the new system. Also, DHS appears to be committed to continuing to involve employees, including unions, throughout the implementation process. Specifically, according to the regulations, employee representatives or union officials are to have opportunities to participate in developing the implementing directives, hold four membership seats on the Homeland Security Compensation Committee, and help in the design of, and review the results of, evaluations of the new system. Further, GAO believes that to help ensure the quality of that involvement, DHS will need to take the following steps. Ensure sustained and committed leadership: A Chief Operating Officer/Chief Management Officer or similar position at DHS would serve to elevate, integrate, and institutionalize responsibility for this critical endeavor and help ensure its success by providing the continuing, focused attention needed to successfully complete the multiyear conversion to the new human capital system. Establish an overall communication strategy: According to DHS, its planned communication strategy for its new human capital system will include global e-mails, satellite broadcasts, Web pages, and an internal DHS weekly newsletter. 
A key implementation step for DHS is to assure an effective and on-going two-way communication effort that creates shared expectations among managers, employees, customers, and stakeholders. This testimony provides preliminary observations on selected provisions of the final regulations. www.gao.gov/cgi-bin/getrpt?GAO-05-320T. To view the full product, including the scope and methodology, click on the link above. For more information, contact Eileen Larence at (202) 512-6806 or [email protected]. While GAO strongly supports human capital reform in the federal government, how it is done, when it is done, and the basis on which it is done can make all the difference in whether such efforts are successful. GAO’s implementation of its own human capital authorities, such as pay bands and pay for performance, could help inform other organizations as they design systems to address their human capital needs. The final regulations for DHS’s new system are especially critical because of the potential implications for related governmentwide reforms. The creation of the Department of Homeland Security (DHS) almost one year ago represents an historic moment for the federal government to fundamentally transform how the nation will protect itself from terrorism. DHS is continuing to transform and integrate a disparate group of agencies with multiple missions, values, and cultures into a strong and effective cabinet department. Together with this unique opportunity, however, also comes significant risk to the nation that could occur if this transformation is not implemented successfully. In fact, GAO designated this implementation and transformation as high risk in January 2003. The proposed human capital system is designed to be aligned with the department’s mission requirements and is intended to protect the civil service rights of DHS employees. Many of the basic principles underlying the DHS regulations are consistent with proven approaches to strategic human capital management, including several approaches pioneered by GAO, and deserve serious consideration. However, some parts of the system raise questions that DHS, OPM, and Congress should consider. Pay and performance management: The proposal takes another valuable step towards results-oriented pay reform and modern performance management. For effective performance management, DHS should use validated core competencies as a key part of evaluating individual contributions to departmental results and transformation efforts. Adverse actions and appeals: The proposal would retain an avenue for employees to appeal adverse actions to an independent third party. However, the process to identify mandatory removal offenses must be collaborative and transparent. DHS needs to be cautious about defining specific actions requiring employee removal and learn from the Internal Revenue Service’s implementation of its mandatory removal provisions. Labor relations: The regulations recognize employees’ right to organize and bargain collectively, but reduce areas subject to bargaining. Continuing to involve employees in a meaningful manner is critical to the successful operations of the department. Congress provided DHS with significant flexibility to design a modern human capital management system. GAO reported in September 2003 that the design effort to develop the system was collaborative and consistent with positive elements of transformation. 
Last Friday, the Secretary of DHS and the Director of the Office of Personnel Management (OPM) released for public comment draft regulations for DHS’s new human capital system. This testimony provides preliminary observations on selected major provisions of the proposed system. The subcommittees are also releasing Human Capital: Implementing Pay for Performance at Selected Personnel Demonstration Projects (GAO-04-83) at today’s hearing. Once DHS issues final regulations for the human capital system, it will be faced with multiple implementation challenges: DHS plans to implement the system using a phased approach, however, nearly half of DHS civilian employees are not covered by these regulations, including more than 50,000 Transportation Security Administration screeners. To help build a unified culture, DHS should consider moving all of its employees under a single performance management system framework. DHS noted that it estimates that about $110 million will be needed to implement the new system in its first year. While adequate resources for program implementation are critical to program success, DHS is requesting a substantial amount of funding that warrants close scrutiny by Congress. The proposed regulations call for comprehensive, ongoing evaluations. Continued evaluation and adjustments will help to ensure an effective and credible human capital system. www.gao.gov/cgi-bin/getrpt?GAO-04-479T. be used as a tool for identifying core competencies for staff for attracting, developing, evaluating, and rewarding contributions to mission accomplishment. To view the full testimony statement, click on the link above. For more information, contact J. Christopher Mihm at (202) 512-6806 or [email protected]. The analysis of DHS’s effort to develop a strategic human capital management system can be instructive as other agencies request and implement new strategic human capital management authorities. DHS was provided with significant flexibility to design a modern human capital management system. Its proposed system has both precedent-setting implications for the executive branch and far- reaching implications on how the department is managed. GAO reported in September 2003 that the effort to design the system was collaborative and consistent with positive elements of transformation. In February, March, and April 2004 we provided preliminary observations on the proposed human capital regulations. To date, DHS’s actions in designing its human capital management system and its stated plans for future work on the system are helping to position the department for successful implementation. Nonetheless, the department is in the early stages of developing the infrastructure needed for implementing its new human capital management system. DHS has begun strategic human capital planning efforts at the headquarters level since the release of the department’s overall strategic plan and the publication of proposed regulations for its new human capital management system. Strategic human capital planning efforts can enable DHS to remain aware of and be prepared for current and future needs as an organization. However, this will be more difficult because DHS has not yet been systematic or consistent in gathering relevant data on the successes or shortcomings of legacy component human capital approaches or current and future workforce challenges. 
Efforts are now under way to collect detailed human capital information and design a centralized information system so that such data can be gathered and reported at the departmentwide level. Congressional requesters asked GAO to describe the infrastructure necessary for strategic human capital management and to assess the degree to which DHS has that infrastructure in place, which includes an analysis of the progress DHS has made in implementing the recommendations from our September 2003 report. DHS and Office of Personnel Management leaders have consistently underscored their personal commitment to the design process. Continued leadership is necessary to marshal the capabilities required for the successful implementation of the department’s new human capital management system. Sustained and committed leadership is required on multiple levels: securing appropriate resources for the design, implementation, and evaluation of the human capital management system; communicating with employees and their representatives about the new system and providing opportunities for feedback; training employees on the details of the new system; and continuing opportunities for employees and their representatives to participate in the design and implementation of the system. DHS generally agreed with the findings of our report and provided more current information that we incorporated. However, DHS was concerned about our use of results from a governmentwide survey gathered prior to the formation of the department. We use this data because it is the most current information available on the perceptions of employees currently in DHS and helps to illustrate the challenges facing DHS. In its proposed regulations, DHS outlines its intention to implement key safeguards. For example, the DHS performance management system must comply with the merit system principles and avoid prohibited personnel practices; provide a means for employee involvement in the design and implementation of the system; and overall, be fair, credible, and transparent. The department also plans to align individual performance management with organizational goals and provide for reasonableness reviews of performance management decisions through its Performance Review Boards. www.gao.gov/cgi-bin/getrpt?GAO-04-790. To view the full product, including the scope and methodology, click on the link above. For more information, contact J. Christopher Mihm at (202) 512-6806 or [email protected]. The federal government is in a period of profound transition and faces an array of challenges and opportunities to enhance performance, ensure accountability, and position the nation for the future. High- performing organizations have found that to successfully transform themselves, they must often fundamentally change their cultures so that they are more results-oriented, customer-focused, and collaborative in nature. To foster such cultures, these organizations recognize that an effective performance management system can be a strategic tool to drive internal change and achieve desired results. Public sector organizations both in the United States and abroad have implemented a selected, generally consistent set of key practices for effective performance management that collectively create a clear linkage— “line of sight”—between individual performance and organizational success. These key practices include the following. 1. Align individual performance expectations with organizational goals. 
An explicit alignment helps individuals see the connection between their daily activities and organizational goals. 2. Connect performance expectations to crosscutting goals. Placing an emphasis on collaboration, interaction, and teamwork across organizational boundaries helps strengthen accountability for results. 3. Provide and routinely use performance information to track organizational priorities. Individuals use performance information to manage during the year, identify performance gaps, and pinpoint improvement opportunities. 4. Require follow-up actions to address organizational priorities. By requiring and tracking follow-up actions on performance gaps, organizations underscore the importance of holding individuals accountable for making progress on their priorities. 5. Use competencies to provide a fuller assessment of performance. Competencies define the skills and supporting behaviors that individuals need to effectively contribute to organizational results. 6. Link pay to individual and organizational performance. Pay, incentive, and reward systems that link employee knowledge, skills, and contributions to organizational results are based on valid, reliable, and transparent performance management systems with adequate safeguards. 7. Make meaningful distinctions in performance. Effective performance management systems strive to provide candid and constructive feedback and the necessary objective information and documentation to reward top performers and deal with poor performers. 8. Involve employees and stakeholders to gain ownership of performance management systems. Early and direct involvement helps increase employees’ and stakeholders’ understanding and ownership of the system and belief in its fairness. 9. Maintain continuity during transitions. Because cultural transformations take time, performance management systems reinforce accountability for change management and other organizational goals. Based on previously issued reports on public sector organizations’ approaches to reinforce individual accountability for results, GAO identified these key practices that federal agencies can consider as they develop modern, effective, and credible performance management systems. www.gao.gov/cgi-bin/getrpt?GAO-03-488. To view the full report, including the scope and methodology, click on the link above. For more information, contact J. Christopher Mihm at (202) 512-6806 or [email protected]. There is widespread agreement that the federal government faces a range of challenges in the 21st century that it must confront to enhance performance, ensure accountability, and position the nation for the future. Federal agencies will need the most effective human capital systems to address these challenges and succeed in their transformation efforts during a period of likely sustained budget constraints. Forum participants discussed (1) Should there be a governmentwide framework for human capital reform? and (2) If yes, what should a governmentwide framework include? More progress in addressing human capital challenges was made in the last 3 years than in the last 20, and significant changes in how the federal workforce is managed are underway. There was widespread recognition that a “one size fits all” approach to human capital management is not appropriate for the challenges and demands government faces. 
However, there was equally broad agreement that there should be a governmentwide framework to guide human capital reform built on a set of beliefs that entail fundamental principles and boundaries that include criteria and processes that establish the checks and limitations when agencies seek and implement their authorities. While there were divergent views among the participants, there was general agreement that the following served as a starting point for further discussion in developing a governmentwide framework to advance needed human capital reform. On April 14, 2004, GAO and the National Commission on the Public Service Implementation Initiative hosted a forum with selected executive branch officials, key stakeholders, and other experts to help advance the discussion concerning how governmentwide human capital reform should proceed. To view the full product, including the scope and methodology, click on the link above. For more information, contact J. Christopher Mihm at (202) 512-6806 or [email protected].
People are critical to any agency transformation, such as the one envisioned for the Department of Homeland Security (DHS). They define an agency's culture, develop its knowledge base, and are its most important asset. Thus, strategic human capital management at DHS can help it marshal, manage, and maintain the people and skills needed to meet its critical mission. Congress provided DHS with significant flexibility to design a modern human capital management system. DHS and the Office of Personnel Management (OPM) have now jointly released the final regulations on DHS's new human capital system. Last year, with the release of the proposed regulations, GAO observed that many of the basic principles underlying the regulations were consistent with proven approaches to strategic human capital management and deserved serious consideration. However, some parts of the human capital system raised questions for DHS, OPM, and Congress to consider in the areas of pay and performance management, adverse actions and appeals, and labor management relations. GAO also identified multiple implementation challenges for DHS once the final regulations for the new system were issued. This testimony provides overall observations on DHS's intended human capital system and selected provisions of the final regulations. GAO believes that DHS's regulations contain many of the basic principles that are consistent with proven approaches to strategic human capital management. On the positive side, the final regulations provide for (1) a flexible, contemporary, performance-oriented, and market-based compensation system, including occupational clusters and pay bands; (2) continued involvement of employees and union officials throughout the implementation process, such as by participating in the development of the implementing directives and holding membership on the Homeland Security Compensation Committee; and (3) evaluations of the implementation of DHS's system. On the other hand, GAO has three areas of concern that deserve attention from DHS senior leadership. First, DHS has considerable work ahead to define the details of the implementation of its system, and getting those details right will be critical to the success of the overall system. Second, the performance management system merely allows, rather than requires, the use of core competencies that can help to provide reasonable consistency and clearly communicate to employees what is expected of them. Third, the pass/fail ratings or three summary rating levels for certain employee groups do not provide the meaningful differentiation in performance needed for transparency to employees and for making the most informed pay decisions. Going forward, GAO believes that, first, especially for this multiyear transformation, the Chief Operating Officer/Chief Management Officer concept could help to elevate, integrate, and institutionalize responsibility for the success of DHS's new human capital system and related implementation and transformation efforts. Second, a key implementation step for DHS is to assure an effective and on-going two-way communication effort that creates shared expectations among managers, employees, customers, and stakeholders. Last, DHS must ensure that it has the institutional infrastructure in place to make effective use of its new authorities. 
At a minimum, this infrastructure includes a human capital planning process that integrates human capital policies, strategies, and programs with its program goals, mission, and desired outcomes; the capabilities to effectively develop and implement a new human capital system; and importantly, the existence of a modern, effective, and credible performance management system that includes adequate safeguards to help assure consistency and prevent abuse. While GAO strongly supports federal human capital reform, how it is done, when it is done, and the basis on which it is done can be the difference between success and failure. Thus, the DHS regulations are especially critical because of their potential implications for related governmentwide reform.
Every 4 years, DOD is required to conduct and report on a comprehensive assessment—the Quadrennial Roles and Missions Review—of the roles and missions of the armed services and the core competencies and capabilities of DOD to perform and support such roles and missions. Specifically, the Chairman of the Joint Chiefs of Staff is to conduct an independent military assessment of the roles and missions of the armed forces, assignment of functions among the armed services, and any recommendations regarding issues that need to be addressed. The Secretary of Defense is then to identify the core mission areas of the armed services; the core competencies and capabilities associated with these mission areas; the DOD component responsible for providing the identified core competency or capability; any gaps in the ability of the component to provide the competency or capability; any unnecessary duplication of competencies or capabilities between DOD components; and a plan for addressing any gaps or unnecessary duplication. The Secretary is then to submit a report on this Quadrennial Roles and Missions Review following the review and not later than the submission of the President’s budget for the next fiscal year; however, the statutory reporting requirement does not explicitly require that all required elements of the assessment be reported. The Quadrennial Roles and Missions Review that resulted in the July 2012 submission occurred amid a series of strategy and policy reviews that DOD has undertaken over the past 5 years. Some of these reviews resulted in specific strategy documents, such as the National Security Strategy, National Defense Strategy, National Military Strategy, and National Security Space Strategy. DOD is also required to conduct two reviews on a regular basis that relate to the Quadrennial Roles and Missions Review: the Quadrennial Defense Review and the Biennial Review of DOD Agencies and Field Activities. The timing requirements for the Quadrennial Roles and Missions Review and the Quadrennial Defense Review result in each Quadrennial Roles and Missions Review occurring 2 years before and 2 years after a Quadrennial Defense Review. In December 2010, DOD also reissued its internal DOD Directive 5100.01, which establishes the functions of DOD and its major components, and, in September 2011, released an update of the Unified Command Plan, which allocates responsibilities among the combatant commands. In addition to these recurring strategy reviews, comprehensive assessments, and updates to DOD guidance, DOD has recently completed two other reviews: the Defense Strategic Guidance, which identified the strategic interests of the United States, and the Strategic Choices Management Review, initiated by the Secretary of Defense in 2013 to inform DOD’s planning for declining future budgets. The Defense Strategic Guidance, released in January 2012, was directed by the President to identify the strategic interests of the United States. The document states that it was an assessment of the defense strategy prompted by the changing geopolitical environment and fiscal pressures. The Defense Strategic Guidance was developed by senior officials from DOD—including the Office of the Secretary of Defense, the Joint Staff, the armed services, and the combatant commands—and the White House. The document outlines security challenges the United States faces and is intended to guide the development of the Joint Force through 2020 and during a period of anticipated fiscal constraints. 
The Defense Strategic Guidance identified 10 primary missions of the armed forces, as well as several principles to guide the force and program development necessary to achieve these missions. For more information about the Defense Strategic Guidance and other selected strategy and planning documents, see appendix I. In July 2012, DOD submitted the Quadrennial Roles and Missions Review report, together with the Defense Strategic Guidance, to Congress to meet the statutory reporting requirement; however, DOD's submission did not provide sufficiently detailed information about most of the statutorily required elements of the assessment. Although the statute does not require DOD to report on all elements of the roles and missions assessment, a key principle for information quality indicates that information presented to Congress should be clear and sufficiently detailed. Specifically, we found that DOD provided the missions of the armed services and some information about core capabilities, but did not, for any of the 10 missions, clearly identify the components within the department responsible for providing the core competencies and capabilities, or identify any plans to address any capability gaps or unnecessary duplication. The Quadrennial Roles and Missions Review report identifies missions of the armed services and provides information about capabilities and previously identified areas of duplication. The report restates the 10 missions of the armed forces identified in the Defense Strategic Guidance, and identifies some protected capabilities and investments needed to carry out each of the missions. For example, the report restates DOD's mission to project power despite anti-access / area denial challenges. It then lists five key enhancements and protected capabilities associated with this mission: enhance electronic warfare, develop a new penetrating bomber, protect the F-35 Joint Strike Fighter program, sustain undersea dominance and enhance capabilities, and develop and enhance preferred munitions capabilities. Additionally, the report mentioned some previously identified areas of duplication and actions that were subsequently taken, such as eliminating redundancy in intelligence organizations, or proceeding with previous plans to eliminate organizations that performed duplicative functions or outlived their original purpose: the report notes the consolidation of specialized intelligence offices across DOD into two Defense Intelligence Agency task forces focused on counterterrorism and terrorism finance. Finally, the report also provides specific information about Information Operations as well as detention and interrogation, both of which were required to be included in this Quadrennial Roles and Missions Review. Prior to the submission to Congress, senior DOD leadership—including the Deputy Assistant Secretary of Defense for Force Development, the DOD General Counsel, Assistant Secretary of Defense for Legislative Affairs, Under Secretary of Defense (Comptroller), Director of Cost Assessment and Program Evaluation, Director of the Joint Staff, Under Secretary of the Navy, Secretary of the Army, and Secretary of the Air Force—internally concurred that the submission met the statutory requirement according to a tracking sheet used by the Office of the Under Secretary of Defense for Policy.
While the submission identifies core missions for the armed services and provides some information about capabilities and competencies needed for those missions, it does not provide sufficiently detailed information about other statutorily required elements of the roles and missions assessment. In our review of the report, we found that DOD did not, for any of the 10 missions, clearly identify the components within the department responsible for providing the core competencies and capabilities, or identify any plans to address any capability gaps or unnecessary duplication. For example: The submission does not provide clear and sufficiently detailed information on which component or components are responsible for enhancing electronic warfare capabilities, which is identified by DOD as one of the key capabilities needed to project power despite anti-access / area denial challenges. In our prior work, we have found that DOD needed to strengthen its management and oversight of electronic warfare programs and activities, reduce overlap, and improve its return on its multibillion-dollar acquisition investments. DOD has acknowledged that it faces ongoing challenges in determining whether the current level of investment is optimally matched with the existing capability gaps. However, the submission does not provide sufficiently detailed information on DOD's approach to assign responsibilities, close potential gaps, or eliminate unnecessary duplication. The submission also does not provide clear and sufficiently detailed information on which components are responsible for enhancing airborne intelligence, surveillance, and reconnaissance capabilities, which are required for counterterrorism and irregular warfare missions. In our prior work, we have found that ineffective acquisition practices and collaboration efforts in the DOD unmanned aircraft systems portfolio create overlap and the potential for duplication among a number of current programs and systems. Similarly, we have noted that opportunities exist to avoid unnecessary redundancies and maximize the efficient use of intelligence, surveillance, and reconnaissance capabilities. However, DOD's submission does not clarify responsibilities among the Air Force, Army, or Navy for developing these capabilities. This is the second time that DOD did not provide sufficiently detailed information to Congress following its roles and missions assessment. In the first Quadrennial Roles and Missions Review Report submitted to Congress in 2009, DOD identified the core missions of the department and identified the DOD Joint Capabilities Areas as the core competencies for the department. However, the report did not provide details for all elements required of the assessment. For example, the report did not provide core competencies and capabilities—including identifying responsible organizations—for each of the missions; instead the report provided some capability information for only specific focus areas within some of these missions. Despite the limited information contained in the 2009 Quadrennial Roles and Missions Review Report, the department used that first review to inform changes later made in DOD Directive 5100.01, which establishes functions of the department and its major components. However, as a result of not providing clear, sufficiently detailed, and relevant information in the most recent submission, DOD did not provide Congress comprehensive information about roles, responsibilities, and needed capabilities and competencies that Congress was seeking.
DOD did not conduct a comprehensive process for the roles and missions assessment. Instead, DOD limited its approach to leveraging the results of another review, conducted in 2011, that resulted in the January 2012 release of the Defense Strategic Guidance. However, this earlier review was not intended to assess all elements the statute required of the roles and missions review and, as a result, by relying on it DOD does not have the assurance that its resulting assessment was comprehensive. We recognize that there were some benefits to this approach, as the Defense Strategic Guidance did identify primary missions of the armed services, which were then provided as the core missions required for the Quadrennial Roles and Missions Review. In addition, the Defense Strategic Guidance provided several principles to guide the force and program development necessary to achieve these missions. The Defense Strategic Guidance also became the basis for completing the most recent Quadrennial Defense Review. However, neither DOD's review for preparing the Defense Strategic Guidance nor the Quadrennial Roles and Missions Review itself clearly identified the components within the department that are responsible for providing the core competencies and capabilities needed to address each of the primary missions, or plans for addressing any capability gaps or unnecessary duplication. Further, by using such an approach for preparing the roles and missions assessment, DOD did not document and follow key principles for conducting an effective and comprehensive assessment. These key principles include (1) developing and documenting a planned approach, including the principles or assumptions that will inform the assessment, which addresses all statutory requirements; (2) involving key internal stakeholders; (3) identifying and seeking input from appropriate external stakeholders; and (4) establishing time frames with milestones for conducting the assessment and completing the report. Planned approach: DOD did not develop and document its planned approach, including the principles or assumptions used to inform and address all statutory requirements of the assessment. Specifically, it did not document in its approach how it was going to address the statutory requirements related to the identification of components responsible for providing the core competencies and capabilities, any gaps, or any unnecessary duplication. A documented, planned approach provides a framework for understanding the strategic direction and the assumptions used to identify, analyze, assess, and address the statutory requirements of the assessment. Internal stakeholder involvement: The involvement of key internal stakeholders was limited. As part of a comprehensive process, the involvement of key internal stakeholders helps ensure that the information obtained during the review is complete. According to officials from the armed services, the Joint Staff, and the Office of the Under Secretary of Defense for Policy, officials from those offices had a limited role in the development and review of the roles and missions assessment. For example, the Chairman of the Joint Chiefs of Staff did not conduct an independent assessment of roles and missions prior to the broader, department-wide assessment. According to officials from the Office of the Secretary of Defense and Joint Staff, this decision was made because the Joint Chiefs of Staff had provided substantial input to, and had endorsed, the recently completed Defense Strategic Guidance.
According to Joint Staff officials, the Chairman had agreed with the approach proposed by the Under Secretary of Defense for Policy to rely on the review that resulted in the Defense Strategic Guidance as the primary basis for the Quadrennial Roles and Missions Review. The Joint Staff reviewed the submission prepared by the Office of the Under Secretary of Defense for Policy and the Chairman then cosigned the submission with the Secretary of Defense. The armed services had limited responsibility for participating in the preparation of the roles and missions submission, and were given a limited opportunity to review and provide comment on DOD's draft submission before it was submitted to Congress. In addition, officials from the Office of the Director of Administration and Management—responsible for the biennial review of DOD agencies and field activities where additional efficiencies may be identified—told us they sought an opportunity to participate in the Quadrennial Roles and Missions Review process, but were not included in the review. According to an official from the Office of the Under Secretary of Defense for Policy, internal stakeholder involvement was incorporated from the prior, senior-level review that resulted in the Defense Strategic Guidance. However, the Office of the Director of Administration and Management was not involved in that prior review. By not considering ways to build more opportunity for stakeholder input, DOD was not well-positioned to obtain and incorporate input from across the armed services, agencies, offices, and commands within the department. Identification and involvement of appropriate external stakeholders: DOD had limited input from appropriate external stakeholders, such as Congress and federal agencies with related national security goals. Input from Congress could have provided more specific guidance and direction for what it expected of the roles and missions assessment. According to DOD officials, they briefly discussed the assessment with some congressional staff early in the process. In addition, the 2012 Quadrennial Roles and Missions Review report did provide specific information about Information Operations as well as detention and interrogation, as requested by Congress. This information was collected in addition to information leveraged from the review for the Defense Strategic Guidance. However, DOD officials told us that they would benefit from additional clarification of Congress's expectations when performing subsequent roles and missions assessments. For example, these officials noted that it would be helpful if Congress highlighted which specific areas of roles and responsibilities were of concern so that more detailed information might be provided about these areas in the next report. According to a DOD official, the White House was involved with the review for the Defense Strategic Guidance, but consultation with interagency partners was limited and occurred late in the process. While other federal agency partners were not involved with the latest Quadrennial Roles and Missions Review assessment, involving such partners—the Department of State, Department of Homeland Security, and Office of the Director of National Intelligence, for example—provides an opportunity to enlist their ideas, expertise, and assistance related to strategic objectives that are not solely the responsibility of DOD, such as homeland security and homeland defense.
By assessing the capabilities and competencies without obtaining input from appropriate external stakeholders, DOD did not gain additional support for the assessment of its roles and missions or insight into what these stakeholders expected as an outcome of the assessment. Time frames: DOD did not develop a schedule to gauge progress for conducting the assessment and completing the report. Developing a schedule with time frames is useful to keep the overall review on track to meet deadlines and to produce a final product. However, aside from tracking the final review of the report in tracking sheets used by the Office of the Under Secretary of Defense for Policy and Joint Staff, DOD did not have planning documents that outlined specific time frames with milestones associated with conducting the assessment—including time allotted for conducting the assessment itself, soliciting input from internal and external stakeholders, and drafting the report prior to circulation for final review. The lack of such a schedule may have been a contributing factor to the delay in DOD's submission. The report was required to be submitted to the congressional defense committees no later than the date on which the President's budget request for the next fiscal year was provided to Congress, which was February 13, 2012; however, the report was submitted on July 20, 2012. DOD's approach for the latest Quadrennial Roles and Missions Review also differed from the department's approach for preparing the 2009 Quadrennial Roles and Missions Review. For the 2009 effort, DOD developed and documented guidance in a "terms of reference" that included, among other things, a methodological approach, time frames with deliverables, and a list of offices within DOD responsible for conducting portions of the assessment. However, no similar document was developed for the 2012 roles and missions assessment. According to officials from the Office of the Under Secretary of Defense for Policy, the 2009 Quadrennial Roles and Missions Review occurred before DOD had to address the challenges of the current fiscal climate, and as a result there might have been more interest in conducting the review. In contrast, in preparing the 2012 roles and missions review, the officials told us that senior DOD leadership had recently considered these difficult issues in preparing the Defense Strategic Guidance, and so preferred to rely on those recent discussions rather than conduct a separate review. According to DOD officials, the primary reason that they did not perform a separate effort to examine roles and missions is that the statutory assessment and reporting requirements of the Quadrennial Roles and Missions Review are largely duplicative of the review conducted for the Defense Strategic Guidance, as well as other reviews and processes. DOD officials stated that identifying core missions as well as core competencies and capabilities is also mirrored in the requirements for the Quadrennial Defense Review. Additionally, according to DOD officials, the annual budget process is designed to identify and assign capabilities within each service's budget request, eliminate capability and capacity gaps, and eliminate unnecessary duplication among DOD components. However, by not conducting a specific, comprehensive roles and missions assessment, DOD missed an opportunity to examine these issues through a broad, department-wide approach, rather than through processes established for other purposes.
Strategic assessments of the roles, missions, and needed competencies and capabilities within DOD—whether conducted through the Quadrennial Roles and Missions Review or some other strategic-level, department-wide assessment—can be used to inform the department and strengthen congressional oversight. Given the complex security challenges and increased fiscal pressures that the department faces, such assessments are important to help the department prioritize human capital and other investment needs across the many components within the department. Without a comprehensive roles and missions assessment, documented in a sufficiently detailed report, DOD missed an opportunity to lay the groundwork for the Quadrennial Defense Review and other department-wide reviews, allocate responsibilities among the many components within DOD, prioritize key capabilities and competencies, inform the department’s investments and budget requests, identify any unnecessary duplication resulting in cost savings through increased efficiency and effectiveness, and aid congressional oversight. A comprehensive process that outlined a planned approach for addressing all statutory requirements of the roles and missions assessment; involved key internal stakeholders; offered an opportunity for key external stakeholders, such as Congress, to provide input regarding the department’s approach; and set clear time frames to gauge progress for the assessment, would have helped provide DOD with reasonable assurance that its resulting assessment of roles and missions was comprehensive and that DOD was positioned to provide such a sufficiently detailed report to Congress. To assist DOD in conducting any future comprehensive assessments of roles and missions that reflect appropriate statutory requirements, we recommend that the Secretary of Defense develop a comprehensive process that includes a planned approach, including the principles or assumptions used to inform the assessment, that addresses all statutory requirements; the involvement of key DOD stakeholders, such as the armed services, Joint Staff, and other officials within the department; an opportunity to identify and involve appropriate external stakeholders, to provide input to inform the assessment; and time frames with milestones for conducting the assessment and for reporting on its results. In written comments on a draft of this report, DOD partially concurred with the report’s recommendation to develop a comprehensive process to assist in conducting future assessments of roles and missions. DOD’s comments are summarized below and reprinted in appendix II. In its comments, DOD agreed that it is important to make strategy-driven decisions regarding its missions and associated competencies and capabilities, and to assign and clarify to its components their roles and responsibilities. DOD noted that, in the context of dynamic strategic and budgetary circumstances and increasing fiscal uncertainty, the department leveraged its strategic planning and annual budget processes, which resulted in the release of the 2012 Defense Strategic Guidance and associated mission, capability, and force structure priorities to inform and address the 2012 Quadrennial Roles and Missions Review. 
Specifically, DOD commented on the four recommended principles of a comprehensive process: Regarding a planned approach, the department stated that it determined that using other, ongoing strategic planning efforts to complete the roles and missions assessment met the review's statutory requirement. As noted in the report, there were some benefits to DOD's taking advantage of other processes. However, DOD did not document its approach for identifying the components within the department responsible for providing the core competencies and capabilities, or identify any capability gaps or unnecessary duplication. A documented, planned approach provides a framework for understanding the strategic direction and the assumptions used to identify, analyze, assess, and address the statutory requirements of the assessment. Regarding DOD stakeholders, the department stated that the processes it used did include the involvement of key DOD stakeholders, but acknowledged that formally documenting the process for obtaining stakeholder input would have clarified the role of the Chairman of the Joint Chiefs of Staff. Documenting the decision regarding the Chairman's role would have provided some clarification; however, as noted in the report, it is also important to obtain and document input from all key internal stakeholders—including the armed services, agencies, offices, and commands within the department. Regarding external stakeholders, the department stated that it did seek limited additional clarification from Congress prior to conducting the roles and missions assessment, but did not seek formal input to the assessment from other federal agencies because it relied on the external stakeholder input obtained during the development of the Defense Strategic Guidance. However, during the course of our review, a DOD official told us there was limited involvement from other federal agency partners during the review for the Defense Strategic Guidance. As noted in the report, not obtaining input from appropriate external stakeholders—such as the Department of State, Department of Homeland Security, and Office of the Director of National Intelligence—when assessing the capabilities and competencies deprived DOD of additional support for the assessment of its roles and missions. Regarding time frames and milestones, the department stated that the development of time frames just for the roles and missions assessment would have been largely duplicative of existing time frames for other efforts, including the development of the Defense Strategic Guidance and the annual budget process. However, developing a schedule with time frames would have been useful to keep the roles and missions assessment on track and aid the department in submitting its report by the statutory deadline. Developing a comprehensive process for its roles and missions assessment—a process that outlined the department's planned approach for addressing all statutory requirements, involved key internal stakeholders, offered an opportunity for Congress and other key external stakeholders to provide input, and set clear time frames to gauge progress for the assessment—would have helped provide DOD with reasonable assurance that its resulting assessment was comprehensive. The department's approach resulted in a report that was insufficiently detailed; therefore, we continue to believe the recommendation is valid to guide future roles and missions reviews.
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Policy; the Chairman of the Joint Chiefs of Staff; the Secretaries of the Army, of the Navy, and of the Air Force; the Commandant of the Marine Corps; DOD's Director of Administration and Management; and the Director of the Office of Management and Budget. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The Department of Defense (DOD) is required to regularly assess and report on its roles and missions in the Quadrennial Roles and Missions Review. The most recently completed Quadrennial Roles and Missions Review occurred amid a series of strategy and policy reviews that DOD has undertaken over the past 6 years, including the first Quadrennial Roles and Missions Review conducted in 2009. Figure 1 provides a timeline of the issuance of select DOD strategic-level reports and other documents that contain roles and missions-related information. The National Defense Strategy provides the foundation and strategic framework for much of the department's strategic guidance. Specifically, it addresses how the military services plan to fight and win America's wars and describes how DOD plans to support the objectives outlined in the President's National Security Strategy. It also provides a framework for other DOD strategic guidance related to deliberate planning, force development, and intelligence. Further, the National Defense Strategy informs the National Military Strategy and describes plans to support the objectives outlined in the President's National Security Strategy. By law, DOD is required to conduct the Quadrennial Defense Review every 4 years to determine and express the nation's defense strategy and establish a defense program for the next 20 years. The review is to comprise a comprehensive examination of the national defense strategy, force structure, force modernization plans, infrastructure, budget planning, and other elements of the defense program and policies of the United States. The Quadrennial Defense Review also includes an evaluation by the Secretary of Defense and the Chairman of the Joint Chiefs of Staff of the military's ability to successfully execute its missions. The latest Quadrennial Defense Review was issued in March 2014. In addition to these strategic reviews conducted at DOD, both the Department of Homeland Security and the Department of State released strategic reviews that provide a strategic framework to guide the activities to secure the homeland and to provide a blueprint for diplomatic and development efforts. The Ballistic Missile Defense Review, released in February 2010, is a review conducted pursuant to guidance from the President and the Secretary of Defense, while also addressing the statutory requirement to assess U.S. ballistic missile defense policy and strategy. This review evaluated the threats posed by ballistic missiles and developed a missile defense posture to address current and future challenges. Specifically, this review sought to align U.S.
missile defense posture with near-term regional missile threats and sustain the ability to defend the homeland against limited long-range missile attack. The Nuclear Posture Review is a statutorily mandated review that establishes U.S. nuclear policy, strategy, capabilities, and force posture for the next 5 to 10 years. The latest review was released by DOD in April 2010 and provided a roadmap for implementing the President's policy for reducing nuclear risks to the United States and the international community. Specifically, the 2010 report identified long-term modernization goals and requirements, including sustaining a safe, secure, and effective nuclear arsenal through the life extension of existing nuclear weapons; increasing investments to rebuild and modernize the nation's nuclear infrastructure; and strengthening the science, technology, and engineering base. The National Security Strategy describes and discusses the worldwide interests, goals, and objectives of the United States that are vital to its national security and calls for a range of actions to implement the strategy. The most recent National Security Strategy, released by the President in May 2010, addressed, among other things, how the United States would strengthen its global leadership position; disrupt, dismantle, and defeat al Qaeda; and achieve economic recovery at home and abroad. This strategy also emphasized the need for a whole-of-government approach with interagency engagement to ensure the security of the American people and the protection of American interests. The National Security Strategy is to be used to inform the National Defense Strategy and the National Military Strategy. DOD Directive 5100.01 established the functions of the department and its major components. DOD reissued the directive in 2010 after the first Quadrennial Roles and Missions Review included what DOD describes as a thorough review of the directive. DOD updated the prior directive to incorporate emerging responsibilities in areas such as special operations and cyberspace operations and reflect other changes in the department's organization over the preceding decade. The Space Posture Review is a statutorily mandated review of U.S. national security space policy and objectives, conducted jointly by the Secretary of Defense and the Director of National Intelligence. Through coordination with the Office of the Director of National Intelligence, DOD released the National Security Space Strategy in January 2011. The strategy is derived from principles and goals found in the National Space Policy and builds on the strategic approach laid out in the National Security Strategy. Specifically, the strategy's stated objectives for national space security include strengthening safety, stability, and security in space; maintaining and enhancing the strategic national security advantages afforded to the United States by space; and engaging the space industrial base that supports U.S. national security.
National Military Strategy and the Joint Strategic Capabilities Plan
The National Military Strategy and the Joint Strategic Capabilities Plan, along with other strategic documents, provide DOD with guidance and instruction on military policy, strategy, plans, forces and resource requirements and allocations essential to successful execution of the National Security Strategy and other Presidential Directives.
Specifically, the National Military Strategy, last issued in 2011, provides focus for military activities by defining a set of interrelated military objectives from which the service chiefs and combatant commanders identify desired capabilities and against which the Chairman of the Joint Chiefs of Staff assesses risk. This strategy defines the national military objectives, describes how to accomplish these objectives, and addresses the military capabilities required to execute the strategy. The Secretary of Defense's National Defense Strategy informs the National Military Strategy, which is developed by the Chairman of the Joint Chiefs of Staff. In addition, the Joint Strategic Capabilities Plan is to provide guidance to the combatant commanders, the chiefs of the military services, and other DOD agencies to accomplish tasks and missions based on current capabilities. It also is to serve as the link between other strategic guidance and the joint operation planning activities.
Biennial Review of DOD Agencies and Field Activities
By law, DOD is required to conduct a review every 2 years of the services and supplies that each DOD agency and field activity provides. The Office of the Director of Administration and Management in the Office of the Secretary of Defense has led this biennial review. The goals are to determine whether DOD needs each of these agencies and activities, or whether it is more effective, economical, or efficient for the armed services to assume the responsibilities. However, unlike the Quadrennial Roles and Missions Review, which assesses the roles of all DOD components, the biennial review focuses on DOD agencies and field activities. The Secretary of Defense recently directed that the biennial review should also include an assessment of the offices within the Office of the Secretary of Defense. DOD issued the latest report on this biennial review in April 2013. The Unified Command Plan provides guidance to combatant commanders and establishes their missions, responsibilities, force structure, geographic area of responsibility, and other attributes. Section 161 of Title 10 of the U.S. Code tasks the Chairman of the Joint Chiefs of Staff to conduct a review of the plan not less often than every 2 years and submit recommended changes to the President through the Secretary of Defense. The Unified Command Plan was last updated in 2011.
Sustaining U.S. Global Leadership: Priorities for 21st Century Defense
The Sustaining U.S. Global Leadership: Priorities for 21st Century Defense report (also referred to as the Defense Strategic Guidance), released in January 2012, was directed by the President to identify the strategic interests of the United States. The document states that it was an assessment of the defense strategy prompted by the changing geopolitical environment and fiscal pressures. The Defense Strategic Guidance was developed by senior officials from DOD—including the Office of the Secretary of Defense, the Joint Staff, the armed services, and the combatant commands—and the White House. The document outlines security challenges the United States faces and is intended to guide the development of the Joint Force through 2020 and during a period of anticipated fiscal constraints.
The Defense Strategic Guidance identified 10 primary missions of the armed forces: counter terrorism and irregular warfare; deter and defeat aggression; project power despite anti-access / area denial challenges; counter weapons of mass destruction; operate effectively in cyberspace and space; maintain a safe, secure, and effective nuclear deterrent; defend the Homeland and provide support to civil authorities; provide a stabilizing presence; conduct stability and counterinsurgency operations; and conduct humanitarian, disaster relief, and other operations. It also identified several principles to guide the force and program development necessary to achieve these missions. For example, it noted the need for the department to continue to reduce costs by reducing the rate of growth of manpower costs and identifying additional efficiencies. In March 2013, the Secretary of Defense directed the completion of a Strategic Choices Management Review. The Strategic Choices Management Review was to examine the potential effect of additional, anticipated budget reductions on the department and develop options for performing the missions in the Defense Strategic Guidance. Specifically, the review was to inform how the department would allocate resources when executing its fiscal year 2014 budget and preparing its fiscal year 2015 through fiscal year 2019 budget plans. According to the Secretary of Defense, the purpose of the Strategic Choices Management Review was to understand the effect of further budget reductions on the department and develop options to deal with these additional reductions. The Secretary of Defense further emphasized that producing a detailed budget blueprint was not the purpose of this review. In addition to the contact named above, key contributors to this report were Margaret Morgan and Kevin L. O'Neill, Assistant Directors; Tracy Abdo; Darreisha M. Bates; Elizabeth Curda; Leia Dickerson; Gina Flacco; Brent Helt; Mae Jones; Amie Lesser; Travis Masters; Judy McCloskey; Terry Richardson; and Sabrina Streagle.
DOD is one of the largest organizations in the world, with its budget representing over half of the U.S. federal government's discretionary spending. According to DOD, the global security environment presents an increasingly complex set of challenges. Congress requires DOD to assess and report on its roles and missions every 4 years. In July 2012, DOD submitted its most recent Quadrennial Roles and Missions Review report. In June 2013, GAO was mandated to review DOD's process for conducting the latest Quadrennial Roles and Missions Review. GAO evaluated the extent to which DOD developed a sufficiently detailed report and conducted a comprehensive process for assessing roles and missions. GAO compared DOD's July 2012 report with the statutory requirements for the assessment, and compared DOD's assessment process with key principles derived from a broad selection of principles GAO and other federal agencies have identified. The Department of Defense's (DOD) July 2012 submission to Congress following its most recent Quadrennial Roles and Missions Review did not provide sufficiently detailed information about most of the statutorily required elements of the assessment. Specifically, DOD's July 2012 submission included the results of a 2011 review that led to the January 2012 release of a new strategic guidance document (hereinafter referred to as the Defense Strategic Guidance) as well as the Quadrennial Roles and Missions Review report. Although DOD is not statutorily required to report on all elements of the assessment, the submission that it provided to Congress was lacking key information. A key principle for information quality indicates that information presented to Congress should be clear and sufficiently detailed; however, neither the Defense Strategic Guidance nor the Quadrennial Roles and Missions Review included sufficiently detailed information about certain key elements of the roles and missions assessment. For example, while the submitted documents identify the core missions of the armed services and provide some information on capabilities associated with these missions, neither document provides other information required by the roles and missions assessment—including identifying the DOD components responsible for providing the identified core competencies and capabilities and identifying plans for addressing any unnecessary duplication or capability gaps. DOD's process for assessing roles and missions missed key principles associated with effective and comprehensive assessments. Specifically, DOD limited its process to leveraging the prior review that resulted in the Defense Strategic Guidance; by doing so its process did not include the following: A planned approach: DOD did not develop or document a planned approach that included the principles or assumptions used to inform the assessment. Internal stakeholder involvement: DOD included limited internal stakeholder involvement. For example, DOD gave the armed services a limited opportunity to review the draft prior to its release. Identification and involvement of external stakeholders: DOD obtained limited input from relevant external stakeholders, such as Congress, on the specific guidance and direction they expected of the roles and missions assessment. Time frames: DOD did not develop a schedule to gauge progress for conducting the assessment and completing the report, which may have contributed to the report being provided to Congress over 5 months late.
DOD officials stated that the primary reason that they did not perform a separate roles and missions review is that the statutory requirements were duplicative of other reviews and processes, such as the Defense Strategic Guidance. However, by not conducting a comprehensive assessment, DOD missed an opportunity to conduct a department-wide examination of roles and missions. Instead, by relying on processes established for other purposes, DOD has limited assurance that it has fully identified all possible cost savings that can be achieved through the elimination of unnecessary duplication and that it has positioned itself to report clear and sufficient information about the statutorily required assessment to Congress. GAO recommends that, in conducting future assessments of roles and missions, DOD develop a comprehensive process that includes a planned approach, the involvement of key internal and external stakeholders, and time frames. DOD partially concurred, stating that it had leveraged other processes. GAO maintains that the roles and missions report was insufficiently detailed and continues to believe the recommendation is valid, as discussed in the report.
Congress granted OPM the authority to conduct personnel demonstration projects under the Civil Service Reform Act of 1978 to test new personnel and pay systems. A federal agency is to obtain the authority from OPM to waive existing laws and regulations in Title 5 to propose, develop, test, and evaluate alternative approaches to managing its human capital. Under the demonstration project authority, no waivers of law are to be permitted in areas of employee leave, employee benefits, equal employment opportunity, political activity, merit system principles, or prohibited personnel practices. The law also contains certain limitations and requirements, including a 5-year time limit on the duration of projects, a 5,000-employee cap on participation, a restriction to 10 concurrent demonstration projects governmentwide, union and employee consultation, a published formal project plan in the Federal Register, notification of Congress and employees of the demonstration project, and project evaluations. OPM guidance requires that agencies conduct at least three evaluations—after implementation, after at least 3 and a half years, and after the original scheduled end of the project—that are to address the following questions: Did the project accomplish the intended purpose and goals? If not, why not? Was the project implemented and operated appropriately and accurately? What were the costs relative to the benefits of the project? What was the impact on veterans and other equal employment opportunity groups? Were merit system principles adhered to and prohibited personnel practices avoided? Can the project or portions thereof be generalized to other agencies or governmentwide? The demonstration projects can link some or all of the funding sources for pay increases available under the current federal compensation system, the General Schedule (GS), to an employee's level of performance. Table 1 defines selected funding sources. High-performing organizations seek to create pay, incentive, and reward systems based on valid, reliable, and transparent performance management systems with adequate safeguards and link employee knowledge, skills, and contributions to organizational results. To that end, we found that the demonstration projects took a variety of approaches to designing and implementing their pay for performance systems to meet the unique needs of their cultures and organizational structures. Specifically, the demonstration projects took different approaches to using competencies to evaluate employee performance, translating employee performance ratings into pay increases and considering current salary in making performance-based pay decisions, managing costs of the pay for performance system, and providing information to employees about the results of performance appraisal and pay decisions. High-performing organizations use validated core competencies as a key part of evaluating individual contributions to organizational results. Competencies define the skills and supporting behaviors that individuals are expected to demonstrate and can provide a fuller picture of an individual's performance. To this end, we found that the demonstration projects took different approaches to evaluating employee performance. AcqDemo and NRL use core competencies for all positions across the organization. Other demonstration projects, such as NIST, DOC, and China Lake, use competencies based primarily on the individual employee's position. Applying competencies organizationwide.
Core competencies applied organizationwide can help reinforce employee behaviors and actions that support the organization’s mission, goals, and values and can provide a consistent message to employees about how they are expected to achieve results. AcqDemo evaluates employee performance against one set of “factors,” which are applied to all employees. “Discriminators” and “descriptors” further define the factors by career path and pay band. According to AcqDemo, taken together, the factors, discriminators, and descriptors are relevant to the success of a DOD acquisition organization. AcqDemo’s six factors are (1) problem solving, (2) teamwork/cooperation, (3) customer relations, (4) leadership/supervision, (5) communication, and (6) resource management. Discriminators further define each factor. For example, discriminators for problem solving include scope of responsibility, creativity, complexity, and independence. Descriptors identify contributions by pay band. For example, a descriptor for problem solving at one pay band level is “resolves routine problems within established guidelines,” and at a higher level, a descriptor is “anticipates problems, develops sound solutions and action plans to ensure program/mission accomplishment.” All factors must be used and cannot be supplemented. While the pay pool manager may weight the factors, according to an official, no organization within AcqDemo has weighted the factors to date. Managers are authorized to use weights sparingly because contributions in all six factors are important to ensuring AcqDemo’s overall success as well as to developing the skills of the acquisition workforce. If weights are used, they are to be applied uniformly across all positions within the pay pool. The six factors are initially weighted equally and no factor can be weighted less than one-half of its initial weight. Employees are to be advised of the weights at the beginning of the rating period. While AcqDemo applies organizationwide competencies across all employees, NRL has established “critical elements” for each career path and allows supervisors to add individual performance expectations. The critical elements are the key aspects of work that supervisors are to consider in evaluating employee performance. Each critical element has discriminators and descriptors. Specifically, for the Science and Engineering Professionals career path, one critical element is “scientific and technical problem solving.” That element’s discriminators are (1) level of oversight, (2) creativity, (3) technical communications, and (4) recognition. For recognition, the descriptors include “recognized within own organization for technical ability in assigned areas” as one level of contribution and “recognized internally and externally by peers for technical expertise” as the next level of contribution. NRL’s system allows supervisors to supplement the descriptors to further describe what is expected of employees. According to an NRL demonstration project official, this flexibility allows the supervisor to better communicate performance expectations. Further, pay pool panels may weight the critical elements, including a weight of zero. Weighted elements are to be applied consistently to groups within a career path, such as Bench Level, Supervisor, Program Manager, or Support for the Science and Engineering Professionals career path. According to an NRL official, panels commonly weight critical elements but rarely weight an element to zero. 
Further, panels use weighting most often for the Science and Engineering Professionals career path. Determining individual position-based competencies. Other demonstration projects determine competencies based primarily on the individual position. NIST and DOC identify "critical elements" tailored to each individual position. According to a DOC demonstration project official, DOC tailors critical elements to individual positions because their duties and responsibilities vary greatly within the demonstration project. Each employee's performance plan is to have a minimum of two and a maximum of six critical elements along with the major activities to accomplish the element. Supervisors are to assign a weight to each critical element on the basis of its importance, the time required to accomplish it, or both. According to NIST and DOC officials, weighting is done at the supervisory level and is not tracked at the organizational level. To evaluate the accomplishment of critical elements, DOC uses its organizationwide Benchmark Performance Standards. They range from the highest standard of performance, "objectives were achieved with maximum impact, through exemplary work that demonstrated exceptional originality, versatility, and creativity" to the lowest, "objectives and activities were not successfully completed, because of failures in quality, quantity, completeness, or timeliness of work." Supervisors can develop supplemental performance standards as needed. Similarly, each China Lake employee has a performance plan that includes criteria tailored to individual responsibilities. The criteria are to be consistent with the employee's work unit's goals and objectives and can be set in two ways, depending on the nature of the position. The "task approach" defines an individual's output. The "function approach" defines the required skills and how well they are to be performed. Employees and supervisors choose from a menu of skills, such as planning, analysis, coordination, and reporting/documentation. A China Lake official stated that some of its work units require core competencies, such as teamwork and self-development, for all employees. According to the official, while developing core competencies sends a message about what is important to the organization, tailoring individual performance plans can focus employees' attention on changing expectations. High-performing organizations seek to create pay, incentive, and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. These organizations make meaningful distinctions between acceptable and outstanding performance of individuals and appropriately reward those who perform at the highest level. Performance management systems in these leading organizations typically seek to achieve three key objectives: (1) provide candid and constructive feedback to help individual employees maximize their potential in understanding and realizing the goals and objectives of the agency, (2) provide management with the objective and fact-based information it needs to reward top performers, and (3) provide the necessary information and documentation to deal with poor performers. To this end, the demonstration projects took different approaches in translating individual employee performance ratings into permanent pay increases, one-time awards, or both in their pay for performance systems.
Some projects, such as China Lake and NAVSEA's Newport division, established predetermined pay increases, awards, or both depending on a given performance rating. Others, such as DOC and NIST, delegated the flexibility to individual pay pools to determine how ratings translate into pay increases, awards, or both. Overall, while the demonstration projects made some distinctions among employees' performance, the data and experience to date show that making such meaningful distinctions remains a work in progress. Setting predetermined pay increases and awards. China Lake's assessment categories translate directly to a predetermined range of permanent pay increases, as shown in figure 1. Supervisors are to rate employees in one of three assessment categories and recommend numerical ratings, based on employees' performance and salaries, among other factors. For employees receiving "highly successful" ratings, a Performance Review Board assigns the numerical ratings. For "less than fully successful" ratings, the first-line supervisor and a second-level reviewer assign the numerical ratings, based on a problem-solving team's findings and a personnel advisor's input. The numerical rating determines how many "increments" the employee will receive. An increment is a permanent pay increase of about 1.5 percent of an employee's base salary. China Lake made some distinctions in performance across employees' ratings, as shown in figure 2: 11.3 percent of employees received a "1," the highest numerical rating, and a total of six employees (0.2 percent) were rated "less than fully successful" and received numerical ratings of "4" or "5." At China Lake, the average pay increase rose with performance, as shown in table 2. The average permanent pay increase ranged from 1.8 to 5.3 percent. Six employees were rated as "less than fully successful" and thus were to receive no performance pay increases and half or none of the GPI. According to a China Lake official, employees rated as "less than fully successful" are referred to a problem-solving team, consisting of the supervisor, reviewer, personnel advisor, and other appropriate officials, that determines what corrective actions are necessary. Similar to China Lake, at NAVSEA's Newport division, a performance rating category translates directly to a predetermined range of permanent pay increases, one-time awards, or both, as shown in figure 3. Newport translates ratings into pay increases and awards in three steps. First, supervisors are to rate employees as "acceptable" or "unacceptable." Employees rated as unacceptable are not eligible for pay increases or awards. Employees rated as acceptable are to be further assessed on their performance relative to their salaries. Supervisors assign acceptable employees to one of three rating categories: contributors, major contributors, or exceptional contributors. Supervisors also make recommendations for the number of pay points to be awarded, from 0 to 4, depending on the rating category and the employees' salaries. Pay pool managers review and department heads finalize supervisor recommendations. A pay point equals 1.5 percent of the midpoint salary of the pay band. Pay points may be permanent pay increases or one-time awards. Newport allows for some flexibility in deciding whether employees receive permanent pay increases, one-time awards, or both.
Newport's guidelines state that those who make greater contributions should receive permanent increases to base pay, while employees whose contributions are commensurate with their salaries receive one-time awards. In addition, employees whose salaries fall below the midpoint of the pay band are more likely to receive permanent pay increases, while employees above the midpoint of the pay band are more likely to receive one-time awards. NAVSEA's Newport division made some distinctions in performance across employees' ratings. As shown in figure 4, about 80 percent of employees were rated in the top two categories (exceptional contributor and major contributor) and no employees were rated unacceptable. In addition, at NAVSEA's Newport division, the average pay increase and award amount rose with performance, as shown in table 3. The average permanent pay increase ranged from 1.6 to 2.9 percent. The average performance award ranged from $1,089 to $2,216. Delegating pay decisions to pay pools. Some demonstration projects, such as NIST and DOC, delegate the flexibility to individual pay pools to determine how ratings translate into permanent pay increases and one-time awards. For example, supervisors are to evaluate employees on a range of performance elements on a scale of 0 to 100. Employees with scores less than 40 are to be rated as "unsatisfactory" and are not eligible to receive performance pay increases, awards, the GPI, or the locality pay adjustment. Employees with scores over 40 are to be rated as "eligible;" receive the full GPI and locality pay adjustment; and be eligible for a performance pay increase, award, or both. Pay pool managers have the flexibility to determine the amount of the pay increase, award, or both for each performance score, depending on where employees fall within the pay band. Employees lower in the pay band are eligible for larger pay increases as a percentage of base pay than employees higher in the pay band, and employees whose salaries are at the top of the pay band and who therefore can no longer receive permanent salary increases may receive awards. According to our analysis, in its 2002 rating cycle, DOC made few distinctions in performance in its distribution of ratings. As shown in figure 5, 100 percent of employees scored 40 or above, over 86 percent of employees scored 80 or above, and no employees were rated as unsatisfactory. According to a DOC official, a goal of the demonstration project is to address poor performance early. An official also noted that poor performers may choose to leave the organization before they receive ratings of unsatisfactory or are placed on a performance improvement plan. Employees who are placed on a performance improvement plan and improve their performance within the specified time frame (typically less than 90 days) are determined to be eligible for the GPI and locality pay adjustment for the remainder of the year. Our analysis also shows that DOC made few distinctions in performance in its distribution of awards. As shown in table 4, 10 employees who scored from 60 to 69 received an average performance award of $925, while employees who scored from 70 to 79 received an average of $742. Our analysis suggests that DOC's policy of delegating flexibility to individual pay pools to determine performance awards could explain why, without an independent reasonableness review, some employees with lower scores receive larger awards than employees with higher scores.
According to DOC, it reviews pay pool decisions within but not across organizational units. NIST also delegates pay decisions to individual pay pools. The NIST 100-point rating system is similar to DOC’s system. Employees with scores under 40 are rated as “unsatisfactory” and do not receive the GPI, locality pay adjustment, or performance pay increases or awards. Employees with scores over 40 receive the full GPI and locality pay adjustment and are eligible to receive performance pay increases, awards, or both. Similar to DOC, in its 2002 rating cycle, NIST made few distinctions in performance in its distribution of ratings. Specifically, 99.9 percent of employees scored 40 or above, nearly 78 percent of employees scored 80 or above, and 0.1 percent, or 3 employees, were rated as unsatisfactory. Several of the demonstration projects consider an employee’s current salary when making decisions on permanent pay increases and one-time awards. By considering salary in such decisions, the projects intend to make a better match between an employee’s compensation and his or her contribution to the organization. Thus, two employees with comparable contributions could receive different pay increases and awards depending on their current salaries. At AcqDemo, supervisors recommend and pay pool managers approve employees’ “contribution scores.” Pay pools then plot contribution scores against the employees’ current salaries and a “standard pay line” to determine if employees are “appropriately compensated,” “under-compensated,” or “over-compensated,” given their contributions. Figure 6 shows how AcqDemo makes its performance pay decisions for employees who receive the same contribution scores but earn different salaries. AcqDemo has reported that it has made progress in matching employees’ compensation to their contributions to the organization. From 1999 to 2002, appropriately compensated employees increased from about 63 percent to about 72 percent, under-compensated employees decreased from about 30 percent to about 27 percent, and over-compensated employees decreased from nearly 7 percent to less than 2 percent. NRL implemented a similar system intended to better match employee contributions with salary. Data from NRL show that it has made progress in matching employees’ compensation to their contributions to the organization. From 1999 to 2002, “normally compensated” employees, or employees whose contributions match their compensation, increased from about 68 percent to about 81 percent; under-compensated employees decreased from about 25 percent to about 16 percent; and over-compensated employees decreased from about 7 percent to about 3 percent. Similar to AcqDemo’s and NRL’s approach, NAVSEA’s Dahlgren division recently redesigned its pay for performance system to better match compensation and contribution. Because Dahlgren implemented its new system in 2002, performance data were not available. Less systematically, China Lake and NAVSEA’s Newport division consider current salary in making pay and award decisions. For example, at Newport, supervisors within each pay pool are to list all employees in each pay band by salary before a rating is determined and then evaluate each employee’s contribution to the organization considering that salary. If their contributions exceed expectations, employees are considered for permanent pay increases. If contributions meet expectations, employees are considered for one-time awards.
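The sketch below illustrates the general idea behind AcqDemo’s and NRL’s comparisons of contribution and compensation: an employee’s contribution score implies an expected salary on a standard pay line, and the actual salary is judged against that expectation. The straight-line pay function and the tolerance band used here are hypothetical assumptions; the projects’ actual pay lines and decision rules are set out in their Federal Register notices and operating manuals.

# Illustrative contribution-versus-compensation comparison in the spirit of
# AcqDemo and NRL. The pay line and tolerance band are hypothetical.

def expected_salary(contribution_score: float,
                    base: float = 30_000.0, slope: float = 900.0) -> float:
    """Hypothetical standard pay line: salary expected for a given score."""
    return base + slope * contribution_score

def classify(salary: float, contribution_score: float,
             tolerance: float = 0.08) -> str:
    """Compare actual salary to the pay line, with a +/- 8% tolerance band."""
    target = expected_salary(contribution_score)
    if salary < target * (1 - tolerance):
        return "under-compensated"
    if salary > target * (1 + tolerance):
        return "over-compensated"
    return "appropriately compensated"

if __name__ == "__main__":
    # Two hypothetical employees with the same contribution score (60)
    # but different salaries receive different classifications.
    for salary in (70_000.0, 95_000.0):
        print(f"score 60, salary ${salary:,.0f}: {classify(salary, 60)}")

In this sketch, two employees with the same score of 60 land in different categories solely because of their current salaries, which is the effect the projects describe.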
OPM reports that the increased costs of implementing alternative personnel systems should be acknowledged and budgeted for up front. Based on the data the demonstration projects provided us, direct costs associated with salaries, training, and automation and data systems were the major cost drivers of implementing their pay for performance systems. The demonstration projects reported other direct costs, such as evaluations and administrative expenses. The demonstration projects used a number of approaches to manage the direct costs of implementing and maintaining their pay for performance systems. Under the current GS system, federal employees annually receive the GPI and, where appropriate, a locality pay adjustment, as well as periodically receiving WGIs. The demonstration projects use these and other funding sources under the GS to make their pay decisions, as shown in figure 7. The aggregated average salary data that some of the demonstration projects were able to provide do not allow us to determine whether total salary costs for the demonstration projects are higher or lower than their GS comparison groups. However, our analysis shows that the demonstration projects’ cumulative percentage increases in average salaries varied in contrast to their GS comparison groups. For example, as shown in table 5, after the first year of each demonstration project’s implementation, the differences in cumulative percentage increase in average salary between the demonstration project employees and their GS comparison group ranged from –2.9 to 2.7 percentage points. The demonstration projects used several approaches to manage salary costs, including (1) choosing the method of converting employees into the demonstration project, (2) considering fiscal conditions and the labor market, (3) managing movement through the pay band, and (4) providing a mix of awards and performance pay increases. Choosing the method of converting employees into the demonstration project. When the demonstration projects converted employees from the GS system to the pay for performance system, they compensated each employee for the portion of the WGI that the employee had earned either as a permanent increase to base pay or a one-time lump sum payment. Four of the six demonstration projects (China Lake, NRL, NAVSEA, and AcqDemo) gave employees permanent increases to base pay, while the remaining two demonstration projects (NIST and DOC) gave employees one-time lump sum payments. Both methods of compensating employees have benefits and drawbacks, according to demonstration project officials. Giving permanent pay increases at the point of conversion into the demonstration project recognizes that employees had already earned a portion of the WGI, but a drawback is that the salary increases are compounded over time, which increases the organization’s total salary costs. However, the officials said that giving permanent pay increases garnered employees’ support for the demonstration project because employees did not feel like they would have been better off under the GS system. Considering fiscal conditions and the labor market. In determining how much to budget for pay increases, demonstration projects considered the fiscal condition of the organization as well as the labor market. For example, China Lake, NIST, NRL, and NAVSEA receive a portion of their funding from a working capital fund and thus must take into account fiscal conditions when budgeting for pay increases and awards. 
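A recurring cost consideration in these choices is that a permanent increase to base pay is paid again every year and grows with later across-the-board raises, while a lump-sum payment or one-time award is paid once. The sketch below illustrates that arithmetic with hypothetical figures; the payout amount, time horizon, and 3 percent annual raise are assumptions for illustration only.

# Illustrative comparison of the long-run cost of a permanent pay increase
# versus an equivalent one-time payment. All figures are hypothetical.

def cumulative_cost_of_increase(amount: float, years: int,
                                annual_gpi: float = 0.03) -> float:
    """A permanent increase is paid every year and grows with later raises."""
    return sum(amount * (1 + annual_gpi) ** year for year in range(years))

def cumulative_cost_of_lump_sum(amount: float) -> float:
    """A lump-sum payment or one-time award is paid exactly once."""
    return amount

if __name__ == "__main__":
    amount, years = 1_500.0, 5  # hypothetical $1,500 payout, 5-year horizon
    permanent = cumulative_cost_of_increase(amount, years)
    one_time = cumulative_cost_of_lump_sum(amount)
    print(f"Permanent increase, {years}-year cost: ${permanent:,.0f}")
    print(f"One-time payment cost:               ${one_time:,.0f}")

This tradeoff bears directly on organizations such as China Lake, NIST, NRL, and NAVSEA, which recover pay costs through a working capital fund and must budget for increases they can afford in later years.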
These organizations rely, in part, on sales revenue rather than direct appropriations to finance their operations. The organizations establish prices for their services that allow them to recover their costs from their customers. If the organizations’ services become too expensive (i.e., salaries are too high), they become less competitive with the private sector. A demonstration project official at NAVSEA’s Newport division said that as an organization financed in part through a working capital fund, it has an advantage over organizations that rely completely on appropriations because it can justify adjusting pay increase and awards budgets when necessary to remain competitive with the private sector. Newport has had to make such adjustments. In fiscal year 2002, the performance pay increase and award pools were funded at lower levels (1.4 percent and 1.7 percent of total salaries for pay increases and awards, respectively) than in 2001 (1.7 percent and 1.8 percent, respectively) because of fiscal constraints. As agreed with one of its unions, Newport must set aside a minimum of 1.4 percent of salaries for its pay increases, which is equal to historical spending under GS for similar increases. NAVSEA’s Newport division also considers the labor market and uses regional and industry salary information compiled by the American Association of Engineering Societies when determining how much to set aside for pay increases and awards. In fiscal year 2001, Newport funded pay increases and awards at a higher level (1.7 percent and 1.8 percent of total salaries, respectively) than in fiscal year 2000 (1.4 percent and 1.6 percent, respectively) in response to higher external engineer, scientist, and information technology personnel salaries. Managing movement through the pay band. Because movement through the pay band is based on performance, demonstration project employees could progress through the pay band more quickly than under the GS. Some demonstration projects have developed ways intended to manage this progression to prevent all employees from eventually migrating to the top of the pay band and thus increasing salary costs. NIST and DOC manage movement through the pay band by recognizing performance with larger pay increases early in the pay band and career path and smaller increases higher in the pay band and career path. Both of these demonstration projects divided each pay band into five intervals. The intervals determine the maximum percentage increase employees could receive for permanent pay increases. The intervals, shown in figure 8, have helped NIST manage salary costs, according to a NIST official. Similarly, some of the demonstration projects, including China Lake and NAVSEA’s Dahlgren division, have checkpoints or “speed bumps” in their pay bands intended to manage salary costs as well as ensure that employees’ performance coincides with their salaries as they progress through the band. These projects established checkpoints designed to ensure that only the highest performers move into the upper half of the pay band. For example, when employees’ salaries at China Lake reach the midpoint of the pay band, they must receive ratings of highly successful, which are equivalent to exceeding expectations, before they can receive additional salary increases. A Performance Review Board, made up of senior management, is to review all highly successful ratings. Providing a mix of awards and pay increases. 
Some of the demonstration projects intended to manage costs by providing a mix of one-time awards and permanent pay increases. Rewarding an employee’s performance with an award instead of an equivalent increase to base pay can reduce salary costs in the long run because the agency only has to pay the amount of the award one time, rather than annually. For example, at NAVSEA’s Newport division, as employees move higher into the pay band, they are more likely to receive awards than permanent increases to base pay. According to a Newport official, expectations increase along with salaries and thus it is more likely that their contributions would meet, rather than exceed, expectations. To manage costs, China Lake allows pay pools to transfer some of their budgets for permanent pay increases to their budgets for awards. A China Lake official said that because China Lake receives a portion of its funding from a working capital fund, it is not only important to give permanent salary increases to high-performing employees, but also to give increases China Lake can afford the next year. China Lake does not track how much funding is transferred from performance pay increase budgets to awards budgets. We have reported that agencies will need to invest resources, including time and money, to ensure that employees have the information, skills, and competencies they need to work effectively in a rapidly changing and complex environment. This includes investments in training and developing employees as part of an agency’s overall effort to achieve cost-effective and timely results. Agency managers and supervisors are often aware that investments in training and development initiatives can be quite large. However, across the federal government, evaluation efforts have often been hindered by the lack of accurate and reliable data to document the total costs of training efforts. Each of the demonstration projects trained employees on the performance management system prior to implementation to make employees aware of the new approach, as well as periodically after implementation to refresh employee familiarity with the system. The training was designed to help employees understand competencies and performance standards; develop performance plans; write self-appraisals; become familiar with how performance is evaluated and how pay increases and awards decisions are made; and know the roles and responsibilities of managers, supervisors, and employees in the appraisal and payout processes. Generally, demonstration projects told us they incurred direct and indirect costs associated with training. Direct training costs that the demonstration projects reported included costs for contractors, materials, and travel related to developing and delivering training to employees and managers. As shown in table 6, total direct costs that the demonstration projects reported for training through the first 5 years of the projects’ implementation range from an estimated $33,000 at NAVSEA’s Dahlgren division to more than $1 million at China Lake. (NIST reported no direct costs associated with training.) Training costs, as indicated by the cost per employee, were generally higher in the year prior to implementation, except for AcqDemo’s, which increased over time.
While the demonstration projects did not report indirect costs associated with training employees on the demonstration project, officials stated that indirect costs, such as employee time spent developing, delivering, or attending training, could nonetheless be significant. Likewise, the time spent on the “learning curve” until employees are proficient with the new system could also be significant. For example, although NIST did not capture its indirect training costs, agency officials told us that prior to implementation, each NIST employee was in training for 1 day. Since its implementation, NIST has offered optional one-half day training three times a year for all employees. AcqDemo offered 8 hours of training for employees prior to implementation and a minimum of 4 hours of training after implementation. All potential new participants also received 8 hours of training at their site prior to implementation. Supervisors and human resources professionals at AcqDemo were offered an additional 8 hours of training each year after the demonstration project was implemented. According to a DOC official, prior to conversion to the demonstration project, DOC provided a detailed briefing to approximately 400 employees to increase employee understanding of the project. In addition, employees could schedule one-on-one counseling sessions with human resources staff to discuss individual issues and concerns. Some of the demonstration projects, including China Lake, DOC, and NAVSEA’s Dahlgren and Newport divisions, managed training costs by relying on current employees to train other employees on the demonstration project. According to demonstration project officials, while there are still costs associated with developing and delivering in-house training, total training costs are generally reduced by using employees rather than hiring contractors to train employees. For example, China Lake took a “train the trainer” approach by training a group of employees on the new flexibilities in the demonstration project and having those employees train other employees. According to a demonstration project official, an added benefit of using employees to train other employees is that if the person leading the training is respected and known, then the employees are more likely to support the demonstration project. The official said that one drawback is that not all employees are good teachers, so their skills should be carefully considered. AcqDemo used a combination of contractors and in-house training to implement its training strategy. According to an AcqDemo official, the relatively higher costs per demonstration project employee in years 4 and 5 are a result of AcqDemo’s recognition that more in-depth and varied training was needed for current AcqDemo employees to refresh their proficiency in the system; for new participants to familiarize them with appraisal and payout processes; as well as for senior management, pay pool managers and members, and human resources personnel to give them greater detail on the process.
To manage data system costs, some demonstration projects modified existing data systems rather than designing completely new systems to meet their information needs. For example, NAVSEA’s divisions worked together to modify DOD’s existing Defense Civilian Personnel Data System to meet their needs for a revised performance appraisal system. Similarly, DOC imported the performance appraisal system developed by NIST and converted the payout system to a Web-based system. While NIST reported that it incurred no direct costs for automation and data systems, officials told us it used in-house employees, NIST’s Information Technology Laboratory staff, to develop a data system to automate performance ratings, scores, increases, and awards. NRL used a combination of in-house employees and contractors to automate its performance management system. While reported automation and data systems’ costs were higher for NRL than for most other demonstration projects, NRL reports that its automated system has brought about savings each year of an estimated 10,500 hours of work, $266,000, and 154 reams of paper since the demonstration project was implemented in 1999. We have observed that a performance management system should have adequate safeguards to ensure fairness and guard against abuse. One such safeguard is to ensure reasonable transparency and appropriate accountability mechanisms in connection with the results of the performance management process. To this end, NIST, NAVSEA’s Newport Division, NRL, and AcqDemo publish information for employees on internal Web sites about the results of performance appraisal and pay decisions, such as the average performance rating, the average pay increase, and the average award for the organization and for each individual unit. Other demonstration projects publish no information on the results of the performance cycle. NAVSEA’s Newport division publishes results of its annual performance cycle. Newport aggregates the data so that no individual employee’s rating or payout can be determined to protect confidentiality. Employees can compare their performance rating category against others in the same unit, other units, and the entire division, as shown in figure 9. Until recently, only if requested by an employee would NIST provide information such as the average rating, pay increase, and award amount for the employee’s pay pool. To be more open, transparent, and responsive to employees, NIST officials told us that in 2003, for the first time, NIST began to publish the results of the performance cycle on its internal Web site. NIST published averages of the performance rating scores, as shown in figure 10, as well as the average recommended pay increase amounts and the average awards by career path, for the entire organization, and for each organizational unit. According to one NIST official, the first day the results were published on the internal Web site, the Web site was visited more than 1,600 times. Publishing the results of the performance management process can provide employees with the information they need to better understand the performance management system. However, according to an official, DOC does not currently publish performance rating and payout results even though DOC’s third year evaluation found that demonstration project participants continued to raise concerns that indicated their lack of understanding about the performance appraisal process. 
According to the evaluation, focus group and survey results indicated the need for increased understanding on topics such as how pay pools work, how salaries are determined, and how employees are rated. Employees were also interested in knowing more about the results of the performance appraisal process. One union representative told us that a way to improve the demonstration project would be to publish information. In past years, according to employee representatives, some employees and union representatives at DOC have used the Freedom of Information Act to request and obtain the information. According to a DOC official, DOC plans to discuss the publication of average scores by each major unit and look for options to increase employee understanding of the performance management system at upcoming Project Team and Departmental Personnel Management Board meetings. Linking pay to performance is a key practice for effective performance management. As Congress, the administration, and federal agencies continue to rethink the current approach to federal pay to place greater emphasis on performance, the experiences of personnel demonstration projects can provide insights into how some organizations within the federal government are implementing pay for performance. The demonstration projects took different approaches to using competencies to evaluate employee performance, translating performance ratings into pay increases and awards, considering employees’ current salaries in making performance pay decisions, managing costs of the pay for performance systems, and providing information to employees about the results of performance appraisal and pay decisions. These different approaches were intended to enhance the success of the pay for performance systems because the systems were designed and implemented to meet the demonstration projects’ unique cultural and organizational needs. We strongly support the need to expand pay for performance in the federal government. How it is done, when it is done, and the basis on which it is done can make all the difference in whether such efforts are successful. High-performing organizations continuously review and revise their performance management systems to achieve results, accelerate change, and facilitate two-way communication throughout the year so that discussions about individual and organizational performance are integrated and ongoing. To this end, these demonstration projects show an understanding that how to better link pay to performance is very much a work in progress at the federal level. Additional work is needed to strengthen efforts to ensure that performance management systems are tools to help the demonstration projects manage on a day-to-day basis. In particular, there are opportunities to use organizationwide competencies to evaluate employee performance that reinforce behaviors and actions that support the organization's mission, translate employee performance so that managers can make meaningful distinctions between top and poor performers with objective and fact-based information, and provide information to employees about the results of the performance appraisals and pay decisions to ensure that reasonable transparency and appropriate accountability mechanisms are in place. We provided drafts of this report to the Secretaries of Defense and Commerce for their review and comment. DOD’s Principal Deputy, Under Secretary of Defense for Personnel and Readiness, provided written comments, which are presented in appendix III.
DOD concurred with our report and stated that it is a useful summary of the various approaches that the demonstration projects undertook to implement their pay for performance systems and that their experiences provide valuable insight into federal pay for performance models. DOD also noted that the NAVSEA demonstration project training and automation cost data are estimated rather than actual costs. We made the appropriate notation. While DOC did not submit written comments, DOC’s Classification, Pay, and HR Demonstration Program Manager provided minor technical clarifications and updated information. We made those changes where appropriate. We provided a draft of the report to the Director of OPM for her information. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will provide copies of this report to other interested congressional parties, the Secretaries of Defense and Commerce, and the Director of OPM. We will also make this report available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me or Lisa Shames at (202) 512-6806. Other contributors are acknowledged in appendix IV. To meet our objective to identify the approaches that selected personnel demonstration projects have taken to implement their pay for performance systems, we chose the following demonstration projects: the Navy Demonstration Project at China Lake (China Lake), the National Institute of Standards and Technology (NIST), the Department of Commerce (DOC), the Naval Research Laboratory (NRL), the Naval Sea Systems Command Warfare Centers (NAVSEA) at Dahlgren and Newport, and the Civilian Acquisition Workforce Personnel Demonstration Project (AcqDemo). We selected these demonstration projects based on our review of the projects and in consultation with the Office of Personnel Management (OPM). Factors we considered in selecting these demonstration projects included the type of pay for performance system, type of agency (defense or civilian), status of the project (ongoing, permanent, or complete), date the project was implemented, and number and type of employees covered (including employees covered by a union). To identify the different approaches that the demonstration projects took in implementing their pay for performance systems, we analyzed Federal Register notices outlining the major features and regulations for each demonstration project, operating manuals, annual and summative evaluations, employee attitude survey results, project briefings, training materials, rating and payout data, cost data, rating distribution data from OPM’s Central Personnel Data File (CPDF), and other relevant documentation. In addition, we spoke with cognizant officials from OPM; demonstration project managers, human resource officials, and participating supervisors and employees; and union and other employee representatives. We prepared a data collection instrument to obtain actual and estimated cost data from the six demonstration projects. We tested the instrument with a demonstration project official to ensure that the instrument was clear and comprehensive. After revising the instrument based on the official’s recommendations, we administered the instrument via e-mail and followed up with officials via telephone, as necessary.
Officials from the six demonstration projects provided actual cost data where available and estimated data when actual data were not available. Cost data reported are actual unless otherwise indicated. We adjusted cost data for inflation using the Consumer Price Index, expressed in 2002 dollars. We provide average salary data as reported by the demonstration projects and OPM, without verification by GAO. The aggregated average salary data do not allow us to determine whether total salary costs for the demonstration projects are higher or lower than their General Schedule (GS) comparison groups. We did not independently evaluate the effectiveness of the demonstration projects or independently validate the data provided by the agencies or published in the evaluations. We assessed the reliability of cost, salary, rating, and performance pay distribution data provided by the demonstration projects by (1) performing manual and electronic testing of required data elements, (2) reviewing existing information about the data, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report, with the exception of the DOC salary data, which we do not present. Based on our review of the DOC salary data, we determined that the data were not adequate for use in our comparative analyses of salary growth. An evaluation of the DOC demonstration project reported that data were missing in critical fields, such as pay and performance scores. We did not independently verify the CPDF data for September 30, 2002. However, in a 1998 report (OPM's Central Personnel Data File: Data Appear Sufficiently Reliable to Meet Most Customer Needs, GAO/GGD-98-199, Sept. 30, 1998), we reported that governmentwide data from the CPDF for key variables, such as GS-grade, agency, and career status, were 97 percent or more accurate. We did not, however, verify the accuracy of employee ratings. We performed our work in the Washington, D.C., metropolitan area from December 2002 through August 2003 in accordance with generally accepted government auditing standards. The Navy Demonstration Project was to develop an integrated approach to pay and performance appraisal; allow greater managerial control over personnel functions; and expand the opportunities available to employees through a more responsive and flexible personnel system. Competencies: Competencies are tailored to an individual’s position. The employees and their supervisors are to develop performance plans, which identify the employees’ responsibilities and expected results. In addition, all supervisors are to include certain management competencies from a menu of managerial factors that best define their responsibilities, such as developing objectives, organizing work, and selecting and developing people. Feedback: Supervisors are to conduct two progress reviews of employees’ performance, set at 5 and 9 months in the performance cycle. Self-assessment: Employees are strongly encouraged to list accomplishments for their supervisors’ information when determining the performance rating. Levels of performance rating: The levels are highly successful (rating levels 1 or 2), fully successful (rating level 3), or less than fully successful (rating levels 4 or 5). Second-level review: Second-level supervisors are to review all assessments.
In addition, an overall assessment of highly successful is to be sent to the appropriate department’s Performance Review Board for the assignment of an official rating of “1” or “2.” The supervisor and reviewer are to assign a “4” or “5” rating based on a problem-solving team’s findings and a personnel advisor’s input. Grievance process: Generally, employees may request reconsideration of their ratings in writing to the third-level supervisor and indicate why a higher rating is warranted and what rating is desired. The third-level supervisor can either grant the request or request that a recommending official outside of the immediate organization or chain of authority be appointed. The employee is to receive a final decision in writing within 21 calendar days. Reduction in force. To allow for increased retention of high-performing employees at all levels by ranking employees based on performance for retention standings. Salary flexibility. To set entry-level salaries to take into account market conditions. A demonstration project evaluation reported the following effects. Employees viewed performance improvements as within their control and reported increased recognition of individual performance. The perception of a pay-performance link was significantly strengthened under the demonstration pay for performance system, but not in the comparison group. Pay satisfaction increased slightly at the demonstration sites and declined at the control laboratories. Employees and supervisors cited improved communication, a more objective focus, and clearer performance expectations as major system benefits. Employees and supervisors perceived their performance appraisal system to be more flexible than the comparison group’s, to focus more on actual work requirements, and thus to be more responsive to laboratory needs. Employees at the demonstration project reported having more input into the development of performance plans than employees in the comparison group. http://www.nawcwpns.navy.mil/~hrd/demo.htm (Last accessed on Nov. 7, 2003) http://www.opm.gov/demos/main.asp (Last accessed on Nov. 7, 2003) The demonstration project at NIST, formerly known as the National Bureau of Standards, was to improve hiring and allow NIST to compete more effectively for high-quality candidates; motivate and retain staff; strengthen the manager’s role in personnel management; and increase the efficiency of personnel systems. Competencies: Competencies, called “critical elements,” are based on the individual position. Employee performance plans are to have a minimum of two and a maximum of six critical elements, which the supervisor weights, based on the importance of the critical element, the time required to accomplish the critical element, or both. Managers’ and supervisors’ performance plans are to include a critical element on diversity, weighted at least 15 points. Feedback: Supervisors are to conduct midyear reviews of all employees to discuss accomplishments or deficiencies and modify the initial performance plans, if necessary. Self-assessment: Employees are to submit lists of accomplishments for their supervisors’ information when determining the performance ratings. Levels of performance rating: The levels are “eligible” or “unsatisfactory.” On a scale of 0 to 100, employees who receive scores over 40 are rated eligible and those with scores below 40 unsatisfactory. Second-level review: Pay pool managers are to review recommended scores from supervisors and select a payout for each employee.
Pay pool managers are to present the decisions to the next higher official for review if the pay pool manager is also a supervisor. The organizational unit director is to approve awards and review all other decisions. Grievance procedure: Employees may grieve their performance ratings, scores, and pay increases by following DOC’s Administrative Grievance Procedure or appropriate negotiated grievance procedures. Reduction in force. To credit an employee with an overall performance score in the top 10 percent of scores within a peer group with 10 additional years of service for retention purposes. Supervisory differential. To establish supervisory intervals within a pay band that allow for a maximum rate up to 6 percent higher than the maximum rate of the nonsupervisory intervals within the pay band. Hiring flexibility. To provide flexibility in setting initial salaries within pay bands for new appointees, particularly for hard-to-fill positions in the Scientific and Engineering career path. Extended probation. To require employees in the Scientific and Engineering career path to serve a probationary period of 1 to 3 years. A demonstration project evaluation reported the following effects. Recruitment bonuses were used sparingly but successfully to attract candidates who might not have accepted federal jobs otherwise. NIST has become more competitive with the private sector and employees are less likely to leave for reasons of pay. NIST was able to provide significant performance-based awards, some with merit increases as high as 20 percent. NIST succeeded in retaining more of its high performers than the comparison group. Managers reported significantly increased authority over hiring and pay decisions. Managers reported that they felt significantly less restricted by personnel rules and regulations than other federal managers. http://www.opm.gov/demos/main.asp (Last accessed on Nov. 7, 2003) The DOC demonstration project was to test whether the interventions of the NIST demonstration project could be successful in environments with different missions and different organizational hierarchies. Competencies: Competencies, called “critical elements,” are tailored to each individual position. Performance plans are to have a minimum of two and a maximum of six critical elements. The supervisor is to weight each critical element, based on the importance of the element, the time required to accomplish it, or both, so that the total weight of all critical elements is 100 points. Organizationwide benchmark performance standards are to define the range of performance, and the supervisor may add supplemental performance standards to a performance plan. Performance plans for managers and supervisors are to include critical elements such as recommending or making personnel decisions; developing and appraising subordinates; fulfilling diversity, equal opportunity, and affirmative action responsibilities; and program and managerial responsibilities. Feedback: Supervisors are to conduct midyear reviews of all employees to discuss accomplishments or deficiencies and modify the initial performance plans, if necessary. Self-assessment: Employees are to submit lists of accomplishments for their supervisors’ information when determining the performance ratings. Levels of performance rating: The levels are “eligible” or “unsatisfactory.” On a scale of 0 to 100, employees who receive scores over 40 are rated eligible and those with scores below 40 unsatisfactory. 
Second-level review: The pay pool manager is to review recommended scores from subordinate supervisors and select a payout for each employee. The pay pool manager is to present the decisions to the next higher official for review if the pay pool manager is also a supervisor. Grievance procedure: Employees may request reconsideration of performance decisions, excluding awards, by the pay pool manager through DOC’s Administrative Grievance Procedure or appropriate negotiated grievance procedures. Reduction in force. To credit employees with performance scores in the top 30 percent of a career path in a pay pool with 10 additional years of service for retention purposes. Other employees rated “eligible” receive 5 additional years of service for retention credit. Supervisory performance pay. To offer employees who spend at least 25 percent of their time performing supervisory duties pay up to 6 percent higher than the regular pay band. Probationary period. To require a 3-year probationary period for newly hired science and engineering employees performing research and development duties. A demonstration project evaluation reported the following effects. The pay for performance system continues to exhibit a positive link between pay and performance. For example, in year 4 of the demonstration project, employees with higher performance scores were more likely to receive pay increases and on average received larger pay increases than employees with lower scores. Some of the recruitment and staffing interventions have been successful. For example, supervisors are taking advantage of their ability to offer more flexible starting salaries. Additionally, the demonstration project has expedited the classification process. DOC’s evaluator recommended that DOC should more fully implement the recruitment and staffing interventions. The 3-year probationary period for scientists and engineers continues to be used, but assessing its utility remains difficult. On the other hand, some retention interventions receive little use or have not appeared to affect retention. For example, the supervisor performance pay intervention is not affecting supervisor retention. http://ohrm.doc.gov/employees/demo_project.htm (Last accessed Nov. 7, 2003) http://www.opm.gov/demos/main.asp (Last accessed Nov. 7, 2003) The NRL demonstration project was to provide increased authority to manage human resources, enable NRL to hire the best qualified employees, compensate employees equitably at a rate that is more competitive with the labor market, and provide a direct link between levels of individual contribution and the compensation received. Competencies: Each career path has two to three “critical elements.” Each critical element has generic descriptors that explain the type of work, degree of responsibility, and scope of contributions. Pay pool managers may weight critical elements and may establish supplemental criteria. Feedback: Supervisors and employees are to, on an ongoing basis, hold discussions to specify work assignments and performance expectations. The supervisor or the employee can request a formal review during the appraisal process. Self-assessment: Employees are to submit yearly accomplishment reports for the supervisors’ information when determining the performance appraisals. Levels of performance rating: The levels are acceptable or unacceptable. 
Employees who are rated acceptable are then determined to be “over-compensated,” “under-compensated,” or within the “normal pay range,” based on their contribution scores and salaries. Second-level review: The pay pool panel and pay pool manager are to compare element scores for all of the employees in the pay pool; make adjustments, as necessary; and determine the final contribution scores and pay adjustments for the employees. Grievance procedure: Employees can grieve their appraisals through a two-step process. Employees are to first grieve their scores in writing, and the pay pool panel reviews the grievances and makes recommendations to the pay pool manager, who issues decisions in writing. If employees are not satisfied with the pay pool manager’s decisions, they can then file formal grievances according to NRL’s formal grievance procedure. Reduction in force. To credit an employee’s basic Federal Service Computation Date with up to 20 years based on the results of the appraisal process. Hiring flexibility. To provide opportunities to consider a broader range of candidates and flexibility in filling positions. Extended probationary period. To extend the probationary period to 3 years for certain occupations. A demonstration project evaluation reported the following effects. From 1996 to 2001: Managers’ satisfaction with authority to determine employees’ pay and job classification increased from 10 percent of managers to 33 percent. Employees’ satisfaction with opportunities for advancement increased from 26 percent to 41 percent. The perceived link between pay and performance is stronger under the demonstration project and increased from 41 percent to 61 percent. On the other hand, the percentage of employees who agreed that other employers in the area paid more than the government for the kind of work that they do increased from 67 to 76 percent. http://hroffice.nrl.navy.mil/personnel_demo/index.htm (Last accessed on Nov. 7, 2003) http://www.opm.gov/demos/main.asp (Last accessed on Nov. 7, 2003) The NAVSEA demonstration project was to develop employees to meet the changing needs of the organization; help employees achieve their career goals; improve performance in current positions; retain high performers; and improve communication with customers, colleagues, managers, and employees. Competencies: Each division may implement regulations regarding the competencies and criteria by which employees are rated. NAVSEA’s Dahlgren division uses three competencies for all employees, and the Newport division uses eight competencies. Feedback: Each division may implement regulations regarding the timing and documentation of midyear feedback. Dahlgren requires at least one documented feedback session at midyear. Beginning in fiscal year 2004, Newport requires a documented midyear feedback session. Self-assessment: Each division has the flexibility to determine whether and how employees document their accomplishments. Dahlgren requires employees to provide summaries of their contributions for their supervisors’ information. Newport encourages employees to provide self- assessments. Levels of performance rating: All of the divisions use the ratings “acceptable” and “unacceptable.” Second-level review: Divisions are to design the performance appraisal and payout process. Supervisors at Dahlgren’s division and department levels review ratings and payouts to ensure that the competencies are applied uniformly and salary adjustments are distributed equitably. 
At Newport, second-level supervisors review recommendations by direct supervisors, make changes to achieve balance and equity within the organization, then submit the recommendations to pay pool managers, who are to go through the same process and forward the recommendations to the department head for final approval. Grievance procedure: Divisions are to design their grievance procedures. Dahlgren and Newport have informal and formal reconsideration processes. In Dahlgren’s informal process, the employee and supervisor are to discuss the employee’s concern and reach a mutual understanding, and the pay pool manager is to approve any changes. If the employee is not satisfied with the result of the informal process, the employee is to submit a formal request to the pay pool manager, who is to make the final decision. In Newport’s informal process, the employee is to submit a written request to the pay pool manager, who may revise the rating and payout decision or confirm it. If the employee is not satisfied with the result of the informal process, the employee may formally appeal to the department head, who is to render a decision. Advanced in-hire rate. To set, upon initial appointment, an individual’s pay anywhere within the band level consistent with the qualifications of the individual and requirements of the position. Scholastic achievement appointments. To employ an alternative examining process that provides NAVSEA the authority to appoint undergraduates and graduates to professional positions. A demonstration project evaluation reported the following effects. From 1996 to 2001: The percentage of people who agreed that their managers promote effective communication among different work groups increased from 31 to 43 percent. On the other hand, NAVSEA employees’ response to the statement “High performers tend to stay with this organization” stayed constant at about 30 percent during this time. Additionally, the percentage of employees who said that they have all of the skills needed to do their jobs remained consistent at 59 and 62 percent, respectively. http://www.nswc.navy.mil/wwwDL/XD/HR/DEMO/main.html (Last accessed on Nov. 7, 2003) http://www.opm.gov/demos/main.asp (Last accessed on Nov. 7, 2003) The AcqDemo demonstration project was to attract, motivate, and retain a high-quality acquisition workforce; achieve a flexible and responsive personnel system; link pay to employee contributions to mission accomplishment; and gain greater managerial control and authority over personnel processes. Competencies: Six core contribution “factors,” as well as “discriminators” and “descriptors,” are used to evaluate every employee. Feedback: AcqDemo requires at least one formal feedback session annually and encourages informal and frequent communication between supervisors and employees, including discussion of any inadequate contribution. Each service, agency, or organization may require one or more additional formal or informal feedback sessions. Self-assessment: Employees can provide a list of contributions for each factor. Levels of performance rating: The levels are “appropriately compensated,” “over-compensated,” and “under-compensated.” Second-level review: The supervisors and the pay pool manager are to ensure consistency and equity across ratings. The pay pool manager is to approve the employee’s overall contribution score, which is calculated based on the employee’s contribution ratings. Grievance procedure: Employees may grieve their ratings and actions affecting the general pay increase or performance pay increases.
An employee covered by a negotiated grievance procedure is to use that procedure to grieve his or her score. An employee not under a negotiated grievance procedure is to submit the grievance first to the rating official, who will submit a recommendation to the pay pool panel. The pay pool panel may accept the rating official’s recommendation or reach an independent decision. The pay pool panel’s decision is final unless the employee requests reconsideration by the next higher official to the pay pool manager. That official would then render the final decision on the grievance. Voluntary emeritus program. To provide a continuing source of corporate knowledge and valuable on-the-job training or mentoring by allowing retired employees to voluntarily return without compensation and without jeopardizing retirement pay. Extended probationary period. To provide managers a length of time equal to education and training assignments outside of the supervisors’ review to properly assess the contribution and conduct of new hires in the acquisition environment. Scholastic achievement appointment. To provide the authority to appoint degreed candidates meeting desired scholastic criteria to positions with positive education requirements. Flexible appointment authority. To allow an agency to make a modified term appointment to last from 1 to 5 years when the need for an employee’s services is not permanent. A demonstration project evaluation reported the following effects. Attrition rates for over-compensated employees increased from 24.1 in 2000 to 31.6 percent in 2002. Attrition rates for appropriately compensated employees increased from 11.5 in 2000 to 14.1 percent in 2002. Attrition rates for under-compensated employees decreased from 9.0 in 2000 to 8.5 in 2001 and then increased to 10.2 percent in 2002. Increased pay-setting flexibility has allowed organizations in AcqDemo to offer more competitive salaries, which has improved recruiting. Employees’ perception of the link between pay and contribution increased, from 20 percent reporting that pay raises depend on their contribution to the organization’s mission in 1998 to 59 percent in 2003. http://www.acq.osd.mil/acqdemo/ (Last accessed on Nov. 7, 2003) http://www.opm.gov/demos/index.asp (Last accessed on Nov. 7, 2003) In addition to the individuals named above, Michelle Bracy, Ron La Due Lake, Hilary Murrish, Adam Shapiro, and Marti Tracy made key contributions to this report. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. 
Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
There is a growing understanding that the federal government needs to fundamentally rethink its current approach to pay and to better link pay to individual and organizational performance. Federal agencies have been experimenting with pay for performance through the Office of Personnel Management's (OPM) personnel demonstration projects. GAO identified the approaches that selected personnel demonstration projects have taken to implement their pay for performance systems. These projects include: the Navy Demonstration Project at China Lake (China Lake), the National Institute of Standards and Technology (NIST), the Department of Commerce (DOC), the Naval Research Laboratory (NRL), the Naval Sea Systems Command Warfare Centers (NAVSEA) at Dahlgren and Newport, and the Civilian Acquisition Workforce Personnel Demonstration Project (AcqDemo). We selected these demonstration projects based on factors such as status of the project and makeup of employee groups covered. We provided drafts of this report to officials in the Department of Defense (DOD) and DOC for their review and comment. DOD provided written comments concurring with our report. DOC provided minor technical clarifications and updated information. We provided a draft of the report to the Director of OPM for her information. The demonstration projects took a variety of approaches to designing and implementing their pay for performance systems to meet the unique needs of their cultures and organizational structures. GAO strongly supports the need to expand pay for performance in the federal government. How it is done, when it is done, and the basis on which it is done can make all the difference in whether such efforts are successful. High-performing organizations continuously review and revise their performance management systems. These demonstration projects show an understanding that how to better link pay to performance is very much a work in progress at the federal level. Additional work is needed to strengthen efforts to ensure that performance management systems are tools to help them manage on a day-to-day basis. In particular, there are opportunities to use organizationwide competencies to evaluate employee performance that reinforce behaviors and actions that support the organization's mission, translate employee performance so that managers make meaningful distinctions between top and poor performers with objective and fact-based information, and provide information to employees about the results of the performance appraisals and pay decisions to ensure reasonable transparency and appropriate accountability mechanisms are in place.
The Department of Education’s basic functions are to provide financial resources, primarily through student loans and grants for higher education; provide research and information on best practices in education; and ensure that publicly funded schools and education programs observe civil rights laws. It administers a variety of grant and contract programs that provide aid for disadvantaged children; aid for children and adults with disabilities; student loans and grants for higher education; vocational and adult education; and research and evaluation, as well as a variety of smaller programs, such as the gifted and talented education program. Its largest elementary and secondary programs include title I of the Elementary and Secondary Education Act, which helps support the education of over 6 million disadvantaged children in more than 50,000 schools nationwide—about one-half of the nation’s public schools—and special education programs that assist over 5 million children with disabilities from birth through age 21 in meeting their educational and developmental needs. For fiscal year 1997, the Department has an estimated budget of $29.4 billion and is authorized 4,613 full-time-equivalent (FTE) staff-years. The administration’s fiscal year 1998 budget request is for $39.5 billion and 4,560 FTE staff. This represents an increase of about $10 billion, $5 billion of which the administration wants to use to assist states in acquiring funds for school construction. The Department’s spending for education leverages well beyond its budget authority. For example, the fiscal year 1998 budget request of $12.7 billion for postsecondary student aid programs is expected to generate $47.2 billion for more than 8 million students. And $4 billion in federal appropriations for special education is expected to leverage about $29.5 billion in state and local funds. Through its student aid programs, the Department has enabled millions of students to attend postsecondary educational institutions; however, the current economic conditions make continuing to ensure such access difficult. Rising tuition, coupled with the shift to providing loans instead of grants, could result in fewer low-income and minority students’ staying in college. At the same time the Department is concerned with access, its ongoing challenge is to improve its processes to ensure financial accountability in its postsecondary student aid programs, particularly FFELP, FDLP, and the Pell Grant Program. In 1990, we designated the student financial aid program one of 17 federal high-risk programs likely to cause the loss of substantial amounts of federal money because of their vulnerabilities to waste, fraud, abuse, and mismanagement. Although the Department has acted to correct many problems and improve program controls, significant vulnerabilities remain, and we have included the student financial aid program in our 1997 list of 25 high-risk programs. Level of education is closely linked to unemployment. In addition, level of education is a strong determinant of wage earnings. For example, college graduates earn much more than those with only a high school education, and the differential has been increasing. According to Department data, in 1985 the median annual income of full-time male workers 25 years and over was $41,892 for college graduates and $26,609 for those with high school diplomas only, a difference of $15,283. By 1994, the difference between these two groups had grown to $21,191.
Low-income and minority students have traditionally been underrepresented among college students, and access is becoming more and more problematic as the cost of attending college increases. For example, as we reported to the Congress in August 1996, a public college education has become less affordable in the last 15 years as tuition has risen nearly three times as fast as household income. The average tuition for full-time in-state students increased from $804 per year to $2,689, or 234 percent, and median household income, from $17,710 to $32,264, or 82 percent. Students and their families have responded to this “affordability gap” by drawing more heavily on their own financial resources and increasing their borrowing. For example, the annual average student loan at 4-year public schools rose from $518 per full-time student in fiscal year 1980 to $2,417 in fiscal year 1995, an increase of 367 percent, which is almost five times the 74-percent increase in the cost of living—as measured by the consumer price index—for the same period. If this trend continues, rising tuition levels may deter many students from attending college. The Department’s primary mechanism for ensuring access to postsecondary institutions is the federal student financial aid programs—principally FFELP, FDLP, and the Pell Grant Program. While federal student financial aid has been substantial in the past, recent trends may inhibit broader college access. A growing proportion of federal aid has taken the form of loans rather than grants since the 1970s. For example, from 1977 to 1980, grant aid exceeded loan aid; since 1985, however, loan aid has been about twice the amount of grant aid. With federal grant aid declining in relative terms, students and their families have had to shoulder a greater share of college expenses. Many policymakers have expressed concern that this trend in college costs and in financial aid patterns, which increases students’ net costs for higher education, has diminished college access—both entry and attendance through graduation—for low-income students. Our work supports this belief with respect to attendance through graduation. We concluded from our work that financial aid packages with relatively high grant levels may improve low-income students’ access to higher education more than packages that rely more on loans. In addition, our analysis indicated that the sooner low-income students receive grant assistance, the more likely they are to stay in college. We found that grants were most effective in reducing low-income students’ dropout probabilities in the first year. For these students, an additional $1,000 grant reduced the dropout probability by 23 percent. In the second year, the additional grant reduced dropout probability by 8 percent, while in the third year it had no statistically discernible effect. Therefore, we believe that restructuring federal grant programs to feature frontloading could reduce low-income students’ dropout rates with little or no change in each student’s overall 4-year allocation of grants and loans. We suggested that, if the Congress was interested in increasing the number of low-income students who stay in college, it could direct the Department to conduct a pilot program for frontloading federal grants. The Congress has yet to act on this suggestion.
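The percentage changes cited above can be verified directly from the dollar figures in this statement. As a rough check (our arithmetic, using only the numbers cited above):

\[
\frac{2{,}689 - 804}{804} \approx 2.34 \;(234\ \text{percent}), \qquad
\frac{32{,}264 - 17{,}710}{17{,}710} \approx 0.82 \;(82\ \text{percent}), \qquad
\frac{2{,}417 - 518}{518} \approx 3.67 \;(367\ \text{percent}).
\]

The ratio of loan growth to the increase in the cost of living, 367 divided by 74, is about 5, consistent with the statement that average borrowing at 4-year public schools grew almost five times as fast as the consumer price index over the same period.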
Although major federal student aid programs, such as FFELP, FDLP, and the Pell Grant Program, have succeeded in providing students access to billions of dollars for postsecondary education, our work has shown that the Department has been less successful in protecting the financial interests of U.S. taxpayers. For example, in fiscal year 1996, while the Department made more than $40 billion available in student aid, the federal government paid out over $2.5 billion to make good its guarantee on defaulted student loans. The Congress has acted to address some of these problems, requiring, for example, that audits of guaranty agencies be conducted annually rather than every 2 years. The Department also has planned and taken a number of actions to correct its financial accountability problems, such as reorganizing the Office of Postsecondary Education to permit it to better administer and oversee federal student aid programs and developing several new information systems to provide more accurate and timely information. Many of the Department’s actions are likely to have played a major role in reducing the amount of student loan defaults from $2.7 billion in fiscal year 1992 to $2.5 billion in fiscal year 1996 and in increasing collections on defaulted student loans from $1 billion in fiscal year 1992 to $2.1 billion in fiscal year 1996. However, the Department’s actions have not completely resolved many of the underlying problems, and, therefore, vulnerabilities remain. At the core of the Department’s financial accountability difficulties are persistent problems with the individual student aid programs’ processes, structure, and management. These problems include (1) overly complex processes, (2) inadequate financial risk to lenders or state guaranty agencies for defaulted loans, and (3) management shortcomings. Our work has shown that the student aid programs have many participants and involve complicated, cumbersome processes. Three principal participants—students, schools, and the Department of Education—are involved in all the financial aid programs; two additional participants—lenders and guaranty agencies—also have roles in FFELP. In general, each student aid program has its own processes, which include procedures for student applications, school verifications of eligibility, and lenders or other servicing organizations that collect payments. Further, the introduction of FDLP, originally viewed as a potential replacement for FFELP, has added a new dimension of complexity. Rather than replacing FFELP as initially planned, FDLP now operates alongside it. Essentially, this means that the Department has two programs that are similar in purpose but that operate differently. In addition, the student aid programs now serve more students from low-income families and those attending proprietary schools than in the past. As the number of these higher-risk borrowers has increased, so has the number of defaults. Both of these conditions enhance access for low-income students, yet a tension exists because they jeopardize financial accountability. Management shortcomings also continue as a major problem and contribute to the Department’s financial accountability difficulties.
In the past, congressional hearings and investigations, reports by the Department’s Office of Inspector General (OIG), our reports, and other studies and evaluations have shown that the Department (1) did not adequately oversee schools that participated in the programs; (2) managed each title IV program through a separate administrative structure, with poor or little communication among programs; (3) used inadequate management information systems that contained unreliable data; and (4) did not have sufficient and reliable student loan data to determine the Department’s liability for outstanding loan guarantees. These problems cannot be quickly or easily fixed. The Department has taken many actions, such as improving gatekeeping procedures for determining which schools may participate, to address these problems. However, the Department’s management problems, such as administrative inefficiencies resulting from the separate administrative structures used to manage each title IV program, have not yet been resolved. We testified before this Subcommittee last June on issues related to “gatekeeping”—the process for ensuring that students are receiving title IV aid to attend only schools that provide quality education and training. At that time, we noted the history of concern about the integrity of title IV programs stemming from our work, that of the Department’s OIG, and the Congress—work that led to the conclusion that extensive abuse and mismanagement existed in these programs. For example, some schools received Pell grant funds for students who never applied for the grants or enrolled in or attended the schools. In one instance, a chain of proprietary schools falsified student records and misrepresented the quality of its educational programs to increase its revenues from students receiving Pell grants. In response, legislation has established student loan default rate thresholds that schools cannot exceed and still participate in the title IV programs. Legislation also has strengthened the role of the Department, states, and accrediting agencies—referred to as “the triad”—in determining school eligibility. The Higher Education Act (HEA) recognizes the triad as having shared responsibility for gatekeeping. As part of this triad, the Department (1) verifies schools’ eligibility and certifies their financial and administrative capacity and (2) grants recognition to accrediting agencies. The Department has improved the gatekeeping process by such actions as requiring all schools to have annual financial and compliance audits, increasing the number of program reviews, hiring additional staff to conduct the reviews, and beginning to develop a new database of school information to help Department staff monitor schools’ performance. Nevertheless, as we reported in our recent high-risk report, several weaknesses continue to cause concern. For example, the Department’s OIG identified problems with the recertification process that could increase the likelihood that schools not in compliance with eligibility requirements are able to continue to participate in title IV programs. A review of a sample of Department recertification actions showed that 27 percent of schools sampled had violations such as unpaid debts or failures to meet financial responsibility requirements. The Department acknowledged that some recertifications should not have been made and stated that it was taking action to make current financial data available for future recertification reviews.
The Department is also implementing a gatekeeping initiative designed to focus resources on high-risk schools: the Institutional Participation and Oversight Service (IPOS) Challenge. Under the IPOS Challenge, the Department plans to use a computer model to identify schools for review on the basis of their risk of noncompliance. Because this initiative has only recently been undertaken, it is too soon to assess its effectiveness. Excellence in education in America has become a major concern for the public, and both the Congress and the Department have promoted initiatives to improve the quality of American education. These efforts include improving the quality of the physical environment in which students learn, ensuring schools have the ability to use the technology needed to provide children with an education appropriate for the 21st century, creating and promoting national standards to shape curriculum and guide test development in order to measure reading and math achievement, supporting efforts to improve the quality of teachers and teacher preparation programs, and ensuring equal access to education. Major legislative efforts, such as Goals 2000: Educate America Act, the Improving America’s Schools Act, and the School-to-Work Opportunities Act, are examples of efforts focusing on improving the quality of America’s public education. Because the federal role in funding elementary and secondary education is relatively small, and states and local governments have the primary responsibility for and control of education programs, the Department faces a significant challenge in ensuring access and promoting excellence. Its tools are providing leadership, financial leverage, and technical assistance and information. The Department exercises leadership by shining a spotlight on important national education issues, facilitating communication on quality issues, and fostering intergovernmental and public/private partnerships. However, when one considers how it leverages resources and provides technical assistance and information, the extent to which Department funds are fostering excellence and are being spent efficiently and effectively is unclear. Two questions arise: Does the Department of Education know if its programs are working? And does the Department have the resources to manage its funds and provide the needed information and technical assistance? The Department is responsible for funding over $22 billion in elementary and secondary programs, including title 1, special education, vocational education, adult education, and Safe and Drug Free Schools. A major challenge facing the Department is ensuring that these programs are providing the intended outcomes. To do this the Department’s programs must have clearly defined objectives and complete, accurate, and timely program data. Title 1 is the largest of these programs, with $7.7 billion appropriated in fiscal year 1997. Its purpose is to promote access to and equity in education for low-income students. The Congress modified the program in 1994, strengthening its accountability provisions and encouraging the concentration of funds to serve more disadvantaged children. At this time, the Department does not have the information it needs to determine whether the funding is being targeted as intended. Although the Department has asked for $10 million in its fiscal year 1998 budget request to evaluate the impact of title 1, it has only just begun a small study of selected school districts to look at targeting so that necessary mid-course modifications can be identified.
The ultimate impact of the 1994 program modifications could be diminished if the funding changes are not being implemented as intended. As another example, we found in our work on the programs funded under the Adult Education Act that the State Grant Program, which funds local programs intended to address the educational needs of millions of adults, had difficulty ensuring that the programs met these needs. The lack of clearly defined program objectives was one of the reasons for the difficulty. The broad objectives of the State Grant Program give the states flexibility to set their own priorities but, as some argue, they do not provide states with sufficient direction for measuring results. Amendments to the act required the Department to improve accountability by developing model indicators that states could adopt and use to evaluate local programs. However, experts disagree about whether developing indicators will help states to define measurable program objectives, evaluate local programs, and collect more accurate data. Recently, we have been examining two of the most basic elements of education—the financing systems that undergird public education and the buildings within which education takes place. For example, in our school facilities series, we documented that officials estimated that a third of our nation’s schools had serious facilities problems and that it would take $112 billion to bring our schools into good overall condition. In February, the administration used our reports as the basis for proposing the Partnership to Rebuild America’s Schools Act, which, if enacted, would be administered by the Department. Several members of the Congress have raised issues associated with this proposed solution to improve schools’ conditions, such as whether the types of financial and information management problems that we discussed earlier regarding postsecondary federal financial aid programs would develop in the administration of this new program, whether the Department has qualified staff to administer the program, and whether information systems to monitor it and account for the funds are available and operational. The administration has also been promoting excellence and access by supporting technology, both through the leadership role of the President and the Office of the Secretary and through the technology programs the Department oversees. In the 1998 budget, the administration has doubled the amount of money requested for educational technology to help schools integrate technology into the curriculum in order to increase students’ technological literacy and improve the quality of instruction in core subjects. In our facilities work, we found that schools had large technology infrastructure needs that the Department’s Technology Literacy Challenge Grants would only start to address. Again, as in the school construction situation, the Department is facing a large need with relatively small amounts of funds. Adopting improved management practices can help the Department become more effective in achieving its mission of ensuring equal access to education and promoting educational excellence. Recognizing that federal agencies have not always brought the needed discipline to their management activities, the Congress in recent legislation provided a framework for addressing long-standing management challenges. The centerpiece of this framework is GPRA; other elements are the 1990 CFO Act, the 1995 Paperwork Reduction Act, and the 1996 Clinger-Cohen Act. 
These laws each responded to a need for more accurate, reliable information for executive branch and congressional decision-making. The Department has begun to implement these laws, which, in combination, provide it with a framework for developing (1) fully integrated information about the Department’s mission and strategic priorities, (2) performance data to evaluate progress toward the achievement of those goals, (3) the relationship of information technology investments to the achievement of performance goals, and (4) accurate and audited financial information about the costs of achieving mission outcomes. The Department has a history of management problems. In our 1993 review of the Department, we identified operational deficiencies such as lack of management vision, lack of a formal planning process, poor human resource management, and inadequate commitment to management issues by the Department leadership. In addition, financial and information management were serious problems throughout the Department, and not confined to postsecondary programs. Further, recent legislation—Goals 2000: Educate America Act, the School-to-Work Opportunities Act, and the Student Loan Reform Act—requires strong management improvements to support sound implementation. The Department has begun discussions with the Congress and others about the challenges it faces and the kinds of support it needs to move forward in achieving its goals. According to the Office of Management and Budget (OMB), the Department has developed a fairly broad plan. OMB raised two issues during its review of the plan: (1) the lack of specificity in program performance plans and (2) the extent to which the objectives and indicators were beyond the agency’s span of control or influence. With respect to the first concern, during the past few months the Department has been developing specific performance plans for all programs. Regarding the second concern, the Department responded to OMB by describing the nature of its education goals and by recognizing that those goals are shared by many entities. According to the Department, the plan’s objectives and indicators recognize the multilevel, intergovernmental nature of federal education support and the need for effective performance partnerships to achieve jointly sought outcomes. At the same time, the Department is updating the strategic plan and intends to differentiate those objectives and indicators that are under the Department’s full control more clearly from those that require action from state education agencies, local districts, or postsecondary institutions for effective results. Our work has also shown that it is generally more efficient to have related activities administered by a single federal office than several programs administered by several different offices. The Department is continuing its long-term efforts to streamline its operations. In its fiscal year 1998 budget request, it has proposed the elimination of 10 programs—representing more than $400 million in funding—that it believes have achieved their purpose; that duplicate other programs; or that are better supported by state, local, or private sources. Our work suggests that the Department needs to continue its efforts to eliminate duplicative or wasteful programs. The CFO Act, as expanded, requires the Department of Education as well as the 23 other major federal agencies to prepare and have audited annual financial statements beginning with those for 1996. Fiscal year 1995 was the first year the Department prepared agencywide financial statements and had them audited.
However, the independent auditor could not determine whether the financial statements were fairly presented because of the insufficient and unreliable FFELP student loan data underlying the Department’s estimate of $13 billion for loan guarantees. Furthermore, because guaranty agencies and lenders have a crucial role in the implementation and ultimate cost of FFELP, the auditors stressed the need for the Department to complete steps under way for improving oversight of guaranty agencies and lenders. Until such problems are fully resolved, the Department will continue to lack the financial information necessary to effectively budget for and manage the program or to accurately estimate the government’s liabilities. In an effort to prepare auditable fiscal year 1996 financial statements, the Department’s CFO has requested data from the top 10 guaranty agencies to be used as a basis for computing the liability for loan guarantees. In addition, the Department’s independent auditor has developed agreed-upon procedures to be applied by these agencies’ independent auditors to test the reliability of the requested data. Uncertainty still exists as to whether this new methodology will work; decisions on the effectiveness of the approach will be made later this year once all the data are collected. The National Student Loan Data System (NSLDS), which became operational in November 1994, enables schools, lenders, and guaranty agencies to transmit updated loan status data to the Department. However, the Department has not yet integrated the numerous separate data systems used to support individual student aid programs, often because the various “stovepipe” systems have incompatible data in nonstandard formats. As a result, program managers often lack accurate, complete, and timely data to manage and oversee the student aid program. The lack of an integrated system also results in unnecessary manual effort on the part of users and redundant data being submitted and stored in numerous databases, resulting in additional costs to the Department as well as the chance for errors in the data. For example, a Department consultant showed that a simple address change for a college financial aid administrator would require a minimum of 19 manual and automated steps performed by a series of Department contractors who would have to enter the change in their respective systems from printed reports generated by another system. Another problem with this multiple-system environment is a lack of common identifiers for schools. Without these, tracking students and institutions across systems is difficult. The 1992 HEA amendments required the Department to establish common identifiers for students and schools not later than July 1, 1993. The Department’s current plans, however, do not call for developing and implementing common identifiers for schools until academic year 1999. Data integrity problems also exist. The lack of a fully functional and integrated title IV-wide recipient database hinders program monitoring and data quality assurance. For example, the current system cannot always identify where a student is enrolled, even after an award is made and thousands of dollars in student aid are disbursed. Although the Department has improved its student aid data systems somewhat, major improvements are still needed. Both we and OIG reported in 1996 that the Department had not adequately tested the accuracy and validity of the loan data in NSLDS.
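To illustrate why common identifiers and an integrated database matter for the problems described above, the following sketch contrasts propagating one address change across several stovepipe systems with making the change once against a shared school identifier. The system names, record keys, and fields are invented for this illustration; they do not depict the Department's actual title IV systems or the consultant's 19-step example.

# Hypothetical illustration only: the system names, keys, and fields below are
# invented for this sketch and do not represent the Department's actual systems.

# Stovepipe model: each program system keys the same school differently,
# so a single address change must be entered separately in every system.
stovepipe_systems = {
    "pell_system":  {"ACME COLLEGE / OPE-1234": {"aid_admin_address": "old address"}},
    "ffelp_system": {"Acme College, Anytown":   {"aid_admin_address": "old address"}},
    "fdlp_system":  {"ACME-COLL-0099":          {"aid_admin_address": "old address"}},
}

def update_stovepipes(new_address):
    """Touch every system separately; each update is a distinct step."""
    steps = 0
    for system in stovepipe_systems.values():
        for record in system.values():
            record["aid_admin_address"] = new_address
            steps += 1
    return steps

# Integrated model: every program references one record through a common
# school identifier, so the change is made exactly once.
integrated_schools = {"SCHOOL-0001": {"aid_admin_address": "old address"}}

def update_integrated(school_id, new_address):
    """One update against the shared identifier serves all programs."""
    integrated_schools[school_id]["aid_admin_address"] = new_address
    return 1

print(update_stovepipes("100 Main St."))                 # 3 separate updates
print(update_integrated("SCHOOL-0001", "100 Main St."))  # 1 update

In practice, the Department's environment involves far more systems, contractors, and manual hand-offs than this sketch shows, which is why a single address change could require at least 19 separate steps.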
During the past year, the Department has been developing a major reengineering project, Easy Access for Students and Institutions, to redesign the entire title IV student aid program delivery system to integrate the management and control functions for the title IV programs. Although activity on this project, which had waned in previous months, has recently been renewed, carrying out the project is expected to be a long-term undertaking. The Department also faces a challenge in improving its agencywide information resources management, not just that related to the student aid programs. The legislative framework, especially that provided by the Clinger-Cohen Act, offers guidance for achieving goals in this area. The Clinger-Cohen Act requires, among other things, that federal agencies improve the efficiency and effectiveness of operations through the use of information technology by (1) establishing goals to improve the delivery of services to the public through the effective use of information technology; (2) preparing an annual report on the progress in achieving goals as part of their budget submissions to the Congress; and (3) ensuring that performance measures are prescribed for any information technology that agencies use or acquire and that they measure how well the information technology supports agency programs. The Department could benefit greatly from fully implementing the law. Full implementation of the Clinger-Cohen Act would provide another opportunity to correct many of the Department’s student financial aid system weaknesses as well as to improve other information systems that support the Department’s mission. The Clinger-Cohen Act also requires that a qualified senior-level chief information officer be appointed to guide all major information resource management activities. The Department has recently appointed an Acting Chief Information Officer and, according to OMB, is actively recruiting an individual to fill this position on a permanent basis. This individual is responsible for developing an information resources management plan and overseeing information technology investments. In addition, the Department has highlighted the use of information technology for improved dissemination and customer service in its fiscal year 1998 budget summary. New initiatives include (1) a data warehousing effort that would simplify the internal use of databases, (2) a data conversion effort needed to comply with year 2000 requirements, and (3) a modeling project to develop an architectural framework and uniform operating standards for all Department data systems to eliminate duplication in collection and storage of data. Because states and local governments control public elementary and secondary education, the Department can promote national standards for educational performance and teacher training—but not impose them. It is expected to provide state and local education agencies flexibility in using federal funds and freedom from unnecessary regulatory burden, yet it must have enough information about programs and how money is spent to be accountable to American taxpayers for the federal funds administered at the state and local levels. It is expected to monitor programs and provide technical assistance, but its resources may not be sufficient to provide reasonable coverage. Although the Department has made progress in improving many management functions, it still has a long way to go.
Over the years, our work has shown that the Department has not done a good job of minimizing risks and managing the federal investment, especially in postsecondary student aid programs. We also have concerns about whether the Department knows how well new or newly modified programs, like title 1, are being implemented; to what extent established programs are working; or whether it has the resources to effectively and efficiently provide needed information and technical assistance. Like other departments, the Department of Education needs to focus more on the results of its activities and on obtaining the information it needs for a more focused, results-oriented management decision-making process. GPRA, the CFO Act, and the Paperwork Reduction and Clinger-Cohen Acts give the Department the statutory framework it needs to manage for results. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions that you or members of the Subcommittee might have. For more information on this testimony, call Harriet Ganson, Assistant Director, at (202) 512-9045; Jay Eglin, Assistant Director, at (202) 512-7009; or Eleanor Johnson, Assistant Director, at (202) 512-7209. Joan Denomme and Joel Marus also contributed to this statement. Managing for Results: Using GPRA to Assist Congressional and Executive Branch Decisionmaking (GAO/T-GGD-97-43, Feb. 12, 1997). School Finance: State Efforts to Reduce Funding Gaps Between Poor and Wealthy Districts (GAO/HEHS-97-31, Feb. 5, 1997). High-Risk Series: Student Financial Aid (GAO/HR-97-11, Feb. 1997). Information Technology Investment: Agencies Can Improve Performance, Reduce Costs, and Minimize Risks (GAO/AIMD-96-64, Sept. 30, 1996). Higher Education: Tuition Increasing Faster Than Household Income and Public Colleges’ Costs (GAO/HEHS-96-154, Aug. 15, 1996). Information Management Reform: Effective Implementation Is Essential for Improving Federal Performance (GAO/T-AIMD-96-132, July 17, 1996). Department of Education: Status of Actions to Improve the Management of Student Financial Aid (GAO/HEHS-96-143, July 12, 1996). School Facilities: America’s Schools Report Differing Conditions (GAO/HEHS-96-103, June 14, 1996). Financial Audit: Federal Family Education Loan Program’s Financial Statements for Fiscal Years 1994 and 1995 (GAO/AIMD-96-22, Feb. 26, 1996). School Finance: Trends in U.S. Education Spending (GAO/HEHS-95-235, Sept. 15, 1995). Student Financial Aid: Data Not Fully Utilized to Identify Inappropriately Awarded Loans and Grants (GAO/HEHS-95-89, July 11, 1995). School Facilities: America’s Schools Not Designed or Equipped for 21st Century (GAO/HEHS-95-95, Apr. 4, 1995). Higher Education: Restructuring Student Aid Could Reduce Low-Income Student Dropout Rate (GAO/HEHS-95-48, Mar. 23, 1995). Department of Education: Long-Standing Management Problems Hamper Reforms (GAO/HRD-93-47, May 28, 1993).
GAO discussed the major challenges the Department of Education faces in achieving its mission to: (1) ensure access to postsecondary institutions, while at the same time protecting the financial interests of the government; and (2) promote access to and excellence in elementary, secondary, and adult education. GAO noted that: (1) although the Department has made progress in ensuring access to postsecondary education and in providing financial accountability, challenges remain, especially in providing educational access to low-income and minority students in an era of rising tuition costs and in protecting the financial interests of the federal government; (2) the student aid programs make available billions of dollars in loans and grants to promote access to education, but these programs continue to be hampered by problems with process complexity, structure, and program management; (3) the student aid process is complicated, with several participants who play different roles as well as various processes for each of the grant or loan programs; (4) the federal government continues to bear a major portion of the risk for loan losses; (5) moreover, management shortcomings, especially inadequate management information systems that contain unreliable data, contribute to the Department's difficulties; (6) the Department also faces challenges in promoting access to and excellence in preschool, elementary, secondary, and adult education programs; (7) through leadership and leverage, the Department works with states and local education agencies to effect changes intended to improve the nation's educational system; (8) demonstrating accountability is dependent on having clearly defined objectives, valid assessment instruments, and accurate program data; (9) in addition, it is unclear whether the Department has the resources it needs to manage its funds, including funds for the proposed Partnership to Rebuild America's Schools Act of 1997 and for helping schools integrate technology into the curriculum to make students technologically literate; (10) similarly, the Department only has selected information on the implementation of the title 1 program, the largest single federal elementary and secondary grant program, for which $7.7 billion was appropriated in fiscal year 1997; (11) thus, the Department does not have the informational basis to determine whether mid-course changes are necessary; (12) in meeting these challenges, the Department will need to improve its management; (13) major pieces of recent legislation provide powerful tools in the form of a statutory framework for improving agency operations and accountability; and (14) the Department has made progress in implementing these laws, but work remains to be done before the goal of improved management can be reached.
In response to the large number of people displaced following World War II, the United Nations established UNHCR in 1950 with the mandate of providing protection to and seeking permanent solutions for refugees. The 1951 United Nations Convention Relating to the Status of Refugees defines a refugee as someone who, as a result of events occurring before January 1, 1951 and owing to a “well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his nationality and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country; or who, not having a nationality and being outside the country of his former habitual residence, as a result of such events, is unable or, owing to such fear, is unwilling to return to it.” The 1967 Protocol Relating to the Status of Refugees expanded the definition of refugee to include all refugees covered by the definition in the 1951 Convention with no date or geographic limitation. Over time, UNHCR’s mandate has expanded beyond providing protection and humanitarian assistance, and seeking, together with governments, durable solutions for refugees, to include asylum-seekers, stateless persons, returnees, and in certain circumstances, internally displaced persons. UNHCR has over 10,000 staff located at its headquarters in Geneva, Switzerland; Budapest, Hungary; Copenhagen, Denmark; and in field offices in more than 128 countries. UNHCR’s mandate, as established by the United Nations General Assembly, is the provision of international protection, material assistance, and durable solutions to refugees and other persons of concern. According to UNHCR, registration is a key tool for providing protection to refugees. Registration in the refugee protection context is the recording, verifying, and updating of information on asylum-seekers, refugees, and other persons of concern to UNHCR. Registration is the first step in formalizing the protection relationship between the individual seeking protection and the host government in the country of asylum, UNHCR, or both. UNHCR uses registration as a tool to assist in determining which types of assistance and protection are most appropriate. UNHCR also uses Refugee Status Determination (RSD), which it describes as the legal or administrative process by which governments or UNHCR determine whether an asylum-seeker meets the definition of a refugee under international, regional, or national law. This determination involves one or more interviews by trained government officials in the country of asylum or by trained UNHCR RSD staff. RSD and registration are distinct processes, although data recorded during the individual’s registration as an asylum-seeker may be drawn upon and confirmed during RSD. If an asylum-seeker is determined to be a refugee, his or her registration will be updated from asylum-seeker to refugee, or if there was no prior registration, the refugee will be registered as such. In the majority of its operations, UNHCR records identity data in UNHCR’s database and case management system, the Profile Global Registration System (ProGres). UNHCR promotes and provides both legal and physical protection, and also provides for basic needs including food, water, shelter, and medical care in response to refugee crises worldwide. According to UNHCR, registration is a fundamental component of UNHCR’s protection activities. 
For instance, during registration, the organization collects information on the numbers of refugees and their specific characteristics, such as family composition or special health needs, which assists it in determining the type of assistance that an individual refugee or family may initially require. Information on registered refugee populations is also used by the organization to help determine the amount and types of assistance that specific regions may need. Registration is described by UNHCR as an ongoing process that includes inputting refugees’ biographic and biometric information into databases and case management systems and updating that information over time. UNHCR uses ProGres to log and maintain biographic information such as names and family information, and uses the Biometric Identity Management System (BIMS) and IrisGuard to collect and store biometric information such as iris scans and fingerprints. UNHCR reports that the number of refugees registered with the organization worldwide has grown significantly in recent years. For instance, the total number of refugees was estimated to be approximately 21.3 million by the end of 2015, approximately 1.7 million more than the total reported at the end of 2014. This number includes 4.9 million refugees from Syria, 2.7 million from Afghanistan, and 1.1 million from Somalia. See figure 1 for the number of UNHCR-registered refugees by country of origin as of December 31, 2015. In addition to providing protection and humanitarian assistance to refugees and other persons of concern, UNHCR is mandated to work with governments to provide “durable solutions” to those individuals so that they may transition out of refugee status and rebuild their lives. The three durable solutions that UNHCR facilitates include (1) voluntary repatriation to a refugee’s home country, (2) integration within the country in which they are currently located, and (3) resettlement to a third country. For instance, a refugee may decide to permanently return to his or her home country, with the support of UNHCR and that country, once the crisis that prompted his or her flight ends. Alternatively, refugees unable to safely repatriate to their home country may in certain cases, with the help of UNHCR, acquire permanent legal status with rights such as citizenship in their country of asylum, which UNHCR reports approximately 1.1 million refugees have done over the past decade. However, if neither repatriation nor local integration is available or appropriate, UNHCR may consider submitting a refugee’s case to a third country for resettlement consideration. UNHCR refers refugees for resettlement consideration to various countries, including the United States, Australia, Canada, Denmark, and the United Kingdom, among others. Refugees can be referred for resettlement consideration by UNHCR only if they meet UNHCR’s preconditions for resettlement consideration and fall under one or more of the resettlement submission categories. To assess refugees’ eligibility for resettlement referrals, UNHCR officials conduct interviews with refugees to obtain their basic biographic information, assess evidence of past or feared persecution, and determine eligibility for all other solutions that might be available to them. The organization documents this information in a resettlement referral form. While most registered refugees receive some form of protection assistance from the organization, less than 1 percent are referred for resettlement in a third country, according to UNHCR.
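As a rough illustration of the registration data described above, the sketch below shows how biographic details, family composition, and needs information might be recorded and linked to a biometric identifier. The field names and structure are our own simplification for illustration; they do not reflect UNHCR's actual ProGres, BIMS, or IrisGuard data models.

# Simplified, hypothetical registration record; field names and structure are
# ours and do not reflect UNHCR's actual ProGres or BIMS data models.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FamilyMember:
    name: str
    relationship: str          # e.g., spouse, child
    special_needs: List[str] = field(default_factory=list)

@dataclass
class RegistrationRecord:
    case_id: str               # biographic record identifier (ProGres-like)
    principal_applicant: str
    country_of_origin: str
    biometric_id: Optional[str] = None   # link to a BIMS/IrisGuard-like store
    family: List[FamilyMember] = field(default_factory=list)

    def assistance_indicators(self) -> List[str]:
        """Flag characteristics that may shape initial assistance decisions."""
        flags = []
        if any(m.relationship == "child" for m in self.family):
            flags.append("family with children")
        if any(m.special_needs for m in self.family):
            flags.append("special health needs")
        return flags

record = RegistrationRecord(
    case_id="CASE-001",
    principal_applicant="Example Name",
    country_of_origin="Example Country",
    biometric_id="BIO-001",
    family=[FamilyMember("Child One", "child", ["chronic condition"])],
)
print(record.assistance_indicators())  # ['family with children', 'special health needs']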
State manages the USRAP admissions process in conjunction with DHS, and both agencies work with other government agencies, UNHCR, IOM, and various NGOs to process applications for refugees seeking resettlement to the United States. From fiscal year 2011 through June 2016, 61 percent of all refugee cases referred to the United States for resettlement consideration were referred by UNHCR, or roughly 405,000 referrals out of the 655,000 total. According to UNHCR, most of its referrals are families. UNHCR submits refugee referrals to the United States from all over the world. See table 1 for more information on the nationality of refugees referred by UNHCR for potential admission to the United States from fiscal year 2011 through June 2016. Once UNHCR refers a case to the United States for resettlement consideration, it is considered for access to USRAP. State accepts UNHCR referrals via UNHCR’s ProGres database and uploads biographic information to WRAPS, which is State’s own system that serves as a repository of application information and tracks the status of all individual refugee applications to USRAP. The resettlement application process continues at one of the nine RSCs. Through grants or voluntary contributions from State, various NGOs and IOM operate eight of the nine RSCs. RSCs are responsible for compiling eligible applications. They collect supporting documentation, biographic information such as names and addresses, and family information from each applicant. They are also responsible for prescreening USRAP applicants—that is, conducting in-person interviews with each applicant—during which staff employed by the organization that manages the RSC collect information on the applicants’ persecution story and why they claim to be unable to return home to their country of origin. RSCs then provide all of this information to USCIS officers. In addition, RSCs initiate the necessary biographic security checks for USRAP applicants, coordinate medical exams with panel physicians, and provide cultural orientation for refugees approved to travel to the United States, as well as manage the provision of interpretation services for USCIS interviews. See table 2 for more information on RSCs. As part of the resettlement application process, USCIS officers travel overseas to conduct in-person interviews of USRAP applicants and adjudicate their applications for refugee status pending the results of required security and background checks. RSCs provide interpreters to USCIS officers during the interviews, as necessary. Program integrity describes the extent to which the resettlement referral process is free from fraud, waste, and abuse by both staff and applicants. An important aspect of ensuring program integrity is designing, implementing, and evaluating the efficacy of antifraud measures. According to GAO’s Standards for Internal Control in the Federal Government, fraud, such as malfeasance conducted by staff, poses a significant risk to the integrity of a program. Accordingly, management should consider the potential for fraud when identifying, analyzing, and responding to program risks. Management responds to identified fraud risks by developing antifraud activities designed to reduce or eliminate the potential for fraud. Antifraud activities are a critical component for ensuring the integrity of a program such as USRAP. Proactive fraud risk management helps to facilitate a program’s mission and ensure that program services achieve their intended purpose. 
State and UNHCR have worked together on several measures designed to ensure integrity in the resettlement referral process. The organizations have developed a Framework for Cooperation to guide their partnership, emphasizing measures such as oversight activities and risk management. Additionally, UNHCR has developed SOPs and identity management systems to combat the risk of fraud and worked with State to implement these activities in the resettlement process. Since 2000, State and UNHCR have outlined their formal partnership using a Framework for Cooperation. State and UNHCR signed the most recent framework document in 2016, covering the period of March 14, 2016 to December 31, 2017. According to State and UNHCR officials, the organizations work together on the activities listed in the Framework for Cooperation to achieve mutual goals. Specifically, the framework emphasizes improved accountability at UNHCR through effective oversight measures, close cooperation with State, and organization-wide risk management. The Framework for Cooperation notes that State will work to ensure that UNHCR allocates sufficient resources to fully implement measures to provide oversight and accountability. UNHCR has several offices that are responsible for overseeing antifraud activities, in addition to providing audit services, investigating instances of fraud, and conducting broad reviews of country-level operations. The United Nations Office of Internal Oversight Services and the Board of Auditors conduct regular reviews of UNHCR and audit its financial statements, respectively. Both make recommendations regarding the management of UNHCR and track the status of those recommendations to help ensure effective management. For example, the board’s annual reports track previous recommendations, many of which focus on ensuring financial accountability, conducting fraud risk assessments, and establishing regular performance reporting mechanisms, among other things. UNHCR also has an Inspector General’s office that investigates allegations of staff misconduct and assigns responsibility to its headquarters, Nairobi, or Bangkok staff to conduct investigations. In 2015, the Inspector General’s Investigative Office opened 88 investigations, including 21 investigations related to fraud complaints. Of those 21 fraud investigations, 7 were related to the refugee status determination or resettlement processes. According to UNHCR officials, fraud committed by persons of concern is investigated locally, and local management can open an investigation and decide sanctions if applicant fraud is established. In addition, according to UNHCR officials, between 2014 and 2016, the organization sent as many as 10 teams to conduct reviews of field office operations, including registration, protection, and resettlement. Although the composition of these teams varies depending on the field operation being visited, they usually include officials responsible for reviewing registration, RSD, resettlement, interactions with the regional bureau, and other things. According to UNHCR officials, these visits have resulted in strengthening refugee protection and resettlement operations. For example, in response to alleged instances of fraud at UNHCR’s activities in an Asian country, the organization undertook two visits in 2016 to investigate and respond. First, UNHCR’s Inspector General’s office visited to investigate potential staff fraud but determined that while certain procedures may not have been followed, staff fraud was not established. 
Later, UNHCR sent a team to review how the operations there could be strengthened throughout the registration, protection, and resettlement processes. According to UNHCR officials, this visit resulted in improvements throughout operations in the country, especially related to the provision of assistance to urban populations of refugees. The Framework for Cooperation also describes regular coordination and communication between State and UNHCR as an important principle in the relationship between the two organizations. Specifically, at the headquarters level, the U.S. Mission in Geneva, Switzerland, has a humanitarian affairs office that, according to State officials, coordinates with UNHCR on a regular basis. For example, State reported that it works with UNHCR to review draft policy and procedures. It also works with UNHCR and other countries to help organize annual conferences on resettlement issues, which include working groups on integrity. The Framework for Cooperation also discusses UNHCR’s efforts to improve accountability and monitoring. It notes that UNHCR has committed to implementing Board of Auditors recommendations, including implementing an organization-wide approach to risk management, an enhanced framework for implementation with partners, and improved management of oversight over implementing partners. In addition, UNHCR has established committees on oversight and internal compliance, which have helped in developing an accountability matrix and monitoring progress made toward the implementation of oversight recommendations. UNHCR has also developed an organization-wide risk management strategy, known as enterprise risk management, across its programs to assess risk in the resettlement referral process, thus addressing a recommendation made by the Board of Auditors. UNHCR has developed guidance documents, baseline SOPs, and identity management programs that it notes are meant to help ensure the integrity of their operations, including the refugee resettlement referral process. For example, UNHCR has developed guidance on registration, RSD, resettlement, and other activities. The Handbook for Registration lays out the policies and methodology for registration, while the Resettlement Handbook provides guidance on the conditions for resettlement, the types of resettlement submission categories, and the procedures for making referrals to resettlement countries. UNHCR headquarters also issued baseline SOPs on resettlement, which provide a template for local field offices to complete and adapt for local situations. The resettlement SOPs vary by country and refugee population but, according to UNHCR officials, they adhere to these baseline requirements. Despite the complexity and regional variations in its refugee registration, refugee status determination, and resettlement referral processes, UNHCR officials said that standardizing procedures ensures that the organization has established basic antifraud practices worldwide. These officials added that they believe that SOPs are among the most important tools with which they ensure the integrity of the resettlement referral process. UNHCR officials in two of the field offices we visited indicated that changes to the baseline resettlement SOPs allowed for regional specificity. UNHCR officials also register refugees and manage their cases through ProGres, which is a registration and case management tool. 
According to UNHCR, registration and identity management are important ways to provide legal and physical protection, identify refugees at risk, provide population planning statistics, and facilitate implementation of durable solutions. UNHCR developed ProGres in 2003; according to the organization, it contained 7.2 million records and was in use in 97 countries as of July 2016. UNHCR primarily uses BIMS and IrisGuard to collect and maintain biometric information, such as iris scans and fingerprints, and runs them in parallel depending on the geographic region and population. According to UNHCR officials, both BIMS and IrisGuard are linked with ProGres, allowing biometric data collected on refugees to be associated with biographic information. BIMS contains over 1.1 million records from 16 countries on its central server, and UNHCR is currently expanding its use to additional countries, according to UNHCR officials. On our visits to UNHCR field offices, we observed UNHCR officials registering and managing case files in ProGres and verifying biometric data in BIMS and IrisGuard. Using these systems, UNHCR officials said they can check to ensure that a refugee is not already registered or receiving assistance. See figure 2 for photographs of technology that UNHCR uses to register and verify refugee identities. UNHCR has worked with State on implementing some activities related to collaboration with its identity management systems. For instance, to help manage the identities of referred refugees, State and UNHCR developed a Memorandum of Understanding (MOU) regarding the sharing of some biometric information. According to a Letter of Understanding that accompanies the MOU, the MOU provides a framework whereby data from UNHCR is shared with State, which allows for increased efficiency and accuracy in processing resettlement referrals to the United States. State and RSCs report instituting a number of activities to combat the risk of fraud committed by RSC staff. Many of these activities correspond with leading practices identified in GAO’s Fraud Risk Framework. For instance, State and RSCs have taken steps to commit to an organizational culture and structure to help manage staff fraud risks. Further, State and RSCs have designed and implemented several specific control activities to mitigate staff fraud risks and taken steps to monitor staff fraud risk management activities. However, State could take additional steps to improve the implementation of existing controls, assess the risks of staff fraud, and examine the suitability of existing activities to control it. Further details on RSCs’ reported compliance with some measures contained in State’s Program Integrity Guidelines, challenges faced in compliance, and actions taken by specific RSCs to assess risk are provided in the sensitive report that we issued in June 2017. According to State officials, staff fraud at RSCs occurs infrequently, but instances of staff fraud have taken place in recent years, such as RSC staff soliciting bribes from applicants in exchange for promises of expediting applicants through RSC processing. State officials said that these events were uncovered before any significant consequences occurred; however, such instances illustrate the risks to the integrity of RSC operations. State and management from six of the nine RSCs stated that they could not recall any instances of staff fraud occurring at their RSCs.
However, State and managers from the other three RSCs reported instances of staff fraud or malfeasance in recent years, including the following: In 2013, an RSC reported a significant case of staff fraud, resulting in the termination of two staff members. According to State and RSC officials, two RSC staff promised to expedite applicant cases in exchange for money. Although the staff were actually unable to influence the outcome of the applicants’ cases, the illusion of expediting the process in exchange for money allowed the extortion to take place. After an investigation by State’s local Regional Security Officer, the RSC terminated the two staff members and all individuals involved were arrested. In response, State undertook new antifraud initiatives, such as the creation of new antifraud guidelines for RSCs and commissioning an evaluation of risks posed by staff fraud. In 2014, while conducting interviews in the field, officials discovered three interpreters soliciting money from applicants, according to State officials. These officials said that the RSC identified the three interpreters and discovered that they had a record of misconduct with local police. The RSC terminated the interpreters and barred them from any future employment with the RSC. Since the incident, State officials said that the RSC has maintained a list of interpreters who are barred from providing services for the RSC. In 2016, another RSC discovered that a staff member connected a personal thumb drive to an RSC laptop without approval with the intention of accessing applicants’ files. The RSC reported the activity to State and the organization that manages the RSC. In coordination with State, RSC officials contracted with a private firm to conduct a forensic analysis of any potentially compromised information. The analysis determined that the staff member had attempted but failed to access any information and subsequently, the staff member was terminated. To address instances of fraud committed by staff at RSCs, State and RSCs report instituting a number of antifraud activities, many of which correspond with leading practices identified in GAO’s Fraud Risk Framework. GAO’s Fraud Risk Framework identifies leading antifraud practices to aid program managers in managing fraud risks that affect their program. The framework includes practices such as implementing activities that demonstrate an antifraud culture, designing and applying control activities to address fraud risks, monitoring the application of fraud controls, and conducting regular fraud risk assessments. We found that State and RSCs have taken steps to institute a number of these practices, but some gaps remain. By taking steps to promote organizational cultures and structures conducive to combatting staff fraud, State and RSCs have worked to demonstrate a commitment to managing staff fraud risks at RSCs. The Fraud Risk Framework identifies the involvement of all levels of an organization in setting an antifraud tone as a leading practice for fraud risk management. Management at every RSC said that all RSC staff had the responsibility to combat staff fraud. For example, management at all eight RSCs operated by IOM or NGOs reported that they had required their staff to review and sign their organization’s code of conduct annually. Furthermore, managers at all nine RSCs said that they had required their staff to attend annual antifraud training and reported to State that RSC staff had complied with these measures. 
Additionally, State and RSCs have created organizational structures to combat staff fraud by assigning specific staff the responsibility of overseeing staff fraud risk management activities, a leading practice highlighted in the Fraud Risk Framework. All nine RSCs stated that they had assigned staff fraud risk management responsibilities to specific staff members. Individual RSCs have varied in how they assign these responsibilities. For instance, while RSC Africa, RSC Middle East and North Africa, and RSC Turkey and Middle East reported having positions dedicated to leading their fraud risk management activities, RSCs Austria, Cuba, and East Asia have assigned oversight of fraud risk management activities as a duty of their respective RSC directors. The three remaining RSCs operated by IOM—RSCs Eurasia, Latin America, and South Asia—stated that staff had been assigned to oversee fraud risk management at each of them. Additionally, IOM maintains an ethics office located at its headquarters in Geneva, Switzerland, to provide additional staff fraud risk oversight. To help prevent or mitigate fraud committed by staff at RSCs, State and RSC officials said that they had established collaborative relationships with both internal and external partners to share information, which is consistent with another leading practice identified in the Fraud Risk Framework. For example, State reported that it had hosted an annual resettlement workshop, attended by RSC directors and UNHCR staff. State also reported that RSC staff attend region-specific meetings to share fraud risk management information. According to RSC Middle East and North Africa reporting, that RSC has held similar fraud-focused quarterly meetings attended by representatives from State and UNHCR. In addition to attending organized conferences and meetings, management at RSCs stated that they had shared fraud-related information as it arose. When an RSC experiences an instance of staff fraud, State requires the RSC to report the fraud to State. According to RSC officials, depending on the RSC’s operating organization, the RSC may also report the staff fraud to its headquarters, inspector general, partner organizations, or ethics office. Another leading practice in the Fraud Risk Framework is the development of specific control activities to prevent and detect fraud. State officials identified two key guidance documents containing control activities: RSC SOPs and the Program Integrity Guidelines. First, according to State officials, State provides guidance to RSCs on developing SOPs that include staff fraud risk controls. For example, State requires RSCs to record in WRAPS the names of staff and interpreters who interact with applicants during prescreening interviews in order to mitigate fraud risk. According to State and RSC officials, although each RSC has used State’s guidance as a template, RSCs may incorporate additional procedures, including program integrity activities, based on their specific operational environment, such as size, complexity, location, or applicant population. Second, in response to the staff fraud incident in 2013 that resulted in the termination of two RSC staff, State developed and provided RSCs with a list of 87 measures designed to prevent and mitigate staff fraud at RSCs, known as the Program Integrity Guidelines. Of the 87 measures, State requires 72 and recommends the remaining 15.
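To illustrate how an RSC's self-reported compliance with a set of required and recommended measures might be tallied, the following is a minimal sketch in Python. The measure identifiers, field names, and sample statuses are hypothetical assumptions for illustration only; they do not reflect State's actual numbering or any RSC's reported data.

```python
from dataclasses import dataclass

@dataclass
class Measure:
    # Hypothetical representation of one Program Integrity Guidelines item;
    # identifiers and fields are illustrative, not State's actual numbering.
    measure_id: str
    required: bool    # True for a required measure, False for a recommended one
    applicable: bool  # whether the measure applies to this RSC's operations
    complied: bool    # the RSC's self-reported compliance status

def required_compliance_rate(measures):
    """Share of required, applicable measures the RSC reports complying with."""
    relevant = [m for m in measures if m.required and m.applicable]
    return sum(m.complied for m in relevant) / len(relevant) if relevant else 0.0

# Toy self-report: three required measures (one not applicable) and one recommended.
report = [
    Measure("PIG-001", required=True, applicable=True, complied=True),
    Measure("PIG-002", required=True, applicable=True, complied=False),
    Measure("PIG-003", required=True, applicable=False, complied=False),
    Measure("PIG-004", required=False, applicable=True, complied=True),
]
print(f"Reported compliance with required, applicable measures: "
      f"{required_compliance_rate(report):.0%}")  # 50% in this toy example
```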
These measures include control activities addressing issues such as background checks, interpreter assignment, antifraud training, office layout, case file reviews, electronic data management, and reporting and responding to instances of suspected fraud. For example, State’s Program Integrity Guidelines have required RSCs to establish physical drop boxes or e-mail addresses to allow applicants to report instances of suspected staff fraud, as well as whistleblower policies for other staff to inform RSC management of suspected staff fraud. State has also required RSCs to include signage that indicates that the admissions process is free and instructions on how to report fraud. See figure 3 for examples of such RSC antifraud signage. Each RSC that we visited displayed similar signage in interview rooms, hallways, or applicant waiting areas. Consistent with another leading practice identified in the Fraud Risk Framework, State and RSCs also reported that they had implemented control activities designed to prevent and detect staff fraud; however, some gaps remain. State works with RSCs to implement the control activities identified in the Program Integrity Guidelines to mitigate staff fraud risks. RSCs report their compliance with the Program Integrity Guidelines to State via annual RSC Internal Malfeasance Prevention and Mitigation: Measures and Actions reports. For each measure listed in the Program Integrity Guidelines, RSCs report the actions they have taken to comply. State required RSCs to comply with the original Program Integrity Guidelines by October 2014. However, our review of the Measures and Actions reports found that RSCs reported complying with most, but not all, of the required measures applicable to their operations. Reported compliance with required, applicable measures at individual RSCs ranged from 86 percent to 100 percent. For 53 of the 72 measures, compliance was reported by all RSCs for which the measure was applicable. Though RSCs have reported complying with most of the controls required by the Program Integrity Guidelines, some RSCs have reported that they face challenges in fully implementing certain controls. State officials told us that they work to ensure that each RSC complies with all required controls in the Program Integrity Guidelines. If an RSC reports that it does not yet fully comply with a measure listed in the Program Integrity Guidelines, State expects the RSC to report its progress toward compliance to State. While this reporting assists State in its implementation efforts, gaps remain. Full compliance with these measures could help RSCs ensure the integrity of their operations and guard against staff fraud. State and the organizations that operate RSCs have taken steps to monitor their staff fraud risk management activities, a leading practice identified in the Fraud Risk Framework. For State’s monitoring of RSC antifraud activities, program officers and refugee coordinators have served as the primary liaison between their assigned region’s RSCs and State. According to State officials, its program officers have conducted monitoring of RSCs through frequent communication, program reports, and annual monitoring visits. State officials said that program officers have communicated with RSC management frequently via telephone and e-mail to conduct administrative functions, provide updates to State guidance, and address issues, including those related to staff fraud. 
Program officers also have reviewed program reports submitted by RSCs, which include a section that describes instances of suspected staff fraud from the previous quarter, if any, and updates to staff fraud risk management activities. For example, one RSC reported that, as an antifraud measure, it had prohibited staff from using their personal smartphones at worksites and issued staff cellphones without cameras or Internet capability. State also assigns to local U.S. embassies refugee coordinators who monitor RSC staff fraud risk management activities through frequent interaction with RSC staff. According to State officials, during visits to RSCs, refugee coordinators have provided additional monitoring of RSC staff fraud risk management activities by checking compliance with State’s Program Integrity Guidelines and receiving notification of and addressing reported instances of staff fraud. Additionally, IOM and NGOs have reported conducting annual monitoring visits of the RSCs that they operate. State has required the operating organization of each RSC to “conduct annual monitoring that includes fraud vulnerabilities” and submit the results of the monitoring visits to State. All four RSCs operated by IOM and all four RSCs operated by various NGOs reported that their respective operating organizations conducted such monitoring visits. For example, when one operating organization conducted a monitoring visit to an RSC in 2015, it recommended that the RSC should program its computers to lock after 5 minutes of inactivity, as required by the Program Integrity Guidelines. According to State officials, State program officers are also expected to conduct annual monitoring visits and create monitoring reports to check RSC compliance with State’s RSC SOPs and Program Integrity Guidelines. According to these monitoring reports, program officers observe day-to-day operations at RSCs during the monitoring visits. For instance, program officers report that they have observed RSC caseworkers conducting prescreening interviews of applicants. Upon concluding the annual monitoring visits, program officers are to complete written monitoring reports including a section that assesses RSC compliance with the Program Integrity Guidelines and makes recommendations to mitigate vulnerabilities for staff fraud. For example, the completed monitoring report for one RSC recommended upgrading the locks for its file library to an electronic system enabled by iris or fingerprint scan as a step to mitigate vulnerabilities of staff fraud. Further, program officers also have administered questionnaires to RSC directors and caseworkers to gather feedback on RSC procedures. In these questionnaires, State has asked RSC directors to comment on RSC procedures concerning hiring and training new staff. During the period of our review, State provided us with the most recent monitoring reports for each of the RSCs that had completed one. State has taken some steps to assess the risks posed by staff fraud to RSC operations. For example, in 2015, a contractor hired by State completed a report assessing (1) areas of vulnerability to staff fraud at RSCs, (2) current measures to address vulnerabilities and their effectiveness, (3) important factors in preventing staff fraud, and (4) optimization of State monitoring of RSCs. The report made a number of recommendations regarding potential staff fraud risks. 
Although State has taken steps to assess staff fraud risks, not all RSCs have conducted staff fraud risk assessments that follow leading practices identified in the Fraud Risk Framework, including (1) conducting assessments at regular intervals or when the program experiences changes, (2) tailoring assessments to the program and its operations, and (3) examining the suitability of existing fraud controls. State officials told us that not all RSCs had conducted staff fraud risk assessments because State’s Program Integrity Guidelines recommend but do not require these assessments. Without State requiring RSCs to conduct regular staff fraud risk assessments tailored to their specific operations, staff fraud risk assessments conducted by individual RSCs have varied. While officials from six of the nine RSCs stated that they had completed some form of staff fraud risk assessment, officials from four of them stated that they had done so only once. Additionally, only two of the nine RSCs have conducted staff fraud risk assessments specifically tailored to their operations. Further, State and most RSCs have not examined the suitability of existing fraud controls, another recommended leading practice in the Fraud Risk Framework. For example, while one RSC has regularly assessed the suitability of its existing staff fraud controls by conducting regular staff fraud risk assessments that examine the likelihood and impact of potential fraudulent activity and related fraud controls, the remaining eight RSCs have not done so. State officials told us that because State does not require RSCs to conduct risk assessments, information needed to assess the suitability of existing controls is not available from all RSCs. GAO’s Standards for Internal Control in the Federal Government states that changes in conditions affecting an entity and its environment often require changes to the entity’s internal control system, as existing controls may not be effective for addressing risk under changed conditions. For instance, a study of USRAP conducted at State’s request notes that as the number of refugees has increased in recent years, the potential for staff fraud committed against refugees has increased as well. According to this report, refugees may become more susceptible to participating in acts of staff fraud as they become more desperate to reach another country. As the number of refugees accepted varies each year by RSC, internal control systems may need to be changed to respond to the potential increased fraud risk. Moreover, as described earlier, individual RSCs face challenges complying with some of the existing fraud controls outlined in the Program Integrity Guidelines. Examining the suitability of these controls could help managers identify areas where existing control activities are not suitably designed or implemented to reduce risks to a tolerable level. Based on this analysis, managers could prioritize and target areas of residual risk. Without requiring RSCs to conduct regular staff fraud risk assessments that are tailored to their specific operating environments and reviewing these assessments to examine the suitability of existing fraud controls, State may lack necessary information about staff fraud risks and therefore not have reasonable assurance that existing controls effectively reduce these risks. Information from such risk assessments could help State and RSCs revise existing controls or develop new controls to mitigate the staff fraud risks faced by the program, if necessary. 
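As one way to picture what a regular, tailored assessment of likelihood, impact, and control suitability might look like, the sketch below scores a few hypothetical fraud schemes and flags those whose residual score exceeds a tolerance threshold. The schemes, scoring scales, threshold, and control descriptions are assumptions made for illustration; they are not findings or methodology from State, the RSCs, or the Fraud Risk Framework.

```python
# Minimal sketch of a likelihood-times-impact scoring pass over hypothetical
# staff fraud schemes. The 1-5 scales and the tolerance threshold are assumptions.
risks = [
    # (fraud scheme, likelihood 1-5, impact 1-5, existing controls)
    ("Staff solicit bribes to 'expedite' cases", 3, 5, "interpreter rotation, antifraud signage"),
    ("Unauthorized access to applicant files",   2, 4, "device restrictions, access logging"),
    ("Interpreter misconduct during interviews", 2, 3, "background checks, barred-interpreter list"),
]

TOLERANCE = 8  # hypothetical residual-risk threshold above which controls warrant review

def assess(risk_list, tolerance=TOLERANCE):
    """Rank schemes by likelihood x impact and flag those whose score exceeds tolerance."""
    scored = sorted(
        ((likelihood * impact, scheme, controls)
         for scheme, likelihood, impact, controls in risk_list),
        reverse=True,
    )
    for score, scheme, controls in scored:
        status = "re-examine control suitability" if score > tolerance else "within tolerance"
        print(f"{score:>2}  {scheme} (existing controls: {controls}) -> {status}")

assess(risks)
```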
Each year, the United States resettles tens of thousands of refugees from around the world as part of its humanitarian commitment in the international community. The refugee admissions process relies on RSC staff to coordinate and manage refugee applications. Accordingly, staff fraud can undermine the integrity of the program. To reduce these risks, State and RSCs have instituted several antifraud activities, many of which correspond with leading antifraud practices. One of these activities is designing and implementing antifraud controls. For instance, State has required that RSCs comply with 72 staff fraud control measures. While RSCs have complied with most of these measures, persistent gaps remain. Pursuing efforts to ensure RSC compliance with these controls is essential to reducing the risks of staff fraud. Additionally, some RSCs have not conducted regular risk assessments tailored to their operations or examined the suitability of existing fraud controls. Without these assessments, State and RSCs may not be able to identify the staff fraud risks affecting their programs, fully assess the risks associated with noncompliance with staff fraud control measures, or evaluate the effectiveness of their control activities. In conjunction with antifraud controls already put in place by State and RSCs, additional steps could strengthen existing controls, assess future staff fraud risks to the program, and better support the integrity of USRAP.
1. To support efforts to reduce staff fraud at RSCs, the Secretary of State should direct the Bureau of Population, Refugees, and Migration to actively pursue efforts to ensure that RSCs comply with required, applicable measures in the Program Integrity Guidelines.
2. To better identify risks from RSC staff fraud, the Secretary of State should direct the Bureau of Population, Refugees, and Migration to update guidance, such as the Program Integrity Guidelines, to require each RSC to conduct regular staff fraud risk assessments that are tailored to each RSC’s specific operations.
3. To help ensure that control activities are designed to mitigate identified RSC staff fraud risks, the Secretary of State should direct the Bureau of Population, Refugees, and Migration to regularly review RSC staff fraud risk assessments and use them to examine the suitability of existing staff fraud controls and revise controls as appropriate.
We provided a draft of the sensitive version of this report to the Departments of State and Homeland Security for review and comment. State provided written comments that are reprinted in appendix I. State, DHS, and UNHCR also provided technical comments, which we have incorporated as appropriate. State deemed some of the information in its original agency comment letter pertaining to RSCs’ reported compliance with the Program Integrity Guidelines to be sensitive information that must be protected from public disclosure. Therefore, the sensitive information has been redacted from the department’s comment letter, which is included in appendix I. These redactions did not have a material effect on the substance of the department’s comments. State concurred with our recommendations and agreed that implementing these activities could reduce the risk of staff fraud at RSCs. State noted that it has developed new guidance to enhance the monitoring of RSCs, which outlines roles, responsibilities, and tools for program officers and refugee coordinators.
We are sending copies of this report to the appropriate congressional committees, the Secretaries of State and Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Thomas Melito, (202) 512-9601, or [email protected]. In addition to the contact named above, Elizabeth Repko (Assistant Director), Brian Hackney (Analyst-in-Charge), Ashley Alley, Kathryn Bernet, Anthony Costulas, Rebecca Gambler, Cynthia Grant, Paul Hobart, Mona Nichols Blake, Michael McKemey, Mary Pitts, Sean Sannwaldt, and Su Jin Yon made significant contributions to this report. Debbie Chung, Martin De Alteriis, Neil Doherty, Mark Dowling, Thomas Lombardi, Erin McLaughlin, and Mary Moutsos provided technical assistance.
According to UNHCR, as of the end of 2015, more than 21 million people had become refugees. In fiscal year 2016, the United States admitted nearly 85,000 refugees. State manages the U.S. refugee admissions program (USRAP). UNHCR referred 61 percent of the refugees considered by the United States for resettlement from October 2011 to June 2016, and State worked with staff hired by nine RSCs to process their applications. Deterring and detecting fraud is essential to ensuring the integrity of USRAP. GAO examined (1) how State works with UNHCR to ensure program integrity in the UNHCR resettlement referral process and (2) the extent to which State and RSCs follow leading practices to reduce the risk of fraud committed by RSC staff. GAO analyzed State and UNHCR data and documents; interviewed relevant officials; conducted fieldwork at UNHCR offices in Denmark, El Salvador, Jordan, Kenya, and Switzerland; interviewed senior officials from all nine RSCs; and visited RSCs in Austria, Jordan, Kenya, and a suboffice in El Salvador. This report is based on GAO-17-446SU , with certain sensitive information removed. The Department of State (State) and the United Nations High Commissioner for Refugees (UNHCR) have worked together on several measures designed to ensure integrity in the resettlement referral process. State and UNHCR have established a Framework for Cooperation to guide their partnership, emphasizing measures such as effective oversight structures, close coordination, and risk management. Working with State, UNHCR has implemented standard operating procedures and other guidance that, according to UNHCR officials, provide baseline requirements throughout the referral process. UNHCR also uses databases to help verify the identities of and manage information about refugees. These systems store biographic information such as names, personal histories, and the types of persecution refugees experienced in their home countries. They also maintain biometric information, such as iris scans and fingerprints. To reduce the risk of fraud committed by staff at the nine Resettlement Support Centers (RSC) worldwide, State and RSCs have instituted several antifraud activities, many of which correspond with leading antifraud practices, but key gaps remain. Overseen by State, the organizations that operate RSCs hire staff to process and prescreen applicants who have been referred for resettlement consideration. According to State and RSC officials, RSCs have experienced staff fraud. Officials said that instances of staff fraud are uncommon, but they illustrate risks to the integrity of RSC operations. To manage these risks, State and RSCs have established a number of activities consistent with leading antifraud practices. For example, officials from all nine RSCs stated that they assign staff fraud risk management responsibilities to designated individuals. State has also worked with RSCs to develop and implement controls to ensure program integrity. However, RSCs face challenges implementing some of these controls. Additionally, State has not required RSCs to conduct regular staff fraud risk assessments tailored to each RSC or examined the suitability of related control activities. Without taking additional steps to address these issues, State and RSCs may face challenges in identifying new staff fraud risks or gaps in the program's internal control system as well as designing and implementing new control activities to mitigate them. 
To better assess and manage risks of fraud committed by staff at RSCs, State should actively pursue efforts to ensure RSCs comply with program integrity measures; require each RSC to conduct regular risk assessments tailored to its operations; and use these assessments to design, implement, and revise control activities to mitigate risks of staff fraud. State agreed with GAO's recommendations.
SSA’s Disability Insurance and Supplemental Security Income programs are the nation’s largest providers of federal income assistance to disabled individuals, with the agency making payments of approximately $113 billion to more than 14 million beneficiaries and their families in 2004. Yet, over the years, it has become more challenging for the agency to ensure an acceptable level of service—in terms of both the quality and the timeliness of its support to these individuals. In January 2003, we designated disability benefits programs across the federal government as high risk—in need of urgent attention and transformation. The process through which SSA approves or denies disability benefits is complex and involves multiple partners at both the federal and state levels in determining a claimant’s eligibility. SSA’s 1,300 field offices are the initial points of contact for individuals applying for benefits. SSA also depends on 54 state DDS offices to provide crucial support to the claims process through their role in determining an individual’s medical eligibility for disability benefits. DDSs make initial determinations regarding disability claims in accordance with federal regulations and policies; the federal government reimburses 100 percent of all costs incurred by states to make disability determinations. Physicians and other members of the medical community provide the DDSs with medical evidence to help them evaluate disability claims. When disability claims have been denied by the DDSs, claimants can appeal to SSA’s OHA. The process begins when individuals apply for disability benefits at an SSA field office, where determinations are made about whether they meet nonmedical criteria for eligibility. If the claimant is eligible, the field office forwards the application to the appropriate state DDS, where a disability examiner collects the necessary medical evidence to make the initial determination of whether the claimant’s condition meets the definition of disability. Once the claimant’s medical eligibility is determined, the DDS returns the claim folder to SSA for final processing. A claimant who is initially denied benefits can ask the DDS to reconsider its determination. If the DDS denies the claim again, the claimant can request a hearing before a federal administrative law judge at an SSA hearings office and, if still dissatisfied, can request a review of the claim by SSA’s Appeals Council. Upon exhausting these administrative remedies, the claimant may file a complaint in federal district court. Each level of appeal involves multistep procedures for collecting evidence, reviewing information, and making the decision. Many individuals who appeal the initial determination on their claims will wait a year or longer—perhaps up to 3 years—for a final decision. To address concerns regarding the program’s efficiency, in 1992, SSA initiated its Modernized Disability System project, intending to redesign the disability claims process emphasizing the use of automation to achieve an electronic (paperless) processing capability. This project, which in 1994 was renamed the Reengineered Disability System, was to automate the entire disability claims process—from the initial claims intake in the field office to the gathering and evaluation of medical evidence by the state DDSs, to payment by the field office or processing center. The system also was intended to automate the handling of appeals by SSA’s hearings offices. 
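The multistep path a claim can take, from initial intake through the appeal levels described above, can be summarized as an ordered sequence of stages. The sketch below is a simplified illustration; the stage names and the strictly linear ordering are assumptions made for readability and omit details such as remands or reopened claims.

```python
from enum import Enum, auto
from typing import Optional

class ClaimStage(Enum):
    # Simplified stages drawn from the process described above; names are illustrative.
    FIELD_OFFICE_INTAKE = auto()     # nonmedical eligibility screening at an SSA field office
    DDS_INITIAL_DECISION = auto()    # state DDS gathers medical evidence and determines disability
    DDS_RECONSIDERATION = auto()     # claimant asks the DDS to reconsider a denial
    ALJ_HEARING = auto()             # hearing before an administrative law judge at a hearings office
    APPEALS_COUNCIL = auto()         # review by SSA's Appeals Council
    FEDERAL_DISTRICT_COURT = auto()  # complaint filed in federal district court

def next_appeal(stage: ClaimStage) -> Optional[ClaimStage]:
    """Return the stage a denied claimant may pursue next, or None after district court."""
    order = list(ClaimStage)
    i = order.index(stage)
    return order[i + 1] if i + 1 < len(order) else None

print(next_appeal(ClaimStage.DDS_INITIAL_DECISION).name)  # DDS_RECONSIDERATION
```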
However, as our prior work noted, SSA encountered performance and other problems during its initial pilot testing of the Reengineered Disability System and, after spending more than $71 million, suspended this project in 1999. In August 2000, SSA renewed its commitment to developing an electronic disability system by the end of 2005. The agency worked on this initiative through the spring of 2002, at which time the Commissioner of Social Security announced an accelerated electronic disability initiative—AeDib—to more quickly move to an automated process. Under the accelerated strategy, the agency planned to begin implementing its electronic disability system by January 2004. SSA anticipated that the electronic disability system would enable the disability offices to achieve processing efficiencies, improve data completeness, reduce keying errors, and save time and money. With technologically enhanced claims processing offices, the agency projected that it could realize benefits of more than $1 billion—at an estimated cost of approximately $900 million—over the 10-year life of the initiative. SSA reported actual AeDib costs of approximately $215 million through fiscal year 2004 for planning, hardware and software acquisition, maintenance, and personnel. The AeDib strategy focuses on developing the capability to electronically process claimant information and large volumes of medical images, files, and other documents that are currently maintained in paper folders. Stored in electronic folders, this information could then be accessed, viewed, and shared among the disability claims processing offices. The initiative to achieve this electronic capability involves five key projects: an Electronic Disability Collect System that would provide the capability for SSA field offices to capture electronically, in fixed data fields, information about a claimant’s disability that previously had been contained on paper disability forms (structured data) and to store it in databases for later use by the SSA and DDS offices responsible for processing disability claims; a Document Management Architecture to provide a data repository and scanning and imaging capabilities that would allow unstructured claimant and medical data, such as images or information not found in fixed data fields (e.g., a hospital report, doctors’ notes, or an x-ray report), to be stored, indexed, and shared among the disability processing offices; Internet applications to enable the public to submit disability claims and medical information to SSA via the Internet (all data keyed into the Internet applications would be transmitted directly into the Electronic Disability Collect System); a systems migration and electronic folder software interface to position DDS offices to operate on a common IBM-series hardware platform and enhance their existing claims systems to process the electronic claims information and to enable the DDS systems to access information in the electronic folder; and a Case Processing Management System that would interface with the electronic folder and enable OHA’s staff to track, manage, and complete case-related tasks electronically. According to SSA, the Electronic Disability Collect System and the Document Management Architecture are the two fundamental components needed to create the electronic disability folder. Via their claims processing systems, SSA and DDS users would be able to access and pull the structured and unstructured claimant data into appropriate computer screens, organized as electronic folders of information.
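To make the relationship among these components more concrete, the following is a minimal sketch of how an electronic folder might tie structured claimant data to references into a document repository. The class and field names are hypothetical assumptions for illustration; they are not drawn from SSA's actual systems or schemas.

```python
from dataclasses import dataclass, field

@dataclass
class StructuredClaimData:
    # Fixed-field data captured in field offices, in the spirit of the Electronic
    # Disability Collect System; the field names here are illustrative assumptions.
    claimant_id: str
    alleged_onset_date: str
    impairments: list[str]

@dataclass
class RepositoryDocument:
    # Unstructured evidence (e.g., a scanned hospital report) held in a document
    # repository, as the Document Management Architecture is described to provide.
    doc_id: str
    doc_type: str   # e.g., "hospital report", "x-ray report"
    image_uri: str  # pointer to the stored image, not the image itself

@dataclass
class ElectronicFolder:
    # The electronic folder ties structured data to references into the repository,
    # so DDS and OHA systems can pull both onto their claims processing screens.
    case_number: str
    structured: StructuredClaimData
    documents: list[RepositoryDocument] = field(default_factory=list)

    def add_document(self, doc: RepositoryDocument) -> None:
        self.documents.append(doc)

folder = ElectronicFolder(
    case_number="EXAMPLE-0001",
    structured=StructuredClaimData("C-0001", "2004-01-15", ["back impairment"]),
)
folder.add_document(RepositoryDocument("DOC-1", "hospital report", "repo://example/doc-1"))
print(len(folder.documents))  # 1
```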
The agency’s electronic disability claims processing system is depicted in figure 1. By mid-January 2004, SSA had implemented all planned releases of the Electronic Disability Collect System and had completed and placed into production Internet applications to aid claimants in filing online for disability benefits and services. It also had enhanced the DDSs’ claims processing systems by migrating and upgrading hardware to allow these offices to operate on a common IBM-series platform and by upgrading the claims processing software in all but 3 state DDS offices that used the standard disability claims processing systems. In addition, SSA had begun pilot testing OHA’s Case Processing Management System in a standalone environment at five sites. Further, the agency was pilot testing the Document Management Architecture in three state DDS locations—North Carolina, Illinois, and California. However, it had not yet implemented the Document Management Architecture repository and scanning and imaging capabilities and related DDS software enhancements or the software to enable DDS and OHA systems to interface with the electronic folders. SSA began its national rollout of these remaining system components at the Mississippi DDS on January 26, 2004. When we last reported on the initiative in late March 2004, SSA was proceeding with its implementation of its electronic disability system. However, our work had noted that the agency’s strategy for developing the system components involved risks that threatened the success of the project. For example, we determined that the agency (1) had begun the national rollout without conducting testing that was adequate to evaluate the performance of all system components collectively, (2) could not provide evidence that it was consistently applying established procedures to guide the AeDib software development or had developed risk mitigation strategies, (3) had not validated its analysis to ensure the reasonableness of estimated AeDib costs and benefits, and (4) had not articulated a comprehensive plan for ensuring that state DDSs’ concerns about the initiative were addressed. In view of the risks and the technological complexity, scope, and size of the initiative, we had recommended that the Commissioner of Social Security, before continuing with the national rollout of AeDib, ensure that all critical problems identified in pilot testing of the electronic disability system were resolved and that end-to-end testing of the interrelated systems was performed, ensure that the software that had been developed was approved and that the systems had been certified for production, establish a revised time frame for and expedite actions toward finalizing AeDib risk mitigation strategies, validate all AeDib cost and benefit estimates, and implement a communications plan to clearly and comprehensively convey SSA’s approach for effectively addressing disability stakeholders’ and users’ concerns and ensuring their full involvement in the AeDib initiative. SSA is proceeding with a national rollout of its electronic disability system and has generally met its schedule for implementing the remaining key components—the Document Management Architecture and the electronic folder interface software—that are required to process an entire disability case electronically. Nonetheless, the agency has considerable work to accomplish before it will be effectively positioned to fully process all disability claims in an electronic environment. 
Among the critical tasks that remain are certifying all state DDS offices and OHA sites to electronically process claims and addressing operational and other concerns that threaten to undermine the reliability and use of the system. Until SSA has effectively addressed these matters, it remains uncertain when and to what degree the agency will realize the full benefits of its electronic processing capability. The AeDib implementation schedule had called for all state DDSs and OHA sites to be equipped with the electronic disability claims processing capability by June 27, 2005, and October 3, 2005, respectively. Since beginning the national rollout of the Document Management Architecture and related DDS software enhancements and the electronic folder interface software in late January 2004, the agency has largely met its implementation schedule. As of late June 2005, SSA had fully or partially implemented the electronic disability system in 53 of the 54 state DDS offices and in 85 of the 144 OHA sites, as planned. The agency reported that it expected to finish implementing the electronic disability system in the one remaining DDS—New York—in October 2005. SSA officials attributed the 4-month delay in the planned implementation at the New York DDS to the need for additional time to interface the electronic disability system with that state’s existing claims processing capabilities. New York and Nebraska are the only two DDSs in which the states’ claims processing capabilities are not supported by the common hardware platform that the majority of DDSs use and that have developed and rely on disability claims processing software that is unique to their processing environments. As a result of New York’s efforts to develop and test an electronic disability claims process, it had achieved a level of electronic processing, including the capability to scan medical evidence into its system, prior to SSA’s completion of the electronic disability system. SSA and New York DDS officials agreed to interface the new electronic disability system with portions of that state’s existing claims processing capabilities. In addition, SSA officials reported, as of early July 2005, that they expected to meet the scheduled completion date of October 3, 2005, for 115 of the 144 OHA sites. They stated that the agency expected to complete implementation of the electronic disability system at the remaining 29 OHA sites approximately 1 month later, in November 2005. According to agency officials, 10 of the 29 sites support claims that are processed by the New York DDS. The agency delayed implementation at these sites in order to be more in step with New York’s revised implementation schedule and with anticipated time frames for when the DDS will be positioned to process disability cases using the electronic folder. Regarding the remaining 19 sites, officials explained that the agency did not wish to train staff and provide the electronic folder capability to OHA sites too far in advance of when these offices expected to receive electronic cases from the DDSs, believing that too much lag time between training and actual use of the system could result in the staffs’ losing some of the knowledge and skills they need to process cases electronically. Although the roll out of the electronic disability system is moving toward completion, the agency still has considerable work to accomplish before it will be effectively positioned to process all disability claims in a fully electronic environment. 
After implementing the electronic system, each DDS must undergo an assessment of the quality and accuracy of its electronic processing capabilities and must be certified by SSA to use the electronic folder as its official disability claims record. This assessment, referred to as the Independence Day Assessment, is intended to validate that an office is ready to process 100 percent of the initial disability claims and any reconsiderations that it receives in the electronic environment and that the electronic folder can serve as the official disability claims record. According to SSA Operation’s staff, the assessment involves examining the disability office’s operations and claims processing tasks to ensure that (1) the business process (e.g., the way in which the disability claims office is organized to do its work) and the electronic processing environment are compatible; (2) existing claims processing systems have the necessary functionality to process electronic folders; and (3) staff can, when using the electronic disability system, produce complete information that equals what is contained in the paper folders. As of early July 2005, SSA reported that only three state DDSs—Mississippi, Illinois, and Hawaii—had completed assessments and been certified to process all initial disability claims in a paperless electronic environment in which the electronic folder is recognized as the official disability claims record. SSA reported that it had certified Mississippi—the first state under the national rollout—to process all of its initial disability claims electronically by February 2005; the agency further reported that it had completed certifications in June 2005 for Illinois—one of the three states that had participated in a pilot test of the Document Management Architecture in 2003—and for Hawaii, one of the smallest states (consisting of 15 disability claims examiners), which completed its system’s implementation in February 2005. In discussing Mississippi’s certification, the DDS director stated that the approximately 1-year time frame between the implementation and the certification of the office’s system had been devoted to such tasks as updating and testing software versions in order to give the office the full complement of functionality that it would need to use the electronic folder and to ensuring that the office’s business processes effectively supported electronic processing by, for example, familiarizing staff with the electronic capabilities, training them in using the capabilities, and testing the use of scanning equipment in the office. The director added that, since Mississippi was the first state to be assessed and certified, SSA had reviewed a substantially larger number of disability cases (approximately 300) than it intends to review in the assessments of other states using the same claims processing software. SSA officials said that, following Mississippi’s certification, the Commissioner of Social Security had placed a moratorium on any additional certifications pending SSA’s review of the assessment that had been undertaken in that state. They said that the commissioner had wanted to capture lessons learned from Mississippi’s assessment to identify any needed improvements in the assessment process and to ensure that any business or user concerns about the assessments were resolved before they applied the process broadly across all DDSs and OHA sites. 
For example, according to the officials, they learned that a smaller sample of disability cases could be examined in subsequent offices using the same software as Mississippi’s without diminishing the integrity of the assessment and the related certification. As of July 2005, SSA officials told us that they had resumed the assessments and that the agency’s plans called for a total of seven state DDSs to be certified to process claims electronically by the end of fiscal year 2005. However, as indicated in table 1, not all of the 54 state DDSs are expected to be certified to process initial disability claims and to use the electronic folder as the official record until January 2007. According to OHA’s deputy director, SSA expects to certify each OHA site shortly after certifying the corresponding DDS office. The official noted that because all OHA sites will rely on the same standard system (the Case Processing Management System), their certification process is expected to be less complicated than the process for the DDSs. Until they are certified, offices that have already implemented the electronic disability system are expected to maintain paper folders as well as electronic ones for any initial disability cases that they process electronically. The paper folders will continue to serve as the official records for these cases. Even as the agency proceeds in certifying states’ electronic capabilities, however, operational concerns associated with the electronic disability system could undermine its reliability and use. Officials in seven of the nine DDS offices that we contacted (California, Delaware, Florida, Illinois, Mississippi, North Carolina, and South Carolina) stated that operational problems they had encountered while using the electronic disability system had affected its performance and raised doubts about its reliability in supporting their processing needs. DDS officials stated, for example, that as SSA had brought the system online at the different DDS offices and/or added new software to enhance functionality, their staffs had encountered various operating problems that affected the performance of the system. They stated that their offices had experienced problems such as computer screen freezes, system slowdowns, and system access issues—all of which had disrupted the offices’ processing of claims. They described these problems as unpredictable and random because they did not always occur consistently among all of the offices using the same claims processing software or even among examiners in a particular office. The officials added that although SSA has been able to resolve many of the problems that affected their ability to process claims, additional instances of screen freezes, system slowdowns, and access issues have continued to occur throughout the system’s implementation. According to the manager of the South Carolina DDS, which implemented the electronic disability system in March 2004, that office’s productivity had been adversely affected by system slowdowns that resulted from having inadequate network bandwidth to support its disability claims processing operations. In July 2005, the manager stated that SSA had recently made modifications to the office’s network architecture and had increased its bandwidth by installing two additional communications lines; at that time, the office was in the process of testing these enhancements. 
The manager said that all of the office’s disability claims examiners had begun processing all initial disability claims electronically and that, as a result of the enhancements, they expected the office’s claims processing efficiency and productivity to increase. In addition, DDS officials in six offices (California, Delaware, Florida, Mississippi, North Carolina, and South Carolina) reported problems with the electronic forms that SSA had installed to facilitate the processing of disability claims. The officials explained that, while using the electronic forms, disability claims examiners had experienced slow system response times or system freezes that had contributed to increased claims processing times. Officials in California stated that they had stopped using the electronic forms as a result of the problems that their staff had encountered and instead were continuing to rely on paper forms. In a February 2005 survey, the National Council of Disability Determination Directors found that 22 DDSs had experienced problems with the electronic forms and had reported that the slow pace involved in loading and using these forms was barely tolerable. Further, based on its May 2005 quarterly evaluation of the electronic disability system, SSA reported that one of the systems-related problems that disability processing offices identified most frequently was the slow response times and lack of user- friendliness of the electronic forms. Further, officials in five of the DDSs that we contacted among those that had implemented the electronic disability system (California, Illinois, Mississippi, North Carolina, and South Carolina) stated that their disability examiners faced difficulties in reading medical evidence on screen and performing certain case development or adjudication tasks because the size of the computer monitors that they had been provided to process medical evidence had proven inadequate. Users of the system—including managers, claims examiners, and medical consultants—reported difficulty with simultaneously viewing two documents on their monitors; some staff reported that they had resorted to printing out or toggling between documents to avoid using the split screen to review them. As a result of the inadequacies associated with using the existing monitors, the users reported that they needed longer periods of time to perform certain claims adjudication tasks and that they had been unable to complete as many cases per day as they could before they had the electronic disability system. Beyond these concerns, officials in four of the DDSs that we contacted told us that, although their electronic capability had been implemented, they had not been provided certain software enhancements that they needed to fully process a claim electronically and that were critical to improving the efficiency of their offices’ claims processing capabilities. Specifically, Florida officials stated that their claims processing software did not provide the capability to electronically notify staff of the actions that were required to process a claim. As a result, the staff had to expend additional time notifying each other of actions needed to process the claim by, for example, sending e-mail notices. In addition, the director of the North Carolina DDS stated that that office lacked the necessary software to enable staff to electronically send claims files to SSA’s Disability Quality Branch, which is responsible for conducting quality reviews of the accuracy of DDSs’ disability determinations. 
The director added that staff could not yet respond electronically to SSA components that requested information on a particular claim that had been processed in their office. Further, in Nebraska, the DDS manager stated that that office was in need of additional software modifications to provide the functionality required to electronically import all required documents into the electronic folder. Finally, the manager of the California DDS stated that his office had lacked the capability to electronically process nonmedical claims data that were required to be included in the disability claims folders, such as a claimant’s work history. The manager stated that they also had lacked the functionality to electronically refer claims to medical consultants for consultative examinations. In light of the operational and other concerns that have been encountered in using the electronic disability system, coupled with factors such as having to concurrently maintain paper and electronic claims folders while awaiting certification, both SSA and DDS managers acknowledged that the DDS offices had exercised wide discretion in their use of the new system. The President of the National Council of Disability Determination Directors stated that two key factors had affected some DDSs’ decisions about ramping up to full use of the system: (1) concerns about a drop in productivity in a fully electronic environment and (2) the instability of the electronic disability processing environment, particularly in terms of system performance and software reliability. In this regard, officials in eight DDS offices that we contacted—all of which had implemented the electronic disability system by the end of June—reported varying levels of usage, as shown in table 2. Regarding the extent of their usage, managers in the Mississippi and Illinois DDSs acknowledged that problems had been encountered following their implementation of the electronic disability system but stated that they had nonetheless chosen to expedite efforts to achieve full electronic processing of disability claims in an attempt to minimize the inefficiencies associated with having to maintain both paper and electronic disability folders. Further, to help bring their states to full electronic processing, the managers of the Mississippi and Illinois DDSs stated that they had expended additional resources on overtime pay to disability examiners and on additional support from the software vendor to alleviate and/or establish workarounds for the operational problems that their examiners had encountered. While Mississippi officials said that they were unable to provide a dollar amount for their overtime usage, an Illinois DDS official provided documentation indicating that between September 2003 and May 2005, that office had spent over $2 million on overtime, which assisted them in processing disability claims in the electronic environment. However, DDS managers in several other offices that we contacted stated that, as a result of the problems with and the resulting unpredictability of the electronic disability system, they had been reluctant to bring the system to full use. They expressed reluctance to increase their use of the system until a more reliable level of performance has been sustained, stating generally that the current problems with the system could hamper their ability to maintain their productivity levels. 
For example, officials in the California DDS told us that the electronic disability system had not been used to fully process any of the approximately 450,000 initial claims that the office had received since it had implemented the system in October 2003. The officials stated that they had chosen not to ramp up the system until it proved to be more stable and all critical processing capabilities had been delivered. The manager believed that trying to use the system to process all of the office’s initial disability claims before the problems affecting their system’s operations were resolved and before all critical processing capabilities were delivered would prevent the office from maintaining its productivity levels. In mid- July 2005, California DDS officials stated that the vendor supporting their disability claims processing system had recently provided a software enhancement that gave them the capability to fully process claims and that the agency would begin a phased increase in the number of initial disability claims it would process in the electronic environment. According to the manager, each examiner would initially be assigned one disability case per week to process electronically. In addition, in Florida, DDS officials stated that they had limited the number of claims being processed by their disability examiners until SSA was able to enhance their software to achieve better user efficiency. As a result, at the time of our review, only 109 of the office’s 487 disability examiners were using the electronic system to process cases, and only about 4 to 5 percent of initial disability claims were being processed electronically. Further, while the Nebraska DDS’s electronic capabilities were implemented in late June 2005, officials in that office stated that they did not plan to ramp up their use of the system until about September 2005. They explained that, lacking the software modifications required to electronically import all documents into the electronic folder, the office was reluctant to increase its use of the electronic disability system. They stated that doing so would require them to commit additional resources to scan documents that could not automatically be entered into the electronic system. The officials added that they expect to receive additional software modifications that they need to improve the efficiency of the office’s electronic processing capability in the September 2005 time frame. Given the current status of the electronic disability system, neither SSA nor the state offices had yet been able to effectively assess or quantify benefits resulting from its use. All of the managers of the DDS offices that we contacted stated that they saw the potential for realizing substantial claims processing improvements from using the system; nonetheless, these managers—including the managers of the Mississippi and Illinois DDSs— stated that it was too early to determine whether and to what extent the electronic disability system would contribute to processing improvements in their offices. 
In their view, the system had not yet reached a level of maturity where it was feasible to quantify the benefits of its use, due in large measure to factors such as (1) the learning curve associated with using the system; (2) current inefficiencies involved in having to maintain paper folders until an office is certified to electronically process claims; (3) certain DDSs’ decisions to not fully utilize the system until further problem resolution; and (4) certain offices’ use of additional resources, such as overtime and temporary hires, to support their processing of claims following the system’s implementation. The managers added that processing claims electronically had thus far taken longer and consumed more resources than before the electronic system was implemented. In addition, because of ongoing system implementation in the OHA sites, along with the normal processing delays associated with bringing disability claims to the appeals stage, these sites had not yet accrued enough experience in using the electronic folder to make a reasonable assessment of processing improvements. In speaking to the concerns that were raised about the reliability and use of the electronic disability system, SSA officials acknowledged the problems that had been identified by the DDSs, and that as a result, current use of the system among those offices varied considerably. However, these officials said they believed that the majority of the DDS offices would be able to bring their systems to full use with only a minimum of complications; they viewed California’s concerns, in particular, as not having been representative of other states’ experiences. Nonetheless, the officials said that the agency had initiated a number of measures to address the problems that had been encountered in using the system. For example, they stated that the agency had established a new help desk to more readily support those offices experiencing specific hardware and software problems while using the electronic disability system. In addition, they stated that SSA had assembled a work group to examine the DDSs’ use of the electronic forms, with the intent of determining whether a more suitable commercial-off-the-shelf product was available that could address the problems currently being encountered with these forms. Regarding the size of the computer monitors that disability examiners use, the officials stated that the agency planned to conduct a pilot test to address concerns with and identify a solution for ensuring that users have the monitors they need. SSA’s actions to address the outstanding concerns with its electronic disability system represent a positive step toward achieving success in the use of the new system. However, as of July 2005, the agency did not have an overall strategy—articulating milestones, resources, and priorities—to guide its efforts in efficiently and effectively resolving the operational problems and system limitations being experienced with the electronic disability system. For example, although the agency had established a work group to explore options for resolving problems being encountered with the electronic forms, it had not yet established plans and a time frame for completing actions to address this concern. 
In addition, although the agency planned to conduct a pilot test of computer monitors, it did not yet have essential information to determine what type of equipment would best meet the needs of the electronic disability system users or the resources and time that it would need to devote to resolving this matter. Adequately resolving the concerns with its electronic capabilities, and gaining their full acceptance and use, will be essential to SSA’s achieving a more efficient means of delivering disability benefits payments to its increasing beneficiary population. In addition to ensuring the immediate availability of the electronic disability claims processing system by preventing operational problems that could impact its performance and use, it is essential that SSA and the DDSs have plans for mutually ensuring the continuity of this vital disability benefits service in emergency situations. Federal law and guidance require that agencies develop plans for maintaining services and protecting vital assets during emergency situations that could result from disruptions, such as localized shutdowns due to severe weather conditions, building-level emergencies, or terrorist attacks. Moreover, this guidance notes, a key element in developing a viable continuity capability is identifying interdependencies among agencies that support the performance of essential functions and ensuring the development of complementary continuity of operations plans by those agencies that provide information or data integral to the delivery of essential functions. Such planning would include developing and documenting procedures for continued performance of essential functions, identifying alternates to fill key positions in an emergency and delegating decision-making authority, and identifying vital electronic and paper records—along with measures for ensuring their protection and availability. However, SSA and the DDSs currently lack continuity of operations plans to ensure that the DDS offices could continue to process disability claims in the event of a short- or long-term disruption to the electronic disability system. A September 2004 report, issued by SSA’s Acting Inspector General, noted that the agency’s existing continuity of operations plan did not address the information or the electronic disability claims processing systems managed by the DDSs. The report further noted that, in relying heavily on the DDSs, SSA would lack certainty about the availability of information from these offices in the event of a disaster. Based on its findings, the Acting Inspector General recommended that SSA implement a complete and coordinated continuity of operations plan for the agency. Officials in the nine DDSs that we contacted further stated that their offices had not developed continuity of operations plans covering the electronic disability claims processing capabilities; yet, in discussing this matter, officials considered such plans to be vital to successfully ensuring the continued processing of disability claims. Officials in the Mississippi DDS stressed, for example, that in the event of a disruption to their system’s communications with SSA’s headquarters computer facilities, disability examiners would be unable to access the Document Management Architecture repository, send or receive faxes via the electronic system, or access the electronic forms they needed to support their work.
In addition, they stated that medical examiners would be unable to perform tasks in support of disability determinations. In discussing this matter, SSA officials acknowledged that their existing plan had not addressed the electronic claims processing functions of the DDSs. They stated that the agency had recently initiated actions to help resolve this limitation by having a contractor develop a business continuity planning strategy. According to the officials, the contractor began work on this strategy in May 2005 and is expected to deliver an initial report in September 2005. However, the officials did not articulate the agency’s specific plans or a time frame for ensuring that its continuity of operations plan addresses the electronic claims processing functions of the DDS offices or for ensuring that these offices develop and implement complementary plans for continuing essential functions to support the disability claims process in an emergency situation. As SSA moves toward full implementation and use of the electronic disability system, the capability to continue essential electronic disability claims processing functions in any emergency or situation that may disrupt normal operations becomes increasingly important. In view of the fact that three states have already begun using electronic folders as official disability claims records, it is imperative that both SSA and the DDSs have plans that address the state systems’ interdependencies with the electronic disability claims processing components and that include preparations for continuing to provide critical claims processing services in the event of a disaster. Without continuity of operations plans, SSA will lack assurance that it is positioned to successfully sustain the essential delivery of disability benefits during unforeseen circumstances. As discussed earlier, in reporting on this initiative in March 2004, we recommended that the agency take measures to reduce the risks associated with its electronic disability strategy before continuing with its national rollout of this capability. These recommendations called for the agency to (1) resolve critical problems that it had identified in pilot testing of the electronic disability system and conduct end-to-end testing of the interrelated system components, (2) ensure that users approved the software being developed and that systems were certified for production, (3) finalize AeDib risk mitigation strategies, (4) validate AeDib cost and benefit estimates, and (5) improve communications with and effectively address the concerns of disability stakeholders and users involved in the initiative. In proceeding with the implementation of its electronic disability system, SSA has taken actions related to three of the five recommendations. Specifically, SSA officials provided evidence indicating that the agency has taken measures to ensure that users approve new software and that it certifies its systems for production. For instance, we reviewed agency documentation reflecting disability system users’ approval of new software and SSA’s certification of over 50 cases where software was put into production from February 2004 (shortly after the national rollout of the electronic disability system began) through October 2004. By continuing to validate its software and certify its systems, SSA should be able to better ensure that its systems are ready for production and will be acceptable to their end users. 
In addition, regarding our recommendation that it validate AeDib cost and benefit estimates, SSA has initiated studies, including quarterly evaluations of the initiative, that could help it assess the electronic disability system’s performance, costs, and processing times. The agency also has plans for conducting post-implementation reviews of the system, which include comparing baseline and current information to evaluate the system’s impact on performance, productivity, and cost—measures that, if implemented fully and effectively, could help validate AeDib’s costs and benefits. Further, although the agency has not implemented a communications plan, DDS officials, including the President of the National Council of Disability Determination Directors, told us that SSA had improved its communications with these offices and had made progress in including DDS officials in AeDib decision making. Such action reflects a positive move toward involving stakeholders in the agency’s efforts. Nonetheless, we continue to emphasize the importance of having a clear and comprehensive plan for communicating with stakeholders to sustain vital user acceptance and achieve full use of the electronic disability system. However, SSA did not demonstrate any actions on two of the recommendations. As we previously noted, SSA did not take steps to resolve all of the critical problems that had been identified during pilot testing of the Document Management Architecture or to conduct end-to-end testing of the interrelated electronic disability system components before continuing with the national rollout of this system. Resolving all critical problems and conducting end-to-end testing of the interrelated system components prior to their implementation could have limited the problems that SSA and the DDSs have encountered with the electronic disability system’s operation. In the absence of such testing, as SSA moves to achieve certification and full use of the system, it will be essential that the agency work diligently to identify and alleviate the problems that could impact the successful outcome of this technically complex initiative. Further, while our earlier work noted that the agency had identified the risks associated with the AeDib initiative and the related automation projects, SSA has not provided any evidence that it has yet completed risk mitigation strategies for these projects. Best practices and federal guidance advocate risk management—including mitigation strategies—to reduce risks and achieve schedule and performance goals. Among the high-level risks identified, SSA noted that the overall availability of the Document Management Architecture might not meet service-level commitments to its users. Because the DDSs could not effectively perform their work if the data repository or document scanning and imaging capabilities provided by the architecture were not available, it is critical that SSA have mitigation strategies in place to reduce this risk and to help ensure that the DDSs can meet their performance goals. In the continued absence of risk mitigation strategies, the agency lacks a critical means of ensuring that it can prevent circumstances that could impede a successful project outcome. SSA is relying on its electronic disability system to play a vital role in improving service delivery to disabled individuals under its disability programs, and the agency has made considerable progress in implementing this system. 
However, even as the agency moves closer to achieving full systems implementation, important work still needs to be accomplished to ensure the system’s success. Among the agency’s critical tasks will be certifying that all of the SSA and state DDS offices are prepared to process all disability claims electronically. Yet, a number of DDSs have concerns about the operations and reliability of the electronic disability system, noting, for example, inadequacies in electronic forms and the computer monitors used to view claims information, as well as limitations in electronic processing capabilities—factors that they say have slowed system performance and impeded their productivity, and that have resulted in the levels of system usage varying among the DDSs. Further, as the agency moves to complete the system’s implementation, it will be essential that SSA and the DDSs have plans for mutually ensuring the continuity of this vital disability benefits service in emergency situations. The absence of a defined strategy or plans for ensuring that the electronic disability system will operate and will meet users’ needs as intended could threaten the continued progress and success of this initiative and make it uncertain when the agency will realize the full benefits of the AeDib initiative. To further reduce the risks to SSA’s progress in successfully achieving its electronic disability claims processing capability, we recommend that the Commissioner of Social Security take the following two actions: develop and implement a strategy that articulates milestones, resources, and priorities for efficiently and effectively resolving problems with the electronic disability system’s operations, including (1) identifying and implementing a solution to improve the use of electronic forms, (2) identifying and implementing a solution to address concerns with existing computer monitors, and (3) ensuring that the DDSs have the necessary software capabilities to fully and efficiently process initial claims in the electronic processing environment; and ensure that the state DDSs develop and implement continuity of operations plans that complement SSA’s plans for continuing essential disability claims processing functions in any emergency or other situation that may disrupt normal operations. In written comments on a draft of this report, the Commissioner of Social Security expressed concern about our references to the agency’s testing of its electronic disability system and offered additional views regarding our discussion of the state disability determination services’ use of overtime to assist in electronically processing disability claims. In addition, the agency disagreed with one of our recommendations and agreed with the other. Regarding the testing of its electronic disability system, SSA questioned why our report had concluded—months after the agency’s rollout of the system—that performing end-to-end testing of the interrelated system components was still critical to the initiative’s success. The agency believed that we should delete references to the criticality of such testing for this initiative so as not to lend confusion to and cast doubt on its rollout experience. In discussing SSA’s decision not to conduct end-to-end testing before rolling out the electronic disability system, our report responds to one of the two stated objectives of our study—to determine the actions that SSA has taken in response to our prior recommendations on the AeDib initiative. 
In doing so, we did not conclude that it remains critical for SSA to perform end-to-end testing at this stage in the system’s implementation. Rather, in speaking to this issue, we described the agency’s response to our prior recommendation that it conduct end-to-end testing before proceeding with the national rollout, and we emphasized the importance of such testing as a means of limiting the types of problems that DDS officials told us they have encountered with the system’s operation. As our report stresses, in the absence of end-to-end testing, it is essential that SSA remain diligent in identifying and alleviating the problems that could impact the successful outcome of the AeDib initiative as the agency moves to achieve full certification and use of the electronic disability system. Concerning the DDSs’ reported cost and use of overtime, SSA emphasized its belief that a number of factors other than the electronic disability system had contributed to the Illinois DDS’s increased use of overtime. For example, the agency said that overtime had been used to compensate for the loss of DDS employees who had accepted the state’s offer of early retirement. Based on our discussions with Illinois DDS officials, we understand that the office may not have used overtime solely to support its electronic processing of disability claims. However, as noted in our report, the Illinois officials told us that they had relied on overtime to assist in bringing their office to full electronic processing of disability claims using the new system. We have included language in the report to more clearly reflect this point. Beyond these points of discussion, SSA disagreed with our recommendation that it develop and implement a strategy that articulates milestones, resources, and priorities for efficiently and effectively resolving problems with the electronic disability system’s operations, including (1) identifying and implementing a solution to improve the use of electronic forms, (2) identifying and implementing a solution to address concerns with existing computer monitors, and (3) ensuring that the DDSs have the necessary software capabilities to fully and efficiently process initial claims in the electronic processing environment. Specifically, the agency stated that it had substantially improved its electronic forms and already has a strategic project plan to address residual issues concerning them. It noted that a work group established in January 2005 had examined this issue and that many key recommendations had been adopted to improve the performance of and customer satisfaction with the electronic forms. SSA added that it is using a contractor to examine alternatives and determine if more robust software is available to better meet users’ needs, and that it may incorporate new software into the electronic disability claims process. Our report recognized that SSA had established a work group to explore options for resolving problems with the electronic forms. However, during our study, SSA officials could not provide a timetable for the work group’s efforts and, despite our inquiries, gave no indication that the agency had a defined strategy for addressing this area of concern. Further, SSA did not inform us of a specific contract to examine software alternatives or of the specific recommendations that have been made to correct problems with the forms and the improvements in performance and customer satisfaction that have been achieved. 
Thus, we did not have an opportunity to evaluate and comment on the agency’s actions in this regard. Given the DDSs’ expressed concerns about the electronic forms throughout the course of our study and as reflected in SSA’s own quarterly evaluation of the electronic disability system, we continue to stress the importance of SSA developing and implementing a strategy to guide its efforts toward efficiently and effectively resolving problems with this important electronic capability. Regarding its existing computer monitors, SSA stated that it already has a plan in place to evaluate this issue and potential ergonomic solutions. The agency stated that it had awarded a contract to conduct a controlled test of the impact of various monitor configurations on ergonomics and productivity for all primary users of the electronic disability system, and that a final report is due in January 2007. It added that when final decisions are made regarding the appropriate monitor requirements, a business plan will be deployed if warranted. As noted in our report, SSA officials did inform us of their intent to conduct a pilot test to identify potential solutions for ensuring that users have appropriate monitors. However, the officials could not provide contractual and other pertinent documents explaining the pilot study and did not inform us of any plan that the agency had developed to guide this effort. Given that SSA does not anticipate its final report on its test of the monitor configurations until January 2007, and does not intend to consider the deployment of a business plan until final decisions are made regarding the monitor requirements, we believe that the agency could benefit from a strategy articulating clear milestones, resources, and priorities to guide its efforts toward finalizing decisions on its computer monitors and ensure that all users’ concerns are fully and effectively addressed. Further, regarding the DDSs’ software capabilities, SSA stated that it had established a help desk to provide support for specific hardware and software issues associated with the electronic disability system, and that it had no information that there are outstanding issues concerning DDS software support. Moreover, the agency stated that our report had inaccurately described supposed issues related to implementing the electronic disability system in light of some states having not yet been certified to fully process cases electronically. For example, SSA commented that because Florida had not yet been certified to fully process cases electronically, it was premature to expect that the electronic disability system would notify staff of actions to process claims. Similarly, SSA stated that because Nebraska had not been certified to process cases electronically, it was premature to indicate that the state needed additional software modifications to electronically import all required documents into the electronic folder. Further, the agency said that it was inaccurate for us to report that the North Carolina DDS lacked the necessary software to enable staff to electronically send claim files to SSA’s Disability Quality Branch, since all but two states—New York and Nebraska—had this software. It added that the agency’s planning and electronic disability system roll out had addressed software capabilities and other issues impacting individual DDSs, negating the need for a formal strategy, as we have recommended. 
We disagree that our report inaccurately described issues related to implementing the electronic disability system or that it prematurely highlighted limitations in certain states’ use of the system. A primary aspect of our review involved examining SSA’s progress in rolling out the electronic disability system, including the state DDSs’ experiences in implementing and using the system. In this regard, DDS officials apprised us of their electronic disability claims processing capabilities, and in certain instances, of their need for additional software capabilities that they deemed essential to improving their offices’ processing efficiency and sustaining productivity in the electronic environment. As of mid-July 2005, North Carolina DDS officials told us that they lacked the necessary software to enable staff to electronically send claim files to SSA’s Disability Quality Branch. We recognize that SSA does not expect to complete all states’ certifications until early 2007. However, in our view, until the agency ensures that all DDSs have full electronic processing capabilities, it will not be positioned to effectively assess the extent to which the electronic disability system can contribute to a more efficient and effective disability claims process. SSA further stated that system availability issues had been addressed, as evidenced by its current fiscal year data on key electronic disability system components (e.g., the Electronic Disability Collect System, the Case Processing Management System, and the Document Management Architecture). However, our report does not convey that these key electronic disability system components were not available for use. Rather, the concerns discussed in our report pertain to the inefficiencies that DDS officials said they had encountered in using the electronic disability system. For example, DDS officials pointed to the continuing instances in which they had experienced processing slowdowns when using electronic forms, which ultimately had impeded their disability claims processing capability. Thus, we stand by our recommendation that SSA develop and implement a strategy to ensure that the DDSs have the necessary software capabilities to fully and efficiently process claims in the electronic environment. Regarding our second recommendation, the agency agreed to ensure that the state DDSs develop and implement continuity of operations plans that complement SSA’s plans for continuing essential disability claims processing functions during emergencies or other disruptions to normal operations. In this regard, the commissioner stated that SSA is highly committed to providing uninterrupted services to the public and had hired a contractor to develop business continuity plans for the DDSs that document how these offices would respond to long- and short-range disruptive events. The actions that SSA stated that it is taking should help improve the agency’s and the DDSs’ ability to ensure the continuity of vital disability benefits services in emergency situations. In addition to the aforementioned comments, SSA provided technical comments, which we have incorporated, as appropriate. Appendix II reproduces the agency’s comments on our draft report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Commissioner of Social Security and the Director, Office of Management and Budget. 
Copies will also be available at no charge on our Web site at www.gao.gov. Should you have any questions on matters contained in this report, please contact me at (202) 512-6240. I can also be reached by email at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to (1) assess the current status of SSA’s accelerated implementation of its electronic disability system—the initiative known as AeDib and (2) identify actions the agency has taken in response to our prior recommendations on this initiative. To assess the agency’s status in implementing its electronic disability system, we analyzed relevant project management documentation, including schedules, project plans, and reports documenting the status of the system’s rollout to the 54 state disability determination service (DDS) offices and SSA’s 144 Office of Hearings and Appeals (OHA) sites. In addition, we reviewed technical documentation, such as software project scope agreements and software development plans, to assess the development, implementation, and operation of the electronic disability system. We also reviewed system release certifications to ensure that systems had been validated and certified. To identify issues that arose during the AeDib implementation process, we reviewed problems reported by the DDSs via SSA’s Change Asset Problem Reporting System. We also reviewed the results of the National Council of Disability Determination Directors’ February 2005 survey of its member DDS offices on their experiences in implementing the electronic disability system, as a means of identifying any problems and issues that the states had encountered. In addition, we reviewed reports on the system’s implementation, performance, and capacity that had been prepared by the Council and the DDSs. We supplemented our analysis with interviews of SSA officials in the Offices of Operations, Systems, Disability and Income Security Programs, and Hearings and Appeals. We also interviewed DDS officials in nine states: California, Delaware, Florida, Illinois, Mississippi, Nebraska, New York, North Carolina, and South Carolina. In addition, we interviewed the President of the National Council of Disability Determination Directors, an organization that represents the DDSs. Our selection of the nine states was based on the following criteria: The Mississippi DDS was the first state to which the electronic disability system was rolled out, as well as the first state to achieve total electronic processing of initial disability cases. The California, Illinois, and North Carolina DDSs had participated in initial pilot tests of the electronic processing system, which had included assessing use of the Document Management Architecture. The Florida and South Carolina DDSs were states that received the electronic disability system early in the implementation schedule. The New York and Nebraska DDSs posed potentially unique challenges as the only two “independent” states, whose existing claims processing capabilities were not supported by the common hardware platform that the majority of DDSs used and which had developed and were relying on disability claims processing software unique to their processing environments. The Delaware DDS was managed by the President of the National Council of Disability Determination Directors. 
We conducted site visits at two DDSs—Mississippi and South Carolina—to observe the electronic processing system in operation, and at OHA sites in these same states to discuss their experiences in implementing and using the electronic folder and their preparation for receiving appeals of initial claims that had been processed electronically in the respective state DDS offices. To determine what actions SSA had taken toward implementing our prior recommendations on the electronic disability system, we obtained and reviewed software project scope agreements, software development plans, user validation and system certification plans, and AeDib component security risk assessment documentation. We also interviewed agency officials regarding the status of their actions on each of the recommendations made in our March 2004 report on AeDib. In addition, we discussed SSA’s efforts to improve communications on the initiative’s implementation with DDS officials in each of the offices that we contacted and with the President of the National Council of Disability Determination Directors. We conducted our work at SSA’s headquarters in Baltimore, Maryland, and at selected DDS and OHA offices in Jackson, Mississippi, and Columbia, South Carolina, from October 2004 to July 2005, in accordance with generally accepted government auditing standards. In addition to the individual named above, Valerie C. Melvin, Assistant Director; Michael A. Alexander; J. Michael Resser; and Eric L. Trout made significant contributions to this report. Neil J. Doherty and Joanne Fiorino also contributed to the report.
Through an initiative known as AeDib, the Social Security Administration (SSA) is implementing a system in which medical images and other documents that have traditionally been kept in paper folders will be stored in electronic folders, enabling disability offices--including SSA's 144 Office of Hearings and Appeals sites and 54 state disability determination services--to process disability claims electronically. This initiative supports a program that, in 2004, made payments of approximately $113 billion to more than 14 million beneficiaries and their families. In March 2004, GAO recommended that SSA take steps to ensure the successful implementation of the electronic disability system. GAO was asked to assess SSA's status in implementing AeDib and the actions the agency has taken in response to GAO's prior recommendations on this initiative. Since January 2004, SSA has been implementing its electronic disability system at 53 state disability determination services and 85 Office of Hearings and Appeals sites. It plans to complete implementation in all state sites by October 2005 and all hearings and appeals sites by November 2005. Nonetheless, considerable work is needed before these entities will be ready to process all initial claims electronically. SSA's effort to certify all state offices to electronically process claims and maintain the electronic folder as an official claims record is not expected to be completed until January 2007. In addition, state disability officials expressed concerns about the system's operations and reliability and about limitations in their electronic processing capabilities. Accordingly, a number of the offices reported varying levels of system usage, and their officials said that processing claims electronically generally took longer and consumed more resources than the previous method. Further, SSA and the state disability determination services lacked continuity of operations plans for ensuring that states could continue to process disability claims during emergencies. As SSA has implemented its system, it has taken actions that supported three of GAO's five prior recommendations. It has initiated studies that could help validate AeDib planning assumptions, costs, and benefits. It has also approved new software and certified its systems for production. In addition, according to state disability officials, the agency had improved its communications with them. However, SSA did not demonstrate action on two recommendations calling for thorough testing of its interrelated system components before implementation and completion of risk mitigation strategies for the projects supporting the initiative. Thorough testing and risk mitigation strategies could have helped limit problems with the system's operation and other circumstances that could impede the project's success.
deductible as a business expense; this subsidy also is not considered taxable income for employees. In addition, tax benefits are available to individuals who purchase nongroup private insurance directly from insurers (referred to as “individual insurance”) if the person is self-employed or has premium and medical expenses combined that exceed 7.5 percent of his or her adjusted gross income. However, private insurance is not accessible to everyone. Some workers, including those working for small firms or in certain industries such as agriculture or construction, are less likely to be offered employment-based health coverage. Health insurance may also be expensive and potentially unaffordable for those paying the entire premium individually rather than receiving employment-based coverage where employers typically contribute to some or all of the cost. In addition, while all members of a group plan typically pay the same premium for employment-based insurance regardless of age or health status, in most states individual insurance premiums are higher for older, sicker individuals than for young, healthy individuals, potentially making them unaffordable. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) provided several important protections to improve the availability of private health insurance, particularly for individuals changing jobs or with preexisting health conditions. HIPAA included guaranteed access to coverage for those leaving group coverage and for small employers; however, it did not address issues of affordability. In addition, many states have enacted reforms that guarantee access to health insurance for certain high-risk individuals and small groups and that sometimes limit the premiums these persons and groups pay. While these federal and state private insurance market reforms provide important protections for certain individuals and groups, recent research finds little, if any, effect from these reforms on overall private insurance coverage rates. higher eligibility standards as long as they are within federal guidelines. SCHIP was established in 1997 to give states the choice of receiving enhanced federal funding to cover additional low-income children who do not qualify for Medicaid, generally those in families whose incomes are up to 200 percent of the federal poverty level. Unlike Medicaid, SCHIP is not an entitlement program, and states can halt enrollment once budgeted funds are exhausted. As of September 2000, HCFA reported that 3.3 million children were enrolled in SCHIP. Although Medicare primarily insures most Americans 65 years or older, it also provides coverage for some nonelderly individuals who are disabled or have end-stage renal disease. Additional tax incentives proposed to encourage people to purchase health insurance vary in terms of who would be eligible, whether the tax incentive is provided to individuals or employers, and whether the incentive is a deduction that reduces taxable income or a credit that reduces total tax liability. The proposals share challenges that will affect their success in covering newly insured individuals. These challenges include (1) making the reduction in premiums large enough to induce uninsured persons to purchase health insurance or to encourage employers to offer coverage or increase their contributions to premiums, and (2) timing a subsidy to be available for low-income individuals at the time they pay their premiums, rather than after the end of the tax year. 
their taxable income—potentially important if the employee must pay most or a large share (more than half) of the plan’s premium, since these employees are more likely to turn down employment-based coverage. A tax deduction may be limited in its ability to induce uninsured individuals to purchase private insurance because most uninsured individuals do not earn enough for a deduction to make any or a significant difference in their net health insurance costs. In 1999, about 40 percent of the uninsured either did not file income tax returns or were in the 0 percent marginal tax rate and would not benefit from the deduction if they purchased individual insurance. Nearly 50 percent of the uninsured were in the 15 percent marginal tax rate, which, if they purchased qualifying health insurance, would allow them a 15 percent net reduction in their insurance cost. Analysts have generally agreed that this level of reduction would encourage few additional uninsured individuals to purchase health insurance. The remaining 10 percent of the uninsured, based on their marginal tax rates, would be eligible for a 28 to nearly 40 percent net reduction in the cost of their health insurance. While this level of reduction in net premiums may induce some individuals in higher tax brackets to purchase health insurance, it is less than some analysts have concluded would be necessary to lead to a widespread increase in coverage. For example, the Congressional Budget Office (CBO) reported that tax subsidies “would have to be fairly large—approaching the full cost of the premium—to induce a large proportion of the uninsured population to buy insurance.” higher-income individuals could be eligible for a partial credit or no credit. Because more than half of uninsured individuals would not have had enough income tax liabilities in 1999 to receive the full credit amount, some proposals would make the credit refundable so that more low-income tax filers and a number of those who would not otherwise file could receive a larger portion or all of the amount. The number of individuals eligible for a tax credit would vary depending on the income thresholds specified in a proposal. For example, we estimate that in 1999, 22 million uninsured Americans were in families that potentially would have been eligible for a tax credit available to single tax filers with $30,000 in taxable income and joint or head-of-household tax filers with $50,000 in taxable income. A recent study estimated that a tax credit of $1,000 for single coverage and $2,000 for family coverage with these taxable income thresholds could enable about 4.2 million—or nearly 20 percent of eligible individuals—to become newly insured. If income eligibility levels were twice as high, we estimate that 3 million additional uninsured individuals would have been in families potentially eligible for the tax credit, and the study estimated that a credit at this higher income eligibility level would result in another 0.5 million newly insured. a high premium of $7,154 for a 60- to 64-year-old smoker in urban Illinois. Thus, in some states, a $1,000 tax credit could represent all or most of the premium for a young, healthy male or for someone purchasing a plan with a high deductible or limited benefits. On the other hand, a $1,000 credit could represent a small proportion of the premium for a comprehensive health plan for an older person or someone with existing health conditions. 
For many individuals, a $1,000 tax credit would likely represent less than half of a typical premium. A tax credit’s ability to induce uninsured individuals to purchase coverage will also depend on the timing of the credit. Some low-income individuals who want to take advantage of a credit to purchase health insurance may find it difficult to do so if they must pay the premiums up front but cannot receive the credit until the following year after filing their tax return. To alleviate this problem, some proposals would allow advance funding of a credit, so that eligible individuals could receive the credit at the time they purchase the health insurance. There is limited experience with advance payments of tax credits for individuals, and establishing an effective mechanism could be administratively challenging. Procedures and resources to assess eligibility based on partial-year income information would need to be available nationwide. In addition, efficient and equitable procedures for end-of-year reconciliations and recovery of excess payments would be necessary. insurance because they are required to spend money up-front to get the tax credit, whereas EITC is an addition to income, not a reimbursement for an expense. To encourage more employers to offer coverage, some proposals would provide a tax subsidy to small firms or those with low-wage workers that often do not offer health insurance to their employees. Although at least 96 percent of private establishments with 50 or more employees offered coverage in 1998, only 36 percent of private establishments with fewer than 10 workers and about 67 percent of private establishments with 10 to 25 workers offered coverage. Also in 1998, among private establishments in which half or more of the workers were low-wage, only 31 percent offered health insurance to their employees, while other private establishments were nearly twice as likely to offer health insurance. that in 1996 37 percent of workers earning less than $7 per hour were offered coverage but turned it down, while only 14 percent of workers earning $15 or more per hour turned down coverage. Many proposed or already available state-offered tax credits for employers provide only a temporary subsidy for the first few years an employer offers coverage. This may limit their potential for inducing employers to initiate and keep offering coverage. Experts we have consulted in our private insurance work told us that small employers are not likely to begin offering health insurance if they do not believe they will be able to do so permanently. Some proposed employer tax credits are linked to small employers obtaining health insurance through a purchasing cooperative. We reported last year that several existing cooperatives gave small employers the ability to offer a choice of plans, but typically at premiums similar to those available outside of the cooperative. We also reported that most current cooperatives represented a small share of their local small group market (5 percent or less) and several had recently been discontinued or faced declining insurer or employer participation. Some analysts suggest that small employer purchasing cooperatives could be more effective in making coverage more affordable if they represented a larger share of the market. A significant employer tax credit linked to a small employer purchasing cooperative might stimulate participation and create larger market share, making them better able to secure lower-cost coverage for participants. 
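To make the arithmetic behind these comparisons concrete, the sketch below is our own illustration; the premium, tax brackets, and tax liability figures are invented for demonstration and are not drawn from any particular proposal or estimate discussed above. It shows why the value of a deduction rises with the filer's marginal tax rate, why a nonrefundable credit is worth little to a filer with little tax liability, and why a refundable credit delivers its full face value regardless of liability.

```python
# Illustrative sketch only; the premium, tax rates, and tax liability below are
# invented for demonstration and are not figures from the proposals discussed above.

def net_premium_with_deduction(premium: float, marginal_rate: float) -> float:
    """Deducting the premium from taxable income lowers taxes owed by premium * rate."""
    return premium - premium * marginal_rate

def net_premium_with_credit(premium: float, credit: float,
                            refundable: bool, tax_liability: float) -> float:
    """A credit offsets taxes owed; a nonrefundable credit cannot exceed tax liability."""
    usable_credit = credit if refundable else min(credit, tax_liability)
    return max(premium - usable_credit, 0.0)

premium = 2000.0  # hypothetical annual premium for single coverage

# A deduction is worth nothing in the 0 percent bracket and the most in the top brackets.
for rate in (0.0, 0.15, 0.28, 0.396):
    net = net_premium_with_deduction(premium, rate)
    print(f"deduction at {rate:.1%} marginal rate: net cost ${net:,.0f}")

# A $1,000 credit for a filer owing only $200 in income tax for the year:
print(net_premium_with_credit(premium, 1000.0, refundable=False, tax_liability=200.0))  # 1800.0
print(net_premium_with_credit(premium, 1000.0, refundable=True, tax_liability=200.0))   # 1000.0
```

Under these assumptions, the deduction lines correspond to the 15 percent and 28 to roughly 40 percent net reductions cited above, while the credit lines illustrate why refundability matters for filers with little or no tax liability.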
not currently eligible (such as childless adults) or raise income and asset eligibility standards. Another proposal would allow some near-elderly persons to buy in to Medicare. But many low-income people who currently are eligible for these public programs have not enrolled. Therefore, state outreach efforts to low-income individuals are key to the success of current and proposed programs. Despite mandatory and optional state Medicaid expansions and the implementation of SCHIP in recent years, millions of low-income children and adults remain uninsured. Nearly 3 million children in households below the federal poverty level were uninsured in 1999 even though they would typically have been eligible for Medicaid. And although SCHIP now covers more than 3 million children, in 1999 there were nearly 6 million uninsured children in families with incomes below 200 percent of the federal poverty level (about $34,000 for a family of four)—the income threshold targeted by many SCHIP programs. Another 16.3 million adults with family incomes below 200 percent of the federal poverty level were uninsured, and nearly half of these had family incomes below the federal poverty level. use SCHIP funds to cover eligible children’s parents—but few other states have sought to do so. Also, 30 states have expanded Medicaid eligibility under section 1931 of the Social Security Act to disregard portions of an applicant’s income or assets when determining eligibility, which effectively increases the level of income and assets an eligible individual may have. States’ willingness and ability to use additional federal flexibility will be key to efforts to expand public coverage. States with high uninsured rates typically have lower income eligibility thresholds for Medicaid than those with low uninsured rates. For example, the average Medicaid eligibility level for parents in the 13 states with high uninsured rates is 54 percent of the federal poverty level, compared with an average of 99 percent of the federal poverty level for the 29 states with low uninsured rates. Furthermore, states with low uninsured rates have been more likely to use available authority to expand coverage than states with high uninsured rates. Whereas 10 of the 29 states with uninsured rates significantly lower than the U.S. average have used section 1115 waivers to expand Medicaid eligibility, only 1 of the 13 states with uninsured rates significantly higher than the U.S. average has done so. Appendix I summarizes selected eligibility requirements and options that states have adopted for Medicaid and SCHIP. States’ financial capacity may be a factor in what states have done to expand Medicaid and SCHIP to cover additional low-income individuals. States with high uninsured rates tend to be poorer and already cover a larger share of their population in Medicaid. On average, 16 percent of the nonelderly populations in the 13 states with high uninsured rates are in poverty compared with 10 percent in the 29 states with low uninsured rates. These high uninsured states also cover a higher proportion of their nonelderly residents through Medicaid (9 percent) than do states with low uninsured rates (7 percent). sponsored retiree health benefits in 1997 than in 1991. Recent employer surveys indicate that this decline has not reversed since 1997. 
Further, with the aging of the baby boom generation, over the next decade the number of near-elderly individuals not yet eligible for Medicare will grow, which likely will increase the number of uninsured persons in this age group. CBO estimates that few individuals would be able to afford the full premium that would be necessary to buy in to Medicare—$300 to more than $400 per month initially. High-cost individuals who would face higher than average premiums in the individual insurance market would be most likely to opt for a Medicare buy-in, which would likely lead to premium increases over time. Subsidies to low-income individuals would encourage more lower-cost near-elderly individuals to buy in to Medicare. Many low-income individuals who are eligible for Medicaid and SCHIP do not enroll. Some may be unaware that they or their children may be eligible, while the administrative complexity of enrolling and other reasons may discourage other eligible individuals from participating. Thus, outreach to low-income individuals to enroll in existing or expanded public programs is key to the success of the programs. We reported in 1996 that 3.4 million Medicaid-eligible children—23 percent of those eligible under federal standards—were uninsured. Another study found that in 1998, 16 percent of children under 200 percent of the federal poverty level were eligible for Medicaid or SCHIP but were uninsured. for federal-state assistance for paying Medicare premiums and/or other out-of-pocket expenses not covered by Medicare were not enrolled. Recognizing the low participation by these individuals eligible for the Qualified and Specified Low-Income Medicare Beneficiary programs, last year the Congress enacted requirements that the Social Security Administration identify and notify potentially eligible individuals, and that the Department of Health and Human Services develop and distribute to states a simplified uniform enrollment application. Efforts to expand private or public coverage to those currently uninsured can also provide new incentives to those already having private health insurance. Some currently insured individuals may drop employment-based coverage to get tax-subsidized individual insurance or enroll in Medicaid or SCHIP. While there was disagreement among analysts about the extent of crowd-out of private health insurance resulting from the Medicaid expansions in the late 1980s and early 1990s, concern led the Congress to include a requirement in SCHIP that states devise methods to avoid such crowd-out. While several approaches may offset the extent of crowd-out, some degree of crowd-out may be an unavoidable cost of expanding private or public coverage to insure those that are currently uninsured. For example, CBO analysts suggested that some displacement of private insurance is inevitable, particularly since some low-income families move in and out of private insurance coverage and public programs can allow these low-income families to achieve more stable insurance coverage. federal cost per newly insured person since much of the subsidy goes to those already covered. Moreover, some employers currently offering health insurance to their employees may discontinue offering coverage if their employees have tax preferences available for individually purchased insurance. 
Similarly, even if employers continued sponsoring coverage, some employees—especially those who are young and healthy—may be able to purchase lower-cost insurance in the individual market, which could, over the long term, increase the costs for some remaining in the group employment-based market. One study estimated that, among people electing a tax credit, nearly half would already be purchasing individual insurance, about one-quarter would shift from employment-based coverage, and another one-quarter would have previously been uninsured. Of those shifting from employment-based coverage, about one-fourth would do so because the firm dropped coverage. Similarly, when eligibility for public programs is expanded, employers with many low-income individuals eligible for public coverage may decide to discontinue coverage or individuals offered employment-based coverage may shift to public programs where they have lower or no premiums or other out-of-pocket costs. The absence of measures to reduce crowd-out can be significant. For example, a recent report indicated that one state that extended Medicaid coverage to parents with eligible children without a waiting period found that nearly one-third of those that became newly enrolled had previously had private health insurance. Some states have established waiting periods requiring individuals not to have had employment-based coverage for a certain time before becoming eligible for SCHIP. Other states have established cost sharing requirements (premiums or copayments) for SCHIP, thereby providing less of a financial incentive for low-income workers to switch from an employment-based plan where cost sharing requirements are common. A variety of approaches have been proposed to increase private and public coverage among uninsured individuals. The success of these proposals in doing so for these diverse populations will depend on several key factors. The impact of tax subsidies on promoting private health insurance will depend on whether the subsidies reduce premiums enough to induce uninsured low-income individuals to purchase health insurance and on whether these subsidies can be made available at the time the person needs to pay premiums. The effectiveness of public program expansions will depend on states’ ability and willingness to utilize any new flexibility to cover uninsured residents as well as develop effective outreach to enroll the targeted populations. While crowd-out is a concern with any of the approaches, private or public, some degree of public funds going to those currently with private health insurance may be inevitable to provide stable health coverage for some of the 42 million currently uninsured. Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you or Members of the Committee may have. For more information regarding this testimony, please contact Kathryn G. Allen at (202) 512-7118 or John E. Dicken at (202) 512-7043. JoAnne R. Bailey, Paula Bonin, Randy DiRosa, Karen Doran, Betty Kirksey, Susanne Seagrave, and Mark Vinkenes also made key contributions to this statement. 
Appendix I presents, by state, the Medicaid upper income eligibility standard for parents as of March 2000 and the SCHIP upper income eligibility standard as of September 30, 2000, each expressed as a percentage of the federal poverty level. The income eligibility level for parents assumes a family of three with one wage-earner, that all income is from earnings, and that only earned income disregards are taken.

Health Insurance: Characteristics and Trends in the Uninsured Population (GAO-01-507T, Mar. 13, 2001).
Federal Taxes: Information on Payroll Taxes and Earned Income Tax Credit Noncompliance (GAO-01-487T, Mar. 7, 2001).
Private Health Insurance: Potential Tax Benefit of a Health Insurance Deduction Proposed in H.R. 2990 (GAO/HEHS-00-104R, Apr. 21, 2000).
Medicaid and SCHIP: Comparisons of Outreach, Enrollment Practices, and Benefits (GAO/HEHS-00-86, Apr. 14, 2000).
Private Health Insurance: Cooperatives Offer Small Employers Plan Choice and Market Prices (GAO/HEHS-00-49, Mar. 31, 2000).
Private Health Insurance: Estimates of Effects of Health Insurance Tax Credits and Deductions as Proposed in H.R. 2261 (GAO/HEHS-99-188R, Sept. 13, 1999).
Children’s Health Insurance Program: State Implementation Approaches Are Evolving (GAO/HEHS-99-65, May 14, 1999).
Private Health Insurance: Progress and Challenges in Implementing 1996 Federal Standards (GAO/HEHS-99-100, May 12, 1999).
Low-Income Medicare Beneficiaries: Further Outreach and Administrative Simplification Could Increase Enrollment (GAO/HEHS-99-61, Apr. 9, 1999).
Private Health Insurance: Estimates of a Proposed Health Insurance Tax Credit for Those Who Buy Individual Health Insurance (GAO/HEHS-98-221R, July 22, 1998).
Private Health Insurance: Estimates of Expanded Tax Deductibility of Premiums for Individually Purchased Health Insurance (GAO/HEHS-98-190R, June 10, 1998).
Private Health Insurance: Declining Employer Coverage May Affect Access for 55- to 64-Year-Olds (GAO/HEHS-98-133, June 1, 1998).
Medicaid: Demographics of Nonenrolled Children Suggest State Outreach Strategies (GAO/HEHS-98-93, Mar. 20, 1998).
(290044)
Various approaches have been proposed to increase private and public health care coverage of uninsured persons. The success of these proposals will depend on several key factors. The impact of tax subsidies on promoting private health insurance will depend on whether the subsidies reduce premiums enough to induce uninsured low-income individuals to buy health insurance and on whether these subsidies can be made available at the time the person needs to pay premiums. The effectiveness of public program expansions will depend on states' ability and willingness to use any new flexibility to cover uninsured residents as well as develop effective outreach to enroll the targeted populations. Although crowd-out is a concern with any of the approaches, some degree of public funds going to those currently with private health insurance may be inevitable to provide stable health coverage for some of the 42 million uninsured Americans.
In the past, the ICC regulated almost all of the rates that railroads charged shippers. The Railroad Revitalization and Regulatory Reform Act of 1976 and the Staggers Rail Act of 1980 greatly increased reliance on competition to set rates in the railroad industry. Specifically, these acts allowed railroads and shippers to enter into confidential contracts that set rates and prohibited ICC from regulating rates where railroads had either effective competition or rates negotiated between the railroad and the shipper. Furthermore, the ICC Termination Act of 1995 abolished ICC and transferred its regulatory functions to STB. Taken together, these acts anchor the federal government’s role in the freight rail industry by establishing numerous goals for regulating the industry, including to allow, to the maximum extent possible, competition and demand for services to establish reasonable rates for transportation by rail; minimize the need for federal regulatory control over the rail transportation system and require fair and expeditious regulatory decisions when regulation is required; promote a safe and efficient rail transportation system by allowing rail carriers to earn adequate revenues, as determined by STB; ensure the development and continuation of a sound rail transportation system with effective competition among rail carriers and with other modes to meet the needs of the public and the national defense; foster sound economic conditions in transportation and ensure effective competition and coordination between rail carriers and other modes; maintain reasonable rates where there is an absence of effective competition and where rail rates provide revenues that exceed the amount necessary to maintain the rail system and attract capital; prohibit predatory pricing and practices to avoid undue concentrations of market power; and provide for the expeditious handling and resolution of all proceedings. While the Staggers Rail and ICC Termination Acts reduced regulation in the railroad industry, they maintained STB’s role as the economic regulator of the industry. The federal courts have upheld STB’s general powers to monitor the rail industry, including its ability to subpoena witnesses and records and to depose witnesses. In addition, STB can revisit its past decisions if it discovers a material error, or new evidence, or if circumstances have substantially changed. Two important components of the current regulatory structure for the railroad industry are the concepts of revenue adequacy and demand-based differential pricing. Congress established the concept of revenue adequacy as an indicator of the financial health of the industry. STB determines the revenue adequacy of a railroad by comparing the railroad’s return on investment with the industrywide cost of capital. For instance, if a railroad’s return on investment is greater than the industrywide cost of capital, STB determines that railroad to be revenue adequate. Historically, ICC and STB have rarely found railroads to be revenue adequate—a result that many observers relate to characteristics of the industry’s cost structure. Railroads incur large fixed costs to build and operate networks that jointly serve many different shippers. Some fixed costs can be attributed to serving particular shippers, and some costs vary with particular movements, but other costs are not attributable to particular shippers or movements. Nonetheless, a railroad must recover these costs if the railroad is to continue to provide service over the long run. 
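As a rough illustration of the revenue adequacy comparison described earlier in this section, the following sketch is ours; the dollar figures are invented, and the calculation simplifies STB's actual methodology, which rests on detailed accounting determinations of income and investment base.

```python
# Simplified illustration of the revenue adequacy comparison; the dollar figures
# are invented, and the calculation abstracts away the details of STB's method.

def return_on_investment(net_operating_income: float, investment_base: float) -> float:
    return net_operating_income / investment_base

def is_revenue_adequate(roi: float, industry_cost_of_capital: float) -> bool:
    # Per the comparison described above, a railroad whose return on investment
    # is greater than the industrywide cost of capital is considered revenue adequate.
    return roi > industry_cost_of_capital

roi = return_on_investment(net_operating_income=900.0, investment_base=12_000.0)  # 7.5 percent
print(f"return on investment: {roi:.1%}")
print("revenue adequate against a 10% cost of capital:", is_revenue_adequate(roi, 0.10))  # False
print("revenue adequate against a 6% cost of capital: ", is_revenue_adequate(roi, 0.06))  # True
```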
To the extent that railroads have not been revenue adequate, they may not have been fully recovering these costs. The Staggers Rail Act recognized the need for railroads to use demand-based differential pricing to promote a healthy rail industry and enable it to raise sufficient revenues to operate, maintain and, if necessary, expand the system in a deregulated environment. Demand-based differential pricing, in theory, permits a railroad to recover its joint and common costs—those costs that exist no matter how many shipments are transported, such as the cost of maintaining track—across its entire traffic base by setting higher rates for traffic with fewer transportation alternatives than for traffic with more alternatives. Differential pricing recognizes that some customers may use rail if rates are low—and have other options if rail rates are too high or service is poor. Therefore, rail rates on these shipments generally cover the directly attributable (variable) costs, plus a relatively low contribution to fixed costs. In contrast, customers with little or no practical alternative to rail—“captive” shippers—generally pay a much larger portion of fixed costs. Moreover, even though a railroad might incur similar incremental costs while providing service to two different shippers that move similar volumes in similar car types traveling over similar distances, the railroad might charge the shippers different rates. Furthermore, if the railroad is able to offer lower rates to the shipper with more transportation alternatives, that shipper still pays some of the joint and common costs. By paying even a small part of total fixed cost, competitive traffic reduces the share of those costs that captive shippers would have to pay if the competitive traffic switched to truck or some other alternative. Consequently, while the shipper with fewer alternatives makes a greater contribution toward the railroad’s joint and common costs, the contribution is less than if the shipper with more alternatives did not ship via rail. The Staggers Rail Act further requires that the railroads’ need to obtain adequate revenues be balanced with the rights of shippers to be free from, and to seek redress from, unreasonable rates. Railroads incur variable costs—that is, the costs of moving particular shipments—in providing service. The Staggers Rail Act stated that any rate that was found to be below 180 percent of a railroad’s variable cost for a particular shipment could not be challenged as unreasonable and authorized ICC, and later STB, to establish a rate relief process for shippers to challenge the reasonableness of a rate. STB may consider the reasonableness of a rate only if it finds that the carrier has market dominance over the traffic at issue—that is, if (1) the railroad’s revenue is equal to or above 180 percent of the railroad’s variable cost (R/VC) and (2) the railroad does not face effective competition from other rail carriers or other modes of transportation. Rail rates have generally declined since 1985, but experienced a 9 percent annual increase between 2004 and 2005—the largest annual increase in 20 years. Although rates have generally declined, railroads have also shifted other costs to shippers, such as the cost of rail car ownership, and have increased the revenue they report as miscellaneous more than 10-fold between 2000 and 2005. Following a period of general decline since 1985, rates began to increase in 2001. 
Rates experienced a 9 percent annual increase from 2004 to 2005, which represents the largest annual increase in rates during the 20-year period from 1985 through 2005. This annual increase also outpaced inflation—about 3 percent in 2005. However, despite these increases, rates for 2005 remain below their 1985 levels and below the rate of inflation for the 1985 through 2005 period, and rates overall have declined since 1985. Because the set of rail rate indexes we used to examine trends in rail rates over time does not account for inflation, we also included the price index for the gross domestic product (GDP) in figure 1. Similar to overall industry trends, rates for individual commodities increased from 2004 to 2005. In 2005, rates increased for all 13 commodities that we reviewed. Rates for coal increased by 7.9 percent while rates for grain increased by 8.5 percent. In 2005, the largest rate increase (for fiberboard and paperboard) exceeded 11 percent, while the smallest increase (for motor vehicles) was about 2.7 percent. Figure 2 depicts rate changes for coal, grain, miscellaneous mixed shipments, and motor vehicles from 1985 through 2005.

In 2005, freight railroad companies continued a trend of shifting other costs to shippers. Our analysis shows a 20 percentage point shift in railcar ownership (measured in tons carried) away from railroad-owned cars since 1987. In 1987, railcars owned by freight railroad companies moved 60 percent of tons carried. In 2005, they moved 40 percent of tons carried, meaning that freight railroad company railcars no longer carry the majority of tonnage (see fig. 3). In 2005, the amount of industry revenue reported as miscellaneous increased more than ten-fold over 2000 levels, rising from about $141 million to over $1.7 billion (see fig. 4). Miscellaneous revenue is a category in the Carload Waybill Sample for reporting revenue outside the standard rate structure. This miscellaneous revenue can include some fuel surcharges, as well as revenues such as those derived from congestion fees and railcar auctions (in which the highest bidder is guaranteed a number of railcars at a specified date). In 2004, miscellaneous revenue accounted for 1.5 percent of freight railroad revenue reported. In 2005, this percentage had risen to 3.7 percent. Also, in 2005, 20 percent of all tonnage moved in the United States generated miscellaneous revenue.

In October 2006 and August 2007, we reported that captive shippers are difficult to identify and that STB's efforts to protect captive shippers have resulted in little effective relief for those shippers. We also reported that economists and shipper groups have proposed a number of alternatives to address remaining concerns about competition; however, each of these alternative approaches has costs and benefits and should be carefully considered to ensure the approach will achieve the important balance set out in the Staggers Rail Act.

It remains difficult to determine precisely how many shippers are "captive" to one railroad because the proxy measures that provide the best indication can overstate or understate captivity. One measure of potential captivity—traffic traveling at rates equal to or greater than 180 percent R/VC—is part of the statutory threshold for bringing a rate relief case before STB. STB regards traffic at or above this threshold as "potentially captive," but, like other measures, R/VC levels can understate or overstate captivity.
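To make the R/VC proxy concrete, the following sketch shows how traffic records might be screened against the 180 percent and 300 percent levels discussed in this statement. The records are hypothetical and the code illustrates only the arithmetic, not how STB processes the Carload Waybill Sample:

    # Hypothetical movements: (revenue, variable_cost) in dollars
    movements = [(2_000, 1_500), (5_400, 3_000), (9_800, 3_100)]

    for revenue, variable_cost in movements:
        rvc_percent = 100 * revenue / variable_cost   # R/VC ratio, in percent
        potentially_captive = rvc_percent >= 180      # at or above the statutory threshold
        well_above_threshold = rvc_percent > 300      # substantially above the threshold
        print(f"R/VC = {rvc_percent:.0f}%, potentially captive: {potentially_captive}, over 300 percent: {well_above_threshold}")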
Since 1985, tonnage and revenue from traffic traveling at rates over 180 percent R/VC have generally declined, while traffic traveling at rates substantially over the threshold for rate relief (greater than 300 percent R/VC) has generally increased. This trend continued in 2005, as industry revenue generated by traffic traveling at rates over 180 percent R/VC dropped by roughly half a percent. Tonnage traveling at rates over 180 percent R/VC dropped by a smaller percentage. Traffic traveling at rates substantially over the threshold for rate relief has generally increased from 1985 to 2005 (see fig. 6). In 2003 and 2004, the percentage of both tonnage and revenue traveling at rates above 300 percent R/VC declined from the previous year, but each increased again in 2005. For example, the share of tonnage traveling at rates over 300 percent R/VC increased from 6.1 percent in 2004 to 6.4 percent in 2005. Figure 6 shows tonnage traveling at rates above 300 percent R/VC from 1985 through 2005. Some areas with access to one Class I railroad also have more than half of their traffic traveling at rates that exceed the statutory threshold for rate relief. For example, parts of New Mexico and Idaho with access to one Class I railroad had more than half of all traffic originating in those same areas traveling at rates over 180 percent R/VC. However, we also found instances in which an economic area may have access to two or more Class I railroads and still have more than 75 percent of its traffic traveling at rates over 180 percent R/VC, as well as other instances in which an economic area may have access to one Class I railroad and have less than 25 percent of its traffic traveling at rates over 180 percent R/VC. STB has taken a number of actions to provide relief for captive shippers. While the Staggers Rail and ICC Termination Acts encourage competition as the preferred way to protect shippers and to promote the financial health of the railroad industry, they also give STB the authority to adjudicate rate cases to resolve disputes between captive shippers and railroads upon receiving a complaint from a shipper; approve rail transactions, such as mergers, consolidations, acquisitions, and trackage rights; prescribe new regulations, such as rules for competitive access and merger approvals; and inquire into and report on rail industry practices, including obtaining information from railroads on its own initiative and holding hearings to inquire into areas of concern, such as competition. Under its adjudicatory authority, STB has developed standard rate case guidelines, under which captive shippers can challenge a rail rate and appeal to STB for rate relief. Under the standard rate relief process, STB assesses whether the railroad dominates the shipper’s transportation market and, if it finds market dominance, proceeds with further assessments to determine whether the actual rate the railroad charges the shipper is reasonable. STB requires that the shipper demonstrate how much an optimally efficient railroad would need to charge the shipper and construct a hypothetical, perfectly efficient railroad that would replace the shipper’s current carrier. As part of the rate relief process, both the railroad and the shipper have the opportunity to present their facts and views to STB, as well as to present new evidence. 
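The market dominance screen that opens a standard rate case can be summarized as a two-part test. The sketch below is a simplified illustration using hypothetical inputs; in practice STB weighs qualitative evidence on competition rather than taking a single yes-or-no flag:

    def market_dominance(revenue, variable_cost, faces_effective_competition):
        # STB may consider rate reasonableness only if (1) the rate is at or above
        # 180 percent R/VC and (2) the railroad faces no effective competition
        # from other rail carriers or other modes.
        rvc_percent = 100 * revenue / variable_cost
        return rvc_percent >= 180 and not faces_effective_competition

    # Hypothetical shipment: R/VC of 250 percent and no practical alternative to rail
    print(market_dominance(revenue=25_000, variable_cost=10_000, faces_effective_competition=False))  # True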
STB also created alternatives to the standard rate relief process, developing simplified guidelines, as Congress required, for cases in which the standard rate guidelines would be too costly or infeasible given the value of the cases. Under these simplified guidelines, captive shippers who believe that their rate is unreasonable can appeal to STB for rate relief, even if the value of the disputed traffic makes it too costly or infeasible to apply the standard guidelines.

Despite STB's efforts, we reported in 2006 that there was widespread agreement that STB's standard rate relief process was inaccessible to most shippers and did not provide for expeditious handling and resolution of complaints. The process remained expensive, time consuming, and complex. Specifically, shippers we interviewed agreed that the process could cost approximately $3 million per litigant. In addition, shippers said that they do not use the process because it takes so long for STB to reach a decision. Lastly, shippers stated that the process is both time consuming and difficult because it calls for them to develop a hypothetical competing railroad to show what the rate should be and to demonstrate that the existing rate is unreasonable. We also reported that the simplified guidelines had not effectively provided relief for captive shippers. Although these simplified guidelines had been in place since 1997, a rate case had not been decided under the process set out by the guidelines when we issued our report in 2006. STB had held public hearings in April 2003 and July 2004 to examine why shippers have not used the guidelines and to explore ways to improve them. At these hearings, numerous organizations provided comments to STB on measures that could clarify the simplified guidelines, but no action was taken. STB observed that parties urged changes to make the process more workable, but disagreed on what those changes should be. We reported that several shipper organizations told us that shippers were concerned about using the simplified guidelines because they believe the guidelines will be challenged in court, resulting in lengthy litigation. STB officials told us that they—not the shippers—would be responsible for defending the guidelines in court. STB officials also said that if a shipper won a small rate case, STB could order reparations to the shipper before the case was appealed to the courts.

Since our report in October 2006, STB has taken steps to refine the rate relief process. Specifically, in October 2006, STB revised procedures for deciding large rate relief cases. By placing restraints on the evidence and arguments allowed in these cases, STB predicted that the expense and delay in resolving these rate disputes would be reduced substantially. In September 2007, STB altered its simplified guidelines for small shippers to enable shippers who are seeking up to $1 million in rate relief over a 5-year period to receive an STB decision within 8 months of filing a complaint. STB also created a new rate relief process for medium-size rate disputes to allow shippers who are seeking up to $5 million in rate relief over a 5-year period to receive an STB decision within 17 months of filing a complaint. Additionally, STB stated that all rail rate disputes would require nonbinding mediation.
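Taken together, these changes create three tiers of rate relief keyed to the amount of relief sought. The routing logic below is a simplified sketch based only on the dollar and timing figures cited above; the actual eligibility rules involve more than the size of the claim:

    def rate_relief_track(relief_sought_over_5_years):
        # Illustrative routing among the small, medium, and standard processes.
        if relief_sought_over_5_years <= 1_000_000:
            return "simplified (small) process: decision within about 8 months"
        if relief_sought_over_5_years <= 5_000_000:
            return "medium-size process: decision within about 17 months"
        return "standard (large) rate case guidelines"

    print(rate_relief_track(750_000))
    print(rate_relief_track(4_500_000))
    print(rate_relief_track(20_000_000))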
Shipper groups, economists, and other experts in the rail industry have suggested several alternative approaches as remedies that could provide more competitive options to shippers in areas of inadequate competition or excessive market power. These groups view these approaches as more effective than the rate relief process in promoting a greater reliance on competition to protect shippers against unreasonable rates. Some proposals would require legislative change or a reopening of past STB decisions. These approaches each have potential costs and benefits. On the one hand, they could expand competitive options, reduce rail rates, and decrease the number of captive shippers, as well as reduce the need for both federal regulation and a rate relief process. On the other hand, reductions in rail rates could affect railroad revenues and limit the railroads' ability and potential willingness to invest in their infrastructure. In addition, some markets may not have the level of demand needed to support competition among railroads. It will be important for policymakers, in evaluating these alternative approaches, to carefully consider the impact of each approach on the balance set out in the Staggers Act. The targeted approaches frequently proposed by shipper groups and others include the following:

Reciprocal switching: This approach would allow STB to require railroads serving shippers that are close to another railroad to transport cars of a competing railroad for a fee. The shippers would then have access to railroads that do not reach their facilities. This approach is similar to the mandatory interswitching in Canada, which enables a shipper to request a second railroad's service if that second railroad is within approximately 18 miles. Some Class I railroads already interchange traffic using these agreements, but they oppose being required to do so. Under this approach, STB would oversee the pricing of switching agreements. This approach could also reduce the number of captive shippers by providing a competitive option to shippers with access to a proximate but previously inaccessible railroad and thereby reduce traffic eligible for the rate relief process (see fig. 7).

Terminal agreements: This approach would require one railroad to grant access to its terminal facilities or tracks to another railroad, enabling both railroads to interchange traffic or gain access to traffic coming from shippers off the other railroad's lines for a fee. Current regulation requires a shipper to demonstrate anticompetitive conduct by a railroad before STB will grant access to a terminal by a nonowning railroad unless there is an emergency or when a shipper can demonstrate poor service and a second railroad is willing and able to provide the service requested. This approach would require revisiting the current requirement that railroads or shippers demonstrate anticompetitive conduct in making a case to gain access to a railroad terminal in areas where there is inadequate competition. The approach would also make it easier for competing railroads to gain access to the terminal areas of other railroads and could increase competition between railroads. However, it could also reduce revenues to all railroads involved and adversely affect the financial condition of the rail industry. Also, shippers could benefit from increased competition but might see service decline (see fig. 8).

Trackage rights: This approach would require a railroad to grant access to its tracks to another railroad, enabling railroads to interchange traffic beyond terminal facilities for a fee.
In the past, STB has imposed conditions requiring that a merging railroad must grant another railroad trackage rights to preserve competition when a merger would reduce a shipper’s access to railroads from two to one. While this approach could potentially increase rail competition and decrease rail rates, it could also discourage owning railroads from maintaining the track or providing high- quality service, since the value of lost use of track may not be compensated by the user fee and may decrease return on investment (see fig. 9). “Bottleneck” rates: This approach would require a railroad to establish a rate, and thereby offer to provide service, for any two points on the railroad’s system where traffic originates, terminates, or can be interchanged. Some shippers have more than one railroad that serves them at their origin and/or destination points, but have at least one portion of a rail movement for which no alternative rail route is available. This portion is referred to as the “bottleneck segment.” STB’s decision that a railroad is not required to quote a rate for the bottleneck segment has been upheld in federal court. STB’s rationale was that statute and case law precluded it from requiring a railroad to provide service on a portion of its route when the railroad serves both the origin and destination points and provides a rate for such movement. STB requires a railroad to provide service for the bottleneck segment only if the shipper had prior arrangements or a contract for the remaining portion of the shipment route. On the one hand, requiring railroads to establish bottleneck rates would force short-distance routes on railroads when they served an entire route and could result in loss of business and potentially subject the bottleneck segment to a rate complaint. On the other hand, this approach would give shippers access to a second railroad, even if a single railroad was the only railroad that served the shipper at its origin and/or destination points, and could potentially reduce rates (see fig. 10). Paper barriers: This approach would prevent or, put a time limit on, paper barriers, which are contractual agreements that can occur when a Class I railroad either sells or leases long term some of its track to other railroads (typically a short-line railroad and/or regional railroad). These agreements stipulate that virtually all traffic that originates on that line must interchange with the Class I railroad that originally leased the tracks or pay a penalty. Since the 1980s, approximately 500 short lines have been created by Class I railroads selling a portion of their lines; however, the extent to which paper barriers are a standard practice is unknown because they are part of confidential contracts. When this type of agreement exists, it can inhibit smaller railroads that connect with or cross two or more Class I rail systems from providing rail customers access to competitive service. Eliminating paper barriers could affect the railroad industry’s overall capacity since Class I railroads may abandon lines instead of selling them to smaller railroads and thereby increase the cost of entering a market for a would-be competitor. In addition, an official from a railroad association told us that it is unclear if a federal agency could invalidate privately negotiated contracts (see fig. 11). STB has taken some actions to address our past recommendations, but it is too soon to determine the effect of these actions. 
In October 2006 we reported that the continued existence of pockets of potential captivity, at a time when the railroads are, for the first time in decades, experiencing increasing economic health, raises the question of whether rail rates in selected markets reflect justified and reasonable pricing practices or an abuse of market power by the railroads. While our analysis provided an important first step, we noted that STB has the statutory authority and access to information to inquire into and report on railroad practices and to conduct a more rigorous analysis of competition in the freight rail industry. As a result, we recommended that the Board undertake a rigorous analysis of competitive markets to identify the state of competition nationwide and to determine in specific markets whether the inappropriate exercise of market power is occurring and, where appropriate, to consider the range of actions available to address such problems. STB initially disagreed with our recommendation because it believed that the findings underlying the recommendation were inconclusive, that its ongoing efforts would address many of our concerns, and that a rigorous analysis would divert resources from other efforts. However, in June 2007, STB stated that it intended to implement our recommendation, using funding that was not available at the time of our October report, by soliciting proposals from analysts with no connection to the freight railroad industry or STB proceedings to conduct a rigorous analysis of competition in the freight railroad industry. On September 13, 2007, STB announced that it had awarded a contract for a comprehensive study on competition, capacity, and regulatory policy issues to be completed by the fall of 2008. We commend STB for taking this action. It will be important that these analysts have the ability that STB has through its statutory authority to inquire into railroad practices, as well as sufficient access to information, to determine whether rail rates in selected markets reflect justified and reasonable pricing practices or an abuse of market power by the railroads. The Chairman of the STB has recently testified that these analysts would have that authority and access.

We also recommended that STB review its method of data collection to ensure that all freight railroads are consistently and accurately reporting all revenues collected from shippers, including fuel surcharges and other costs not explicitly captured in all railroad rate structures. In January 2007, STB finalized rules that require railroads to ensure that fuel surcharges are based on factors directly affecting the amount of fuel consumed. In August 2007, STB finalized rules that require railroads to report their fuel costs and revenue from fuel surcharges. While these are positive steps, these rules did not address how surcharges are reported in the Carload Waybill Sample. In addition, STB has not taken steps to address collection and reporting of other miscellaneous revenues—revenues derived from sources other than fuel surcharges.

As stated earlier, STB has also taken steps to refine the rate relief process since our 2006 report. STB has made changes to the rate relief process that it believes will reduce the expense and delay of obtaining rate relief. While these appear to be positive steps that could address longstanding concerns with the rate relief process, it is too soon to determine the effect of these changes, and we have not evaluated them. Mr.
Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time.

For questions regarding this testimony, please contact JayEtta Z. Hecker at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony include Steve Cohen (Assistant Director) and Matt Cail.

Freight Railroads: Updated Information on Rates and Competition Issues. GAO-07-1245T. Washington, D.C.: September 25, 2007.
Freight Railroads: Updated Information on Rates and Other Industry Trends. GAO-07-291R. Washington, D.C.: August 15, 2007.
Freight Railroads: Industry Health Has Improved, but Concerns About Competition and Capacity Should Be Addressed. GAO-07-94. Washington, D.C.: October 6, 2006.
Freight Railroads: Preliminary Observations on Rates, Competition, and Capacity Issues. GAO-06-898T. Washington, D.C.: June 21, 2006.
Freight Transportation: Short Sea Shipping Option Shows Importance of Systematic Approach to Public Investment Decisions. GAO-05-768. Washington, D.C.: July 29, 2005.
Freight Transportation: Strategies Needed to Address Planning and Financing Limitations. GAO-04-165. Washington, D.C.: December 19, 2003.
Railroad Regulation: Changes in Freight Railroad Rates from 1997 through 2000. GAO-02-524. Washington, D.C.: June 7, 2002.
Freight Railroad Regulation: Surface Transportation Board's Oversight Could Benefit from Evidence Better Identifying How Mergers Affect Rates. GAO-01-689. Washington, D.C.: July 5, 2001.
Railroad Regulation: Current Issues Associated with the Rate Relief Process. GAO/RCED-99-46. Washington, D.C.: April 29, 1999.
Railroad Regulation: Changes in Railroad Rates and Service Quality Since 1990. GAO/RCED-99-93. Washington, D.C.: April 6, 1999.
Interstate Commerce Commission: Key Issues Need to Be Addressed in Determining Future of ICC's Regulatory Functions. GAO/T-RCED-94-261. Washington, D.C.: July 12, 1994.
Railroad Competitiveness: Federal Laws and Policies Affect Railroad Competitiveness. GAO/RCED-92-16. Washington, D.C.: November 5, 1991.
Railroad Regulation: Economic and Financial Impacts of the Staggers Rail Act of 1980. GAO/RCED-90-80. Washington, D.C.: May 16, 1990.
Railroad Regulation: Shipper Experiences and Current Issues in ICC Regulation of Rail Rates. GAO/RCED-87-119. Washington, D.C.: September 9, 1987.
Railroad Regulation: Competitive Access and Its Effects on Selected Railroads and Shippers. GAO/RCED-87-109. Washington, D.C.: June 18, 1987.
Railroad Revenues: Analysis of Alternative Methods to Measure Revenue Adequacy. GAO/RCED-87-15BR. Washington, D.C.: October 2, 1986.
Shipper Rail Rates: Interstate Commerce Commission's Handling of Complaints. GAO/RCED-86-54FS. Washington, D.C.: January 30, 1986.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Staggers Rail Act of 1980 largely deregulated the freight railroad industry, encouraging greater reliance on competition to set rates. The act recognized the need for railroads to recover costs by setting higher rates for shippers with fewer transportation alternatives but also recognized that some shippers might be subject to unreasonably high rates. It established a threshold for rate relief and granted the Interstate Commerce Commission and the Surface Transportation Board (STB) the authority to develop a rate relief process for "captive" shippers. Since 1980, GAO has issued several reports on the freight railroad industry, most recently in October 2006, and, at the request of this Subcommittee, issued an updated report in August 2007. This statement is based on these recent reports and discusses (1) recent changes that have occurred in railroad rates and how those changes compare to changes in rail rates since 1985, (2) the extent of captivity in the industry and STB's efforts to protect captive shippers, and (3) STB's actions to address GAO's recent recommendations.

While railroad rates have generally declined for most shippers since 1985, in 2005 rates experienced a 9 percent annual increase over 2004, the largest annual increase in 20 years, and rates increased for all 13 commodities that GAO reviewed. For example, rates for coal increased by nearly 8 percent while rates for grain increased by 8.5 percent. However, despite these increases, rates for 2005 remain below their 1985 levels and below the rate of inflation over the 1985 through 2005 period. Revenues that railroads report as "miscellaneous," a category that includes some fuel surcharges, increased more than ten-fold, from about $141 million in 2000 to over $1.7 billion in 2005.

It is difficult to determine precisely how many shippers are "captive" because available proxy measures can overstate or understate captivity. However, some data indicate that the extent of potentially captive traffic appears to have decreased, while other data indicate that traffic traveling at rates significantly above the threshold for rate relief has increased. In October 2006, GAO reported that STB's rate relief process to protect captive shippers has resulted in little effective relief for those shippers. GAO also reported that economists and shipper groups have proposed a number of alternatives to address remaining concerns about competition; however, each of these alternative approaches has costs and benefits and should be carefully considered.

STB has taken some actions to address our past recommendations, but it is too soon to determine the effect of these actions. Our October 2006 report noted that the continued existence of pockets of potentially "captive" shippers raised questions as to whether rail rates in selected markets reflected reasonable pricing practices or an abuse of market power. GAO recommended that the Board undertake a rigorous analysis of competitive markets to identify the state of competition. STB has awarded a contract to conduct this study. It will be important that these analysts have STB's authority and access to information to determine whether rail rates in selected markets reflect reasonable pricing practices; the Chairman of the STB recently testified that these analysts would have that authority and access. GAO also recommended that STB ensure that freight railroads are consistently reporting all revenues, including miscellaneous revenues.
While STB has revised its rules on fuel surcharges, these rules did not address how fuel surcharges are reported and STB has not yet taken steps to accurately collect data on other miscellaneous revenues. STB has also taken a number of steps to revise its rate relief process. While these appear to be promising steps, it is too soon to tell what effect these changes will have.
In recent years, airlines have increasingly charged fees for optional services. Since the U.S. airline industry was deregulated in 1978, the industry's earnings have been volatile. In calendar years 2008 and 2009, the U.S. passenger airline industry incurred nearly $4.4 billion in operating losses, due largely to high jet fuel prices—airlines' biggest operating expense in 2008—combined with a severe economic downturn that reduced passenger traffic. In response to these and other economic challenges, airlines began in 2008 to "unbundle" optional services from the base ticket price, thereby charging separate fees for services that were previously included in the ticket price. Revenues from fees for optional services continue to grow. In fiscal year 2016, airlines reported $200 billion in revenue, about $7.1 billion of it from the two optional service fees for which revenues are separately reported to DOT—$4.2 billion in baggage fee revenue and $2.9 billion from fees for changing reservations.

The passenger airline industry is primarily composed of network (or "legacy"), low-cost, and regional airlines. Network airlines were in operation before the Airline Deregulation Act of 1978 and support large, complex hub-and-spoke operations with thousands of employees and hundreds of aircraft. These airlines provide service at various fare levels to a wide variety of domestic and international destinations. Low-cost airlines generally entered the market after deregulation and tend to operate less costly "point-to-point" service using fewer types of aircraft. Regional airlines generally employ much smaller aircraft (up to 100 seats) and provide service under "code-sharing" arrangements with larger network airlines for which they are paid on a cost-plus or fee-for-departure basis to provide network capacity.

Both network and low-cost airlines charge fees for a variety of optional services. Regional airlines may also charge optional fees, such as baggage fees; however, those fees are generally determined by the network airline in the code-sharing partner agreement. While charges for some services—such as unaccompanied minors, reservation changes or cancellations, and oversized or overweight baggage—have existed in the airline industry for many years, other services, such as wireless internet access and on-demand entertainment access, are new. See figure 1 for examples of optional service fees.

Some optional services can be purchased in advance when booking airline tickets. For example, customers can purchase optional services when booking tickets directly from the airline (i.e., from the airline's website, by calling the airline's call center, or from the airline's ticket counter at the airport). Customers who purchase tickets from third parties, such as online travel agents (e.g., Priceline or Expedia) and traditional or corporate travel agents, may also have the option to purchase some optional services when booking tickets, but this option varies depending on the airline and third party. Customers may also obtain some information about flight schedules, fares, and some optional services from metasearch companies (e.g., Google or Kayak); however, the information on optional services available through these websites varies.
Generally, online travel agencies, traditional or corporate travel agents, and metasearch companies obtain airfare and optional service fee information from global distribution systems, which are companies that package airline information so that travel agents can query and "book" (i.e., reserve and purchase) flights for airline customers.

DOT has authority to investigate whether a U.S. air carrier, foreign air carrier, or ticket agent is, or has been, engaged in an unfair or deceptive practice or an unfair method of competition in air transportation or the sale of air transportation. Upon finding that a U.S. air carrier, foreign air carrier, or ticket agent is engaged in such a practice or method, DOT has the authority to order the regulated entity to stop the practice or method. Under this authority, in April 2011, DOT issued "Consumer Rule 2," which included several provisions related to increasing the transparency of airfares and optional service fees for consumers. This rule, which became fully effective in January 2012, requires, among other things, that certain U.S. and foreign air carriers disclose information about their optional service fees on their websites and refund passengers' baggage fees if their bags are lost. The transparency regulations that went into effect after the issuance of Consumer Rule 2 are summarized in table 1.

DOT's Office of Aviation Enforcement and Proceedings (OAEP) and its Aviation Consumer Protection Division (ACPD) monitor and enforce airline compliance with economic regulations, such as advertising requirements related to the disclosure of airline fares and optional service fees, among others. Consumers may file air-transportation-related complaints with DOT. Consumers may also file air-transportation-related complaints with airlines, and airlines are required to acknowledge and respond to each complaint. Additionally, DOT may require airlines to file reports and keep records. DOT is authorized to inspect regulated entities' records and collect transportation information from regulated entities. DOT may assess civil penalties against airlines for violating the statute prohibiting unfair and deceptive practices and unfair methods of competition, and any regulations promulgated under that authority.

Since 2010, U.S. airlines have introduced a variety of new optional service fees and bundled products and increased the price of some existing fees. Some fees for optional services, like first and second checked bag fees on network airlines, have not changed considerably since 2010. However, some airlines increased other fees, such as fees for overweight and oversized bags and reservation changes and cancellations. According to DOT data, from 2010 to 2016, airline revenues from baggage fees and reservation change and cancellation fees increased, as did the number of passengers.

Several U.S. airlines have introduced new fees since 2010 for services that used to be included in the ticket price, notably "preferred" seats within the economy cabin. For example, several network airlines—including Alaska, American, and Delta—created fees for upgrading to preferred seats, which are more desirable seats in the economy cabin of the aircraft, such as those located in an exit row, toward the front of the aircraft, or with additional legroom. Preferred seats may also include priority boarding or food and beverages, depending on the airline. The characteristics of preferred seats differ among airlines, even when the products' names sound similar.
For example, both American and Hawaiian have a product called Preferred Seating, but American’s product refers to standard legroom in more favorable locations, whereas Hawaiian’s product refers to more legroom and priority boarding, among other things. In addition, some airlines offer more than one type of preferred seat. For example, in addition to its Preferred Seating product, American also offers a product called Main Cabin Extra, which includes additional legroom and priority boarding. As of July 2010, Frontier, JetBlue, and Spirit had already instituted fees for preferred seats, and Allegiant started offering preferred seats to its passengers in 2014. Southwest, however, does not have any assigned seating and therefore does not sell preferred seating. Instead, Southwest allows some customers to pay for early boarding, which increases the customer’s ability to select a desired seat. Table 2 shows airlines’ different approaches to charging fees for preferred seating. The pricing of preferred seats is not always apparent to customers on airlines’ websites, unless the customer selects or begins to book a specific flight. For example, some airlines, like Spirit and United, specify the range of prices, which may vary based on the route. Other airlines, like Alaska and Frontier, provide the minimum possible price but do not specify the maximum a customer might pay for the preferred seat. One of the selected airlines in our review, Delta, did not have preferred-seating prices available to customers browsing the website. All of the selected airlines provide detailed pricing on preferred seats if the customer selects or begins to book a specific flight. From 2010 to 2017, U.S. airlines introduced other new fees such as fees for carry-on bags, beverages, wireless internet access, and priority boarding. For example, three low-cost airlines implemented new fees for carry-on bags. Spirit introduced a fee for bringing a large bag into the cabin in 2010, as did Allegiant in 2012 and Frontier in 2013. None of the network airlines currently charge for carry-on bags. Allegiant and Frontier also began to charge customers for non-alcoholic beverages in 2012 and 2013, respectively, while Spirit already charged for these products. Since 2010, some U.S. airlines began charging for services that were not previously available. For example, Southwest first offered and charged for wireless internet access in 2011, and JetBlue began charging for expedited security screening and early boarding in 2011. While some customers are electing to pay extra for optional services, others are purchasing tickets that are priced lower and include optional service restrictions. For example, since 2015, American, Delta, and United have introduced Basic Economy fares. Passengers choosing to purchase Basic Economy tickets are assigned seats after checking in, meaning that they might not be seated with the rest of their travel group; board the aircraft last; cannot upgrade seats or class of service; and cannot change their flights. In addition, American and United Basic Economy passengers may not stow belongings in overhead compartments and are limited to one carry-on bag that fits under the seat in front of them. In yet another purchasing option, since 2010 several U.S. airlines have introduced packages of optional services that are sold together as a bundle instead of individually and can be purchased on top of or along with the base fare. The contents of these bundles vary greatly among airlines. 
For example, Frontier has two packages that include carry-on and checked bags, seat selection, and priority boarding; one of the packages also allows customers to change or cancel their tickets for full refunds. Other airlines' bundles include the base ticket as well as other optional services, such as JetBlue's "Blue Plus," which adds one checked bag to the basic fare. On some airlines, bundled packages also overlap with preferred seating; for example, on Hawaiian, an Extra Comfort seat provides a seat with additional legroom, priority boarding and security screening, an entertainment pack, and the use of a pillow and a blanket. Figure 2 illustrates different ways that airline passengers can elect to purchase optional services, depending on the airline.

From 2010 to 2017, fees for first and second checked bags on U.S. network airlines generally remained unchanged, while low-cost airlines generally increased these fees (see table 3). Among the five network airlines in our selection, only Alaska increased its first bag fee—from $20 to $25, which is the same price charged by other network airlines. Delta and Hawaiian did not increase their fee for first and second checked bags; however, they eliminated a $3 discount that was previously available for paying bag fees online in advance of a flight. Among low-cost U.S. airlines with which we spoke, Allegiant, Frontier, and Spirit each increased the fee range for the first and second checked bags from 2010 to 2017. These three airlines charge varying baggage fees based on when the passenger pays the fee; specifically, paying a bag fee online and in advance of the flight is less expensive than paying the bag fee at the airport on the day of travel. Southwest does not charge for a first or second checked bag, opting to use "bags fly free" as part of its marketing strategy.

Fees for other optional services—namely, fees for overweight and oversized bags, reservation changes and cancellations, and unaccompanied minors—generally increased from 2010 to 2017 on both network and low-cost U.S. airlines, as shown in table 3. Notably, roughly half of the airlines in our selection increased the overweight bag and unaccompanied minor fees, while a majority of the airlines in our selection increased reservation change or cancellation fees. Specifically, from 2010 to 2017, 7 of the 11 selected airlines increased their fees for checking overweight bags. In 2010, overweight bag fees ranged from $50 to $175, and in 2017, they ranged from $30 to $200. In addition, 6 of the selected airlines increased oversized bag fees, while the rest of the selected airlines either narrowed the range of fees for checking an oversized bag (e.g., Delta charged from $175 to $300 in 2010, but later switched to a flat $200 fee) or did not change the fees. In 2010, oversized bag fees ranged from $35 to $300, and in 2017, they ranged from $75 to $200. Six of the 10 selected airlines that charged reservation change and cancellation fees increased those fees from 2010 to 2017. (Southwest does not charge a reservation change or cancellation fee.) In 2010, the selected airlines charged from $50 to $150 to change or cancel a domestic reservation; in 2017, this fee ranged from $50 to $200. Five of the 11 airlines increased unaccompanied minor fees. According to airline financial data submitted to BTS, U.S.
airline revenues from baggage fees and reservation change and cancellation fees—the only fees for which revenues are separately reported—increased from a total of $6.3 billion in 2010 to $7.1 billion in 2016 in constant 2016 dollars. Specifically, revenues from baggage fees rose from $3.7 billion in 2010 to $4.2 billion in 2016 in constant 2016 dollars, an increase of nearly 12 percent. Similarly, revenues from reservation change and cancellation fees increased from $2.5 billion in 2010 to $2.9 billion in 2016 in constant 2016 dollars, an increase of more than 14 percent. Combined revenue from bag and reservation change and cancellation fees made up 3.3 percent of airlines’ operating revenues in 2010 and 3.5 percent of operating revenues in 2016. While revenue from baggage and reservation change and cancellation fees has increased, so has the number of passengers traveling on U.S. airlines. From 2010 to 2016, the number of passenger enplanements and the revenue from these optional services increased at similar rates. As discussed earlier, total enplanements on U.S. airlines increased by about 14 percent, from about 721 million in 2010 to 825 million in 2016. It is worth noting that, unlike the revenues from domestic airfares, revenues from most optional service fees are not subject to the excise tax that helps fund the Airport and Airway Trust Fund, which partially supports the Federal Aviation Administration and the operation of the air traffic control system. This issue was discussed in depth in our 2010 report and remains relevant as the amount of airline revenue generated by optional service fees increases. Airline officials said that airlines charge separately for optional services to better compete with other airlines. Officials from 9 of the 10 airlines with whom we spoke said that selling optional services separately from the base fare allows airlines to reduce the base ticket price. One airline official explained that customers make purchasing decisions based primarily on the base ticket price—the cost of flying from one point to another. According to this airline official, lowering the base fare therefore helps an airline compete with other airlines. Some airline officials cited other ways in which unbundling can lower ticket prices. For example, one airline official said that baggage fees have prompted customers to travel with fewer bags or no bags. As a result, the plane weighs less, which reduces fuel costs and, in turn, can allow the airline to reduce the base ticket price. Four other airline officials said that the lower base fares resulting from unbundling optional services have made flying more affordable to more people, thereby increasing the number of people who decide to travel by air. Officials from two airlines said that airfares have decreased over time, and an official from Airlines for America (A4A)—the U.S. airline trade association—cited BTS data during a May 2017 congressional hearing to show that consumers are paying less for airfare than they had previously; however, these data have some limitations. Data compiled by BTS indicate that the average domestic airfare decreased from $370 in 2010 to $349 in 2016 in constant 2016 dollars, a decrease of 5.6 percent. But, the fares include only base fares plus applicable government taxes and do not include all optional service fees. As a result, they do not represent the total amount that some customers may be paying to travel. 
In addition, according to DOT officials, DOT does not weight one-way tickets differently than round-trip tickets when calculating the average fares. DOT officials told us that customers are more likely to purchase one-way tickets now than they were 10 years ago because airlines no longer charge a premium for one-way tickets. As a result, a higher share of one-way tickets would result in lower average fares. Lastly, it is difficult to determine all the factors that could have caused this decrease in airfare, as several economy-wide changes, including those in energy prices, affect fares.

However, some studies have examined the effect bag fees may have had on ticket prices. To examine this issue, we conducted an economic literature search for any published, peer-reviewed studies that examined the introduction of bag fees by U.S. airlines and the effect on fare prices. The three studies that met our criteria (as described in appendix II) found that although the introduction of bag fees may have led to a decrease in average fares, the total price paid by customers who checked a bag may not have decreased on average. Specifically, these studies found that charging separately for bags reduced fares by less than the new bag fee itself. As a result, customers who paid for checked bags paid more on average for the combined airfare and bag fee than when the airfare and bag fee were bundled together. Conversely, passengers who did not check bags paid less overall. The results of these three studies are summarized below.

The authors of a 2012 study measured the impact of baggage fees on airline fares using DOT data from 2006 to 2009. They noted that airlines introduced bag fees to generate additional revenues without increasing fares, which would adversely affect demand. They found that for an airline charging a bag fee, a one-dollar increase in those fees resulted in a $0.24 decrease in fares, which means that a passenger checking one bag would pay $0.76 more on these airlines. According to the authors, these results imply that airlines with bag fees lower fares to appear more competitive and then make up the lost revenue when passengers pay to check bags.

In a 2015 study, the authors analyzed DOT quarterly data from 2008 to 2009 and found that adoption of a bag fee resulted in about a 3 percent reduction in average airfares. Analyzing non-stop flights and those with connections separately, they found that a bag fee led to a 2.7 percent and a 2.4 percent average-fare reduction for non-stop and connecting flights, respectively. The authors pointed out that, since these declines translated into an amount that was less than the bag fee, on average the combined total of the fare and bag fee increased. However, according to the authors, the decline could be greater than the bag fee for some passengers because the decline in average fare varies with route characteristics.

In another 2015 study, the authors studied a sample of U.S. domestic routes over the period 2007–2010, which covers the period when bag fees were first introduced (in 2008) and when many carriers increased bag fees (in 2010). To analyze the effect of bag fees on passenger demand and fares, the authors focused on a set of domestic airport-to-airport routes where passengers could choose between airlines that charged fees for checked baggage and Southwest, which allowed passengers one or two "free" checked bags. The authors found that a one-dollar increase in bag fees led to an $0.11 reduction in fares and a loss of 0.6 passengers.
On the other hand, they found that a one-dollar fare increase resulted in a loss of seven passengers. Thus, they determined that bag fees allowed airlines to increase their revenues with a much lower reduction in passenger demand than a fare increase. Finally, their evidence suggests that there is an overall increase in total fares for passengers checking bags.

Airline officials also said they charge separately for optional services to meet the needs of their customers. According to officials from 9 of the 10 selected airlines we interviewed, unbundling allows passengers to customize their flights by paying for only the services that they value—a benefit that one official cited as the overriding impetus for unbundling. That official described unbundling as an effort to make the airline's entire product line of services available to customers and provide passengers with the ability to tailor their travel experience. Similarly, another airline official explained that they aim to cater to a broad range of customers and unbundling allows passengers to decide on the price and service level that is right for them.

Airline officials from the 10 U.S. airlines that we interviewed cited various factors that contribute to their decisions about how to price optional services.

Customer demand and willingness to pay: Officials from all 10 airlines that we interviewed said that customer demand and the price that customers are willing to pay for an optional service are important factors in pricing an optional service. Customers' willingness to pay varies. Hence, when the price rises, some consumers who are not willing to pay a higher price stop purchasing, resulting in some loss of demand. Higher prices may thus result in higher or lower revenue depending on the extent to which the demand is reduced. For example, one airline official described how the airline would consider increasing the price for a preferred seat if the demand were high enough, indicating that there may be some customers willing to pay more for a preferred seat. An official from a different airline said that it conducts market testing to determine what optional services customers are interested in, and it may test products at different prices to determine the optimal price.

Competitors' prices for similar services: Officials from 8 of the 10 airlines that we interviewed said that they consider competitors' pricing for similar services when they set fees for optional services, to ensure that their own product is priced competitively. One airline official said that because commercial aviation is a highly competitive industry, the official's company closely monitors the market and makes adjustments to the price of services, as needed. Industry stakeholders with whom we spoke, as well as consumer advocates, believed that competition is a key factor in how airlines set fees for optional services.

Customer service and satisfaction: Officials from 5 of the 10 airlines we interviewed said that customer service and satisfaction are factors in how they set prices. Officials from one airline stated that they try to keep optional service fees relatively low to prevent passengers from feeling overcharged. In at least one case, an airline official told us that this airline sets the price of one type of fee to prevent too many people from purchasing the service.
For example, this airline official told us they set the price for wireless internet access high enough so that relatively few passengers will pay for it because too many users can affect the speed and quality of the service. Officials from 6 airlines said that they conduct customer surveys and adjust the price of optional services based on survey feedback. Cost: Officials from 3 of the 10 airlines that we interviewed said that the airlines’ cost to deliver a service is a major factor in how they charge for that service. One of these officials said that this airline conducts a business case analysis when developing a new product to ensure that the revenues from the new optional service exceed its cost. Officials from 3 additional airlines cited cost as a minor factor. Officials from 2 of these airlines said they incorporate cost into optional service pricing only for products for which cost is relatively easy to measure, such as food and beverages. Conversely, officials from 4 airlines did not cite cost as a factor in pricing optional services. Even airline officials who said that cost factors into their pricing decisions highlighted the complexity of calculating the precise cost of delivering many services. For example, one airline official explained that calculating the cost of cancelling a reservation requires consideration of the cost of the reservation system, corporate overhead, and possibly opportunity costs if the seat could not be re- sold. Another official from the same airline said that they closely track costs but do not necessarily have the ability to assign a specific cost to the provision of an optional service. This comment was echoed by several airline officials who said that calculating the cost of checking baggage, for example, requires consideration of a multitude of factors, including labor, ground infrastructure, and fuel costs. In addition, one airline official said that the airline does not always incur costs when offering some optional services, for example allowing a passenger to select a seat in a preferred location, such as a window seat or toward the front of a cabin, but the airline will sell the service because customers value it enough to pay for it. Industry stakeholders echoed the view that the cost of delivering optional services plays a minimal role in airlines’ pricing decision of optional services. One industry stakeholder we spoke with agreed and stated that the competitors’ prices and what customers are willing to pay are more important factors in how airlines set prices for optional services than the cost of delivering a product; this stakeholder said that the pricing of optional services is ultimately based on what will deliver the most revenue to the airline. DOT has taken a range of actions to improve transparency of U.S. airline fees for optional services since 2010, as described below. DOT conducts different compliance inspections of U.S. airlines to monitor compliance with its regulations on a variety of consumer traveler issues, including issues related to transparency of optional service fees. Since 2012, DOT has completed 19 compliance inspections of U.S. and foreign airlines according to documentation the department provided to us. DOT inspections are conducted on an ongoing basis, and according to DOT officials, have included repeated inspections of certain U.S. airlines that account for a significant percentage of U.S. enplanements. 
As part of these inspections, DOT inspectors review records onsite at airlines' headquarters as well as information on airlines' websites. As part of the website review, DOT verifies, among other things, that the website provides adequate information about optional service fees, that consumers are provided an opportunity to knowingly and voluntarily "opt in" to purchase optional services, and that the airline posts its current contract of carriage on its website in an easily accessible form per DOT regulations.

DOT also conducts additional targeted inspections to specifically assess airlines' compliance with DOT's consumer transparency regulations. For example, according to DOT documentation and officials, in 2012 DOT inspected 113 websites of U.S. airlines, foreign airlines, and ticket agents (websites that were marketed to U.S. consumers) to monitor compliance with specific provisions of the 2011 Consumer Rule 2. DOT found that most of the websites generally complied with Consumer Rule 2 provisions; however, 10 airlines and 1 ticket agent faced enforcement actions for violations.

According to DOT officials, the department documents any violations identified during inspections and contacts the airline to correct the violation. If the violation is long-standing and severe, DOT may take enforcement action, including imposing civil penalties. In other instances, airlines may receive a warning that enforcement action may be taken in the future if the violations are not corrected. According to DOT officials, in 2016, DOT issued 22 consent orders against airlines related to aviation consumer rule violations and assessed $5,955,000 in civil penalties.

DOT also analyzes and investigates passenger complaints about optional service fees that it receives via its website, mail, and telephone hotline. In 2014, DOT established a separate complaint category for optional service fees and began tracking the number of these complaints. DOT receives fewer complaints related to optional service fees than other topics, according to DOT officials. For example, DOT officials told us that in 2016 they received a total of 17,904 complaints, of which 862 (about 5 percent) were regarding airline fees for optional services. According to DOT officials, the two largest complaint categories that DOT receives are regarding flight problems (e.g., delayed flights) and baggage problems (e.g., lost or damaged bags). We requested and reviewed a selection of 2016 complaints related to optional service fees and found that complaints included concerns that fees for changing or cancelling reservations, transporting bags, and selecting seats were too high or that information about these fees was not transparent or fully disclosed to the customer.

DOT analyzes passenger complaint data to identify trends and investigate possible violations of DOT regulations. According to DOT officials, under DOT's process for handling complaints, when a complaint is received, a DOT official will review and categorize the information by type of complaint. DOT reviews the complaint to see whether a regulation applies and, regardless of whether it does, forwards all complaints to the applicable airline for the airline to respond to the consumer. Airlines are required to acknowledge each complaint within 30 days of receipt and provide a substantive written response to each complainant within 60 days of receipt.
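For a complaint received on a given date, the acknowledgment and response deadlines work out as simple date arithmetic. The sketch below only illustrates the 30-day and 60-day requirements stated above; the receipt date is hypothetical:

    from datetime import date, timedelta

    def complaint_deadlines(received):
        # Airlines must acknowledge within 30 days of receipt and provide a
        # substantive written response within 60 days of receipt.
        return {
            "acknowledge_by": received + timedelta(days=30),
            "respond_by": received + timedelta(days=60),
        }

    print(complaint_deadlines(date(2016, 3, 1)))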
After receiving and reviewing the complaint, if DOT determines the airline is in fact violating a regulation, DOT will ask for a copy of the airline's correspondence with the complainant. According to DOT officials, airlines generally respond to these requests in a timely fashion. A consumer complaint regarding compliance issues with an existing regulation can trigger an investigation in which DOT looks for egregious or repeated violations; a pattern of violations can lead to enforcement action. For example, in 2016, DOT issued a consent order against VivaAerobus, a foreign airline, after DOT found that the airline was not disclosing baggage and other optional service fees in accordance with regulations. The airline was fined $150,000 in civil penalties. DOT requires that U.S. airlines report revenues from optional service fees to BTS, which helps increase transparency regarding the amount of revenues generated from these fees. Currently, U.S. airlines are required to report this revenue in one of four separate accounts: baggage fees, reservation change and cancellation fees, other transport-related fees, and miscellaneous fees. Because revenues from baggage fees and reservation change and cancellation fees have their own accounts, the revenues from these particular fees can be tracked. However, according to DOT documentation, revenues from other optional service fees are reported either in the transport-related or miscellaneous fees accounts, which include revenue from optional services as well as from other sources. For example, according to DOT guidance, the transport-related revenue category includes not only revenue from all onboard sales (such as food, drink, entertainment, and wireless internet access) but also revenue from fuel or airplane parts sold to other airlines. Similarly, according to DOT's guidance, the miscellaneous category includes, for example, revenue for transporting unaccompanied minors and pets, as well as revenue from sales of miles to frequent flyer partners, such as airlines, credit card companies, hotels, and rental car companies. From 2010 to 2016, revenue in the transport-related account increased by 10 percent, from $36.5 billion to $40.1 billion in constant 2016 dollars. At the same time, revenue from the miscellaneous account increased by 87 percent, from $3.3 billion to $6.2 billion in constant 2016 dollars; this 87 percent increase was the largest among the four accounts. Over the same years, U.S. airlines' total operating revenues increased from $192.3 billion to $200.4 billion in constant 2016 dollars. Because the DOT data do not separate the revenue reported from optional service fees from the other types of revenues that are reported in the transport-related and miscellaneous fees accounts, we could not determine how much of the total revenue reported in these accounts should be attributed to optional service fees. In 2010, we reported on this issue and stated that without complete data, it is difficult for policy makers and regulators to determine total revenues from optional service fees and the fees' effect on the industry. We concluded that, without a separate account for reporting optional service fee revenue, it was difficult to determine the total amount of such revenue that airlines collect.
We recommended that DOT require airlines to report all optional service fees paid by passengers related to their trip into a separate category, exclusive of baggage fees and reservation change and cancellation fees (for which separate categories already exist). Citing our recommendation, DOT initiated a rulemaking in 2011 that proposed requiring airlines to report optional service fee revenues in 23 separate categories. However, the final rule has not yet been published and DOT has not taken any recent action on this rulemaking. According to DOT officials, DOT rulemakings are currently being evaluated in accordance with Executive Orders 13771 and 13777; thus, the schedules for many ongoing rulemakings are still to be determined. In our 2010 review we also found differences in how airlines report some optional services fee revenues. More specifically, in 2010, we found that airlines were reporting revenues from the same optional service fees into different accounts. Based on responses from our selected airlines, this issue persists. For example, we found that some airlines accounted for revenues from unaccompanied minor fees as transport-related revenue while others reported them as miscellaneous operating revenue, and two airlines reported fees for unaccompanied minors as revenue from reservation cancellation fees. We also found differences in how airlines reported revenue from preferred seating, upgraded seats, seat selection, and priority security screening. In some cases, we found airlines reported fees inaccurately despite guidance, and in others, it was not clear from DOT’s guidance how certain fees should be categorized. DOT’s guidance has not been updated since 2009, and according to officials, there are no current plans to do so. As previously discussed, since 2010 airlines have introduced a number of new fees and products for optional services, and determining how to report revenue from these fees into the existing four accounts may not be clear. However, even if DOT were to revise its guidance and provide more detailed information on how to categorize different fees, it would still not be possible to understand how much revenue is generated just from optional service fees because airlines would still be required to report this information in accounts that include revenue from other non-fee sources. Implementation of our 2010 recommendation that DOT require airlines to report all optional service fees, exclusive of baggage fees and reservation-change and cancellation fees, into a separate category would provide airlines with a clearer understanding of how to report revenue from specific optional service fees and provide the missing data on how much revenue is generated from optional service fees. DOT has taken several actions to educate airlines and consumers about existing regulations and consumer rights related to optional service fees, for example: In 2011, after the issuance of Consumer Rule 2, DOT conducted informational sessions with the airlines about the requirements of the new regulations. Additionally, DOT developed and issued guidance that provided answers to frequently asked questions regarding the new regulations. DOT provides consumers with information about their rights related to optional services through various publications available on its website. For example, DOT publishes “Fly Rights: A Consumer Guide to Air Travel” which provides information on a range of topics including airline fees, general refund policies, and information about DOT regulations. 
DOT has a webpage, “Air Travel Tips,” where it publishes a collection of tips and information to help airline passengers. These airline tips cover a wide range of topics, including information on how to file a complaint, DOT’s 24-hour refund policy, and airline fees. DOT also publishes a monthly “Air Travel Consumer Report.” This report provides consumers with information on a range of topics, such as information about aviation consumer complaints filed with DOT. According to DOT officials, these monthly consumer reports present information in which customers are most interested. Representatives from consumer advocacy organizations and industry groups representing global distribution system (GDS) companies, the online travel agent industry, and metasearch companies told us that DOT’s 2011 regulations have had a positive impact such as increased transparency regarding optional service fees. In particular, three consumer groups told us that the Full Fare Price Advertising regulation has resulted in more transparent pricing of airfares across the industry and has reduced instances of misleading airfare advertising. Three consumer and two industry groups also told us that DOT’s regulation requiring airlines to disclose all optional service fee information on their websites has been a positive step for the industry and has increased consumers’ understanding of how they may be charged for different services. While consumer groups recognized DOT’s progress in this area, they also reported a range of issues, discussed below, that persist related to the transparency of fees for optional services. Industry officials shared similar views. DOT has initiated several rulemakings in this area that might address these issues, but these rulemakings are still ongoing. While consumers are often able to obtain information about optional services and then purchase them directly from an airline’s website, information about these services and the ability to purchase these services is not always available from indirect sources, such as online travel agents. According to various estimates, about 50 percent of airline ticket sales occur through indirect sources. All four consumer advocacy groups that we spoke with told us that not being able to obtain information about optional services and purchase optional services from indirect sources at the time of booking decreases consumers’ ability to determine the full cost of their travel. A representative from one consumer advocacy group stated that having optional service fee information displayed alongside the airfare on online travel sites is important because it allows consumers to better compare different fares at the onset of their purchasing process. Two industry officials we spoke with agreed that not being able to purchase optional services through third parties decreases consumer transparency. One of these officials said that some basic optional services—seat selection in particular—are important to consumers, who prefer to be able to purchase these services when they purchase their tickets. In our interviews with airlines, officials from 8 of the 10 airlines we spoke with said that they make information about optional services, such as baggage fees, available to third parties, but that the level of information available about these fees varies across online travel agents. In addition, officials from 3 of the 10 airlines that we interviewed told us that they make optional services available for purchase through indirect channels. 
According to these officials, the types of optional services available for purchase differ by airline and by distribution channel. Officials from the remaining 6 airlines told us that they sell products primarily through direct channels (i.e., their own websites or customer service) because (1) they have more control over how those products are marketed to consumers and (2) the third-party websites have technical limitations in how they can display and sell optional service products. For example, one airline official stated that his airline can better differentiate its products from other airlines' products on its own website than on an online travel agency or metasearch company site. In addition, officials from the International Air Transport Association (IATA) told us that the industry as a whole is taking steps to develop standards and capabilities for optional services to be more widely available for purchase through GDSs and online travel agencies. In 2010, we recommended that DOT require airlines to disclose baggage fees and policies along with fare information across all sales channels used by the airline. In 2017, DOT issued a Supplemental Notice of Proposed Rulemaking and a Request for Information that may address our recommendation; both relate to how information about optional service fees would be distributed to ticket agents, including GDSs, online travel agents, and metasearch companies. However, in March 2017, DOT indefinitely suspended the public comment period for the proposed rule and information request to allow the President's appointees the opportunity to review and consider the actions. In addition, this rule and information request may address some of the issues raised by consumer and industry groups that we interviewed, but not all. For example, while the Supplemental Notice of Proposed Rulemaking proposes requiring covered carriers to provide baggage fee information to all ticket agents that distribute fare and schedule information, it does not require that the information be made transactable (i.e., it does not require that airlines permit online travel agencies to sell these optional services). In addition, the proposed rulemaking would only require that information about baggage fees be made available, not other types of optional service fees. The Request for Information asks for comments from interested parties on whether airline restrictions on the distribution or display of airline flight information harm consumers and constitute an unfair and deceptive business practice or an unfair method of competition, among other questions. As previously mentioned, according to DOT officials, this rulemaking and request for information are currently being evaluated in accordance with Executive Orders 13771 and 13777, and the schedules for many ongoing rulemakings are still to be determined. Representatives from the four consumer groups and three industry groups we spoke with also noted that as a result of the variety of new optional service fees, bundled products, and fares that airlines now offer, it has become increasingly difficult for consumers to compare airfare ticket prices, fees, and associated rules, and understand what is included in their purchases. As previously discussed, airlines have increased the number of fees for optional services and have begun introducing different optional service bundles and fares.
Representatives from two consumer groups whom we interviewed said that even though airlines are required to have a "static page" on their website listing all optional service fees, these lists can be lengthy and can include several different fees with different associated rules that the consumer would need to interpret and understand. One consumer group representative stated that for consumers, comparing fares and optional service fees across multiple airlines can be challenging and time-intensive. Representatives from four consumer groups also noted that with the emergence of different fare products, there is greater potential that consumers might not fully understand what they are purchasing and what is included in the fare. According to one consumer advocate, when a single airline is offering three different economy products, consumers may not understand how these products differ, and this lack of clarity is even more acute on third-party websites. According to DOT officials, they contact airline officials for additional information when airlines introduce new products that DOT believes will significantly affect consumers. DOT's goal is to ensure that relevant information about conditions and restrictions of new products is accurately disclosed to consumers in a timely manner. For example, as discussed earlier in this report, since 2015, American, Delta, and United have introduced new Basic Economy fares. The key features of these fares are restrictions related to, among other things, advance seat selection and, in the case of American and United, personal carry-on bags. According to DOT officials, they have monitored Delta's introduction of Basic Economy and contacted American and United to get information about those airlines' plans to inform consumers about these fares and restrictions. DOT officials told us that, going forward, they intend to monitor complaints related to Basic Economy fares to determine if consumers are experiencing any issues associated with these tickets. While airlines are required to provide customers with a contract-of-carriage document, which generally includes information about an airline's optional service fees and policies, consumer advocates have raised concerns that these documents are often lengthy and difficult to understand. Each airline has its own contract of carriage, which is the legally binding contract between the airline and its passengers. These documents are important because they provide useful information to consumers about the individual airline's contract terms, policies, and rules related to different services such as check-in deadlines, responsibility for delayed flights, and optional service refund policies—all of which can vary across airlines. Any term or condition of this contract is legally binding on the airline and the passenger and may be enforced in court. However, according to three consumer advocates we spoke with, these documents are often lengthy and can be filled with legal jargon, making the documents difficult to understand. We reviewed contracts of carriage for the 11 selected airlines and found that they ranged from approximately 17 to 74 pages, with an approximate average length of 40 pages. In addition, we tested the 11 contract-of-carriage documents with an automated grade-level readability test and found they require a reading level of someone with a college graduate degree.
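The automated grade-level test referred to above is identified in the methodology section of this report as the Flesch-Kincaid Grade-Level test included in Microsoft Word. As a rough, hypothetical illustration of how such a score is computed, the short Python sketch below applies the standard Flesch-Kincaid formula with a naive syllable counter; Word counts sentences and syllables with its own rules, so results will not match exactly, and the sample sentence is invented rather than taken from any airline's contract of carriage.

import re

def approx_syllables(word):
    # Rough syllable estimate: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Standard formula: 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    word_count = max(1, len(words))
    syllables = sum(approx_syllables(w) for w in words)
    return 0.39 * (word_count / sentences) + 11.8 * (syllables / word_count) - 15.59

sample = ("The carrier shall not be liable for loss, damage, or delay arising "
          "from the transportation of baggage pursuant to this contract.")
print(round(flesch_kincaid_grade(sample), 1))

Longer sentences and words with more syllables push the score toward higher grade levels, which is why dense, jargon-heavy contract language tends to score at or above the college level.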
According to DOT officials, the DOT Advisory Committee for Aviation Consumer Protection had recommended that DOT take steps to require airlines to simplify language in their contract-of-carriage documents. DOT officials stated that the department opted to not take specific action in this area because it did not want to get involved in contracts between airlines and passengers. However, the DOT Advisory Committee for Aviation Consumer Protection subsequently recommended in 2012 that DOT work with the airlines to survey how they define certain terms frequently used in their contracts of carriage and customer service plans. The department worked with A4A to develop such a document, which DOT then placed on its web site to assist consumers with understanding the terms and conditions of their travel. We reviewed this document and found that it provides an explanation of frequently used terminology in the airline industry and provides links to information about DOT’s consumer protection regulations, such as regulations related to baggage fee and code-share disclosures, and denied-boarding compensation requirements. According to a representative from A4A, airlines have committed to reviewing their contracts of carriage to see if they can be simplified to improve transparency. We provided a draft of this report to DOT for review and comment. DOT officials provided technical comments that we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Transportation, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Other countries have taken similar actions to the United States to increase consumer transparency of airline optional service fees. Specifically, the European Union (EU), Canada, Malaysia, and the United Kingdom (UK) have all enacted laws that include provisions related to increasing the transparency of airline optional service fees. For example, in 2008, the EU enacted Regulation (EC) No 1008/2008 on Common Rules for the Operation of Air Services in the Community, which established, among other things, specific requirements related to disclosing information about airline optional service fees. The law requires that airlines display the final price of a ticket, inclusive of all applicable taxes, charges, surcharges and fees unavoidable and foreseeable at the time of publication. The law also requires that the final price is disclosed at all times during the booking process and that the applicable conditions are published. In addition, airlines must disclose optional service fees in a clear, transparent, and unambiguous way at the start of the booking process, such as through a separate link on the airline’s website, and airlines must ensure that optional services are only offered on an “opt-in” basis. As shown in table 4, other countries have implemented similar laws. In addition, in 2017, Canada introduced consumer protection legislation related to air transportation that would require, among other things, that the Canadian Transportation Agency promulgate regulations establishing air passenger rights. 
There are some differences between the U.S. and foreign laws. For example, Part V, Division III of Canada's Air Transportation Regulations requires advertisements of air services to specify all optional services offered, as well as their price or range of prices, including applicable taxes and fees charged by government or public authority. In addition, the regulations prohibit providing information in an advertisement in a manner that would interfere with the ability of anyone to readily determine the price for the air service, including for any optional incidental service. The Canadian law does not otherwise impose specific requirements as to how this information is to be made available to consumers. According to officials from the Canadian Transportation Agency, airlines can decide how they want to communicate this information to the consumer. Also, according to officials from the European Commission, a 2014 European Court of Justice decision determined that airlines cannot charge for bringing hand bags or carry-on luggage into the plane's cabin. According to officials, the court's rationale for the decision was that airlines do not provide the passenger any services regarding carry-ons, as passengers carry their own bags. Officials from the regulatory bodies that we interviewed said that they monitor airline compliance with consumer transparency laws through various methods. For example, according to a European Commission official, the European Commission and the authorities of EU member states (countries) monitor airline compliance with consumer transparency laws and have carried out several reviews to determine the level of compliance with Regulation (EC) No 1008/2008 in different EU countries. According to documentation provided by the European Commission, in 2013, the European Commission, in conjunction with national authorities, initiated reviews of 552 websites and found that 382 websites were not compliant with the EU consumer transparency law. One problem they found was that optional services, including baggage, insurance, and priority boarding, were not being offered on an "opt-in" basis. In addition, the European Commission conducted a "fitness check" of aviation regulations in 2013 to determine whether the airline consumer transparency laws and other consumer protection laws were meeting their objectives and whether any changes to these laws were needed. According to documentation provided by the European Commission, the review found that the laws were meeting their objectives, but that EU member states faced some challenges with enforcing these laws and could benefit from further coordination and sharing of best practices in this area. According to an official from the European Commission, it has also recently begun an evaluation of Regulation (EC) No 1008/2008 to determine whether there are any areas of the law that could be improved. Similarly, officials from the UK and Canada have taken actions to assess compliance with existing consumer transparency laws. Officials from the United Kingdom's Civil Aviation Authority (UK CAA) told us that in 2010 and 2011 they reviewed the websites of the top 20 airlines flying from the UK and two smaller UK airlines to assess compliance with Regulation (EC) No 1008/2008. According to UK CAA officials, at that time, they discovered that most airlines were not in compliance and all the airlines had to make some changes to their websites.
Issues included failing to include unavoidable taxes, fees, and charges in the headline price; pre-selecting optional extras; and not separately disclosing information about optional service fees. UK CAA officials notified airlines that were not in compliance and provided them with information on the steps that the airlines needed to take to ensure that their advertisements met the requirements of the law. According to UK CAA officials, most of the airlines agreed to amend their websites; however, there were three airlines where the UK CAA had to take enforcement action to achieve compliance. According to UK CAA officials, compliance with the EU law has improved since 2010, and most airlines now have a link posted early in the booking process to baggage fees and other optional service fees. The UK CAA has also completed reviews of travel agents' websites to assess compliance with Regulation (EC) No 1008/2008, according to UK CAA officials. Canadian officials told us that the Canadian Transportation Agency monitors and enforces airline compliance with consumer transparency laws. To do so, they conduct periodic carrier inspections to assess compliance with Canada's Air Transportation Regulations and conduct targeted investigations when needed. The Canadian Transportation Agency also reviews international tariffs to determine whether they set out all the information required by regulation and to assess whether airline policies with regard to optional service fees are clear, reasonable, and not unduly discriminatory, according to Canadian officials. Malaysian officials whom we spoke with stated that they are developing processes for monitoring and enforcing compliance with Malaysia's consumer protection code. Officials from some of the regulatory bodies whom we interviewed said that they have taken other actions to educate airline consumers about consumer transparency laws related to optional service fees. For example, the UK CAA posts information on its website about existing optional service fees for the 20 largest airlines that travel to and from the UK, and the document is updated twice a year. According to UK CAA officials, they make this information available on their website to inform UK consumers about these fees and help consumers compare different optional service fees across different airlines. Similarly, the Malaysian Aviation Commission has a "Know Your Rights, Before You Fly" webpage, where, according to Malaysian officials, they post information for consumers, including their rights related to airline optional service fees. Our objectives for this report were to describe: (1) how selected U.S. airlines have modified their offering and pricing of optional services since 2010, (2) the factors that selected U.S. airlines consider when determining whether and how much to charge for optional services, and (3) the actions the Department of Transportation (DOT) has taken since 2010 to improve the transparency of optional service fees and the views of selected aviation stakeholders about these actions. We also described the actions taken by selected regions or countries to improve consumer transparency related to airline optional service fees and presented this information in appendix I. To identify the ways in which selected U.S. airlines modified their offering and pricing of optional services since 2010, we first selected U.S. passenger airlines to examine.
We used data from DOT's Bureau of Transportation Statistics (BTS) on passenger enplanements and airline operating revenues. For the passenger data, we used the T-100 database, which includes traffic data for U.S. airlines traveling to and from the United States. These data represent a 100 percent census of all traffic. For the financial data, we used Form 41 quarterly financial filings to BTS, specifically Schedule P-1.2. We relied on the most recent available BTS data at the time we developed our airline selection. To assess the reliability of the BTS enplanements and operating revenue data, we reviewed documentation about the quality control procedures applied by BTS; analyzed the summary data for obvious errors; and interviewed BTS officials about how the data are collected, validated, stored, and protected. We determined that the data were sufficiently reliable for the purposes of identifying airlines to include in our selection for this audit work. We selected U.S. passenger airlines that: (1) reported annual operating revenues of at least $20 million in calendar year 2015, (2) had 1 million or more domestic passengers in calendar year 2015, (3) had at least 1,000 scheduled passengers in the third quarter of 2016, and (4) operated under their own brand. This selection process resulted in a list of 12 U.S. passenger airlines: Alaska Airlines, Allegiant Air, American Airlines, Delta Air Lines, Frontier Airlines, Hawaiian Airlines, JetBlue Airways, Sun Country Airlines, Southwest Airlines, Spirit Airlines, United Airlines, and Virgin America. Collectively, the selected airlines transported 81.15 percent of U.S. domestic passengers in 2016 and accounted for 99.88 percent of baggage fees and 99.98 percent of rebooking and cancellation fees charged by all U.S. airlines in 2016. During the course of our review, Alaska Air Group, which owns Alaska Airlines, purchased Virgin America. As a result, we eliminated Virgin America from our selection midway through our review, and this report covers the remaining 11 selected airlines. We identified the types of optional services offered by the 11 selected airlines by reviewing webpages on airlines' websites, which are required to prominently display optional services and fees. We accessed the airlines' websites and took screen captures of the webpages with optional service fee information on March 31 and April 1, 2017, so that our analyses of the website content would be as comparable as possible. We also returned to airlines' websites at later points to collect additional information. We compared the optional service fee information that we gathered from our review of airlines' websites to optional service fee information that we collected as part of our 2010 review of airline fees to assess how these fees had changed since 2010. We corroborated information obtained from our review of airlines' websites through interviews with officials from 10 of the 11 selected airlines. We requested interviews with representatives from all 11 selected airlines, but one airline declined to be interviewed. As a result, we interviewed officials from 10 of the 11 selected airlines. In interviewing the airline officials, we used a semi-structured interview instrument, which contained questions pertaining to the types of optional service fees and bundled fare products that airlines have introduced since 2010, the factors that airlines consider when setting fees, and airlines' views on advantages and disadvantages to consumers of unbundling optional services.
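As an illustration of the airline screening criteria listed above, the following Python sketch applies the four thresholds to a small, entirely hypothetical table; the carrier names, column names, and values are invented, and the actual selection was made against BTS T-100 traffic data and Form 41 Schedule P-1.2 financial filings rather than a table of this form.

import pandas as pd

# Hypothetical, illustrative records; these are not actual BTS data or field names.
airlines = pd.DataFrame({
    "carrier": ["Carrier A", "Carrier B", "Carrier C"],
    "operating_revenue_2015": [25_000_000, 18_000_000, 4_100_000_000],
    "domestic_passengers_2015": [1_200_000, 800_000, 95_000_000],
    "scheduled_passengers_q3_2016": [350_000, 900, 24_000_000],
    "operates_own_brand": [True, True, True],
})

selected = airlines[
    (airlines["operating_revenue_2015"] >= 20_000_000)        # criterion (1)
    & (airlines["domestic_passengers_2015"] >= 1_000_000)     # criterion (2)
    & (airlines["scheduled_passengers_q3_2016"] >= 1_000)     # criterion (3)
    & airlines["operates_own_brand"]                          # criterion (4)
]
print(selected["carrier"].tolist())  # Carrier A and Carrier C meet all four criteria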
During our interviews, we also asked airlines if they would be willing to share cost information on optional services with us; all 10 airlines either declined to provide us with such proprietary information, said that they do not collect specific cost information on optional services, or said that they collect cost information only in limited circumstances, such as food and beverage costs, which would not have been useful for this report. In addition to answering our interview questions, we asked the 10 airlines to provide information on when they first began charging for specific optional services, how much those fees were at the time that they were first introduced, and how revenue from those fees is categorized and reported to BTS. Nine of the 10 airlines that we interviewed provided this information. One of the responsive airlines did not provide a complete response, but the overall responses were sufficiently detailed to address our objective. In addition to obtaining fee information, we analyzed airline financial information reported to BTS by airlines from calendar year 2010 through calendar year 2016—the most recent available—to analyze how revenue generated from optional service fees had changed from 2010 to 2016. We analyzed revenue data from baggage fees and reservation-change and cancellation fees, which are the two types of optional service-fee revenues that airlines are required to report to BTS in separate accounts. All other optional service-fee revenue is reported in accounts that include other airline revenue sources. We assessed the reliability of BTS's operating revenue data, as discussed above, and determined that they were reliable for the purpose of reporting overall trends in revenue from baggage and reservation-change and cancellation fees for 2010 through 2016. To identify the factors that airlines consider when setting optional service fees, we interviewed officials from the 10 selected U.S. airlines that agreed to speak with us about the factors airlines consider when deciding whether to separate optional service fees from the base fare price and determining how much to charge for a given optional service. We also interviewed selected aviation stakeholders that included three airline trade associations; four consumer groups; three global distribution system (GDS) companies; a travel trade association representing GDSs, online travel agents, and metasearch companies; and two other industry stakeholders to obtain their views on factors that airlines consider when setting fees (see table 5). We selected the three airline trade associations that represent different airlines (i.e., domestic airlines, international airlines, and regional airlines). We selected four consumer groups that represent a range of types of airline consumers (i.e., business travelers and leisure travelers) and that recently published articles on consumer transparency issues related to optional service fees. With regard to the GDSs, we selected the three largest GDS companies in the United States. To obtain the perspective of companies that provide information or sell airline tickets through indirect distribution methods, we also selected the travel trade association that represents GDSs, online travel agents, and metasearch companies. Finally, we selected two industry stakeholders because they have observed how the airline industry has changed since 2010 and cover the breadth of the airline industry.
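The revenue trend analysis described above compares fee revenues across years in constant 2016 dollars, which requires deflating earlier-year nominal amounts before computing changes. A minimal Python sketch of that adjustment follows; the deflator and revenue values are hypothetical placeholders, since the report does not state which price index was used or the underlying nominal amounts.

# Hypothetical price index relative to 2016 (2016 = 1.00); not the index GAO actually used.
deflator = {2010: 0.90, 2016: 1.00}

def to_constant_2016(nominal, year):
    # Express a nominal dollar amount in constant 2016 dollars.
    return nominal / deflator[year]

def pct_change(start, end):
    # Percentage change from start to end, in percent.
    return (end - start) / start * 100.0

# Hypothetical nominal fee revenues, in billions of dollars.
rev_2010 = to_constant_2016(5.7, 2010)   # about 6.3 in constant 2016 dollars
rev_2016 = to_constant_2016(7.1, 2016)   # already in 2016 dollars
print(round(pct_change(rev_2010, rev_2016), 1))  # percent change in real terms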
We also conducted an economic literature search for studies that have examined the effect of baggage fees on base ticket prices. The literature search was performed in March 2017, using keyword and controlled-vocabulary searches in bibliographic databases including Transportation Research International Documentation, Academic OneFile, Scopus, and WorldCat. The terms included, but were not limited to, keywords such as "airline" or "air carrier," "baggage fees" or "ancillary fees," "unbundling" or "de-bundling," combined with "impact" or "affect" or "effect" and "ticket price" or "base fare" or "airline pricing." We limited our search to studies published after 2010 because baggage fees were first introduced in 2008. The literature search generated 11 initial results. We vetted this initial list by examining the abstracts for those that addressed our objectives and by determining which studies had appeared in peer-reviewed journals. We identified 5 studies that met these criteria. After performing a secondary review of these studies to assess the soundness of the studies' methodologies and to confirm the relevance to our objectives, we excluded 2 of these studies. This resulted in the following 3 studies being included in our report. Henrickson, Kevin E. and John Scott, 2012, "Baggage Fees and Changes in Airline Ticket Prices," in James Peoples (ed.), Pricing Behavior and Non-Price Characteristics in the Airline Industry (Advances in Airline Economics, Volume 3) Emerald Group Publishing Limited, 177–192. Brueckner, Jan K., Darin N. Lee, Pierre M. Picard, and Ethan Singer, 2015, "Product Unbundling in the Travel Industry: The Economics of Airline Bag Fees," Journal of Economics & Management Strategy 24 (3): 457. Scotti, D. and M. Dresner, 2015, "The Impact of Baggage Fees on Passenger Demand on US Air Routes," Transport Policy 43: 4–10. We also conducted several literature searches using online resources to identify reports, studies, articles, or other publications that discussed the use of optional service fees in the airline industry. To determine what actions DOT has taken since 2010 to improve the transparency of optional service fees, we reviewed DOT regulations promulgated since 2010, such as regulations that establish requirements on the disclosure of optional service fees, the refunding policies of optional service fees, and post-purchase price increase limitations on baggage fees. In addition, we identified four DOT proposed rules and one request for information related to transparency of optional service fees and followed up with DOT regarding the status of these ongoing rulemakings and request for information. We also reviewed DOT guidance, directives, policies, and other documentation clarifying the requirements of various regulations and describing the roles and responsibilities of DOT's Office of Aviation Enforcement and Proceedings (OAEP) and Aviation Consumer Protection Division (ACPD) related to monitoring, investigating, and enforcing airline compliance with regulations related to airline optional service fees. To understand how OAEP and ACPD monitor compliance with existing optional service fee regulations and respond to consumer complaints about optional service fees, we interviewed officials from OAEP and ACPD about existing regulations; the policies, compliance, and enforcement activities undertaken by these offices; and the process for responding to complaints.
In addition, we interviewed officials from BTS about their guidance and process for collecting optional service fee revenue data from airlines. As described above, we asked officials from the 10 airlines that agreed to be interviewed about how they report revenue from certain optional service fees to DOT's BTS. We summarized this information and identified fees that airlines commonly reported in different categories. To understand how different stakeholders view actions DOT has taken to improve transparency of optional service fees, we interviewed stakeholders described in table 5. In addition, during our interviews with officials from the 10 selected U.S. airlines that agreed to speak with us, we obtained their views on DOT's actions and obtained information about how their airlines comply with DOT regulations related to optional service fees. We also conducted an analysis of the contracts of carriage for all 11 airlines in our selection. This helped us to corroborate information we obtained from our interviews with airline officials about their optional service refund policies. These contracts of carriage were all accessed and downloaded on March 14, 2017, so that our analyses of the contract-of-carriage content would be as comparable as possible. To assess the readability of the contracts of carriage, we converted the files to Microsoft Word documents and ran the Flesch-Kincaid Grade-Level test, which is included in the Microsoft Word software. Finally, we reviewed documents and interviewed officials from four selected foreign governments—the European Union, Canada, the United Kingdom, and Malaysia—that have taken actions to improve consumer transparency related to airline optional service fees. Specifically, we interviewed officials from the European Commission, the Canadian Transportation Agency, the United Kingdom Civil Aviation Authority, and the Malaysian Aviation Commission. We based our selection of foreign governments on various factors, including whether the region or country has implemented or is considering implementing laws related to increasing consumer transparency of airlines' optional service fees, and recommendations from our interviews with DOT and industry officials and stakeholders. We interviewed officials about existing laws related to consumer transparency of optional service fees, how these laws are monitored and enforced, and the effects of these laws on the airline industry and airline consumers. In addition, we used these interviews to corroborate information obtained from our document reviews. We conducted this performance audit from November 2016 to September 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Faye Morrison (Assistant Director), Maria Mercado (Analyst in Charge), Namita Bhatia-Sabharwal, Lacey Coppage, Leia Dickerson, Michele Fejfar, Hannah Laufe, Malika Rice, Amy Rosewarne, and Eric Warren made key contributions to this report.
Since 2008, U.S. passenger airlines have increasingly charged fees for optional services that were previously included in the price of a ticket, such as checked baggage or seat selection. Consumer advocates have raised questions about the transparency of these fees and their associated rules. In April 2011, DOT issued a final rule requiring, among other things, that certain U.S. and foreign airlines disclose information about optional service fees on their websites. GAO was asked to review issues related to optional service fees in the U.S. aviation industry. This report describes: (1) how selected U.S. airlines have modified their offering and pricing of optional services since 2010, (2) the factors that selected U.S. airlines consider when determining whether and how much to charge for optional services, and (3) actions DOT has taken since 2010 to improve the transparency of optional service fees and views of selected aviation stakeholders about these actions. GAO reviewed 2010 and 2017 airline data on optional service fees charged by the 11 largest U.S. passenger airlines; analyzed airline financial data from 2010 to 2016 reported to DOT; reviewed economic studies examining the effects of bag fees on fares; and reviewed applicable laws. GAO requested interviews with representatives of all 11 selected U.S. airlines; 10 agreed to be interviewed, and one declined. GAO also interviewed DOT officials, consumer advocates, and other aviation industry stakeholders. DOT reviewed a draft of this report and provided technical comments that GAO incorporated as appropriate. Since 2010, selected U.S. airlines have introduced a variety of new fees for optional services and increased some existing fees. For example, each of the 11 U.S. airlines that GAO examined introduced fees for "preferred" seating, which may include additional legroom or a seat closer to the front of the economy cabin. Some of these airlines have also introduced new fees for other optional services, such as fees for carry-on baggage and priority boarding. Since 2010, many of the selected airlines have also increased existing fees for some optional services, including fees for checked baggage and for changing or cancelling a reservation. From 2010 to 2016, U.S. airlines' revenues from these two fees—the only optional service fees for which revenues are separately reported to the Department of Transportation (DOT)—increased from $6.3 billion in 2010 to $7.1 billion in 2016 (in constant 2016 dollars). Airline officials cited competition from other airlines and customer demand, among other things, as factors they consider when deciding whether and how much to charge for optional services. According to officials from 9 of the 10 selected airlines GAO interviewed, the process of "unbundling" allows passengers to customize their flight by paying for only the services that they value. Airline officials said that charging fees for optional services allows the airlines to offer lower base airfares to customers. For customers traveling with bags, however, GAO's review of airline-related economic literature showed that, on average, customers who paid for at least one checked bag paid more in total for the airfare and bag fees than they did when airfares included checked baggage.
Officials from the 10 airlines said they also consider customer demand and willingness to pay when setting prices for optional services, and officials from 8 of these airlines noted that competitors' prices for similar services are another factor used in determining the amount of fees. Since 2010, DOT has taken or has proposed a range of actions to improve the transparency of airlines' fees for optional services. These actions include: (1) monitoring and enforcing airlines' compliance with existing transparency regulations; (2) collecting, reviewing, and responding to consumers' complaints; (3) collecting additional data on revenue generated from fees; and (4) educating airlines and consumers about existing regulations and consumer rights related to optional service fees. Consumer and industry stakeholders, such as online travel agents' representatives, told GAO that DOT's regulations requiring certain airlines to disclose optional service fees on their websites have improved consumer transparency. However, these stakeholders also told GAO that there are additional transparency challenges, such as when consumers search for and book flights through online travel agents. Because optional services are not always available for purchase and because fees for such services are not always disclosed through online travel agents, these stakeholders argue that consumers are not always able to determine the full cost of their travel and compare costs across airlines before they purchase their tickets. While transparency challenges still exist, DOT has ongoing regulatory proceedings, some in response to prior GAO recommendations, that may resolve some of these issues.
SARS is a respiratory illness that has recently been reported principally in Asia, Europe, and North America. The World Health Organization reported on May 5, 2003, that there were an estimated 6,583 probable cases reported in 27 countries, including 61 cases in the United States. There have been 461 deaths worldwide, none of which have been in the United States. Of the 56 probable cases in the United States reported through April 30, 2003, 37 (66 percent) were hospitalized, and 2 (4 percent) required mechanical ventilation. Symptoms of the disease, which may be caused by a previously unrecognized coronavirus, can include a fever, chills, headache, other body aches, or a dry cough. A Canadian official recently reported that more than 60 percent of probable SARS cases in Canada, where the bulk of North American cases have occurred, resulted from transmission to health care workers and patients. Canada’s experience with managing the SARS outbreak has shown that measures used to prevent and control emerging infectious diseases appear to have been useful in controlling this outbreak. One of the measures that it has undertaken to control the outbreak is isolating probable cases in hospitals, including closing two hospitals to new admissions. Other measures include isolating people, either in their homes or in a hospital, who have had close contact with a SARS patient and providing educational materials regarding SARS to people who have traveled to locations of concern. In order to be adequately prepared for a major public health threat such as SARS in the United States, state and local public health agencies need to have several basic capabilities, whether they possess them directly or have access to them through regional agreements. Public health departments need to have disease surveillance systems and epidemiologists to detect clusters of suspicious symptoms or diseases in order to facilitate early detection of disease and treatment of victims. Laboratories need to have adequate capacity and necessary staff to test clinical and environmental samples in order to identify an agent promptly so that proper treatment can be started and infectious diseases prevented from spreading. All organizations involved in the response must be able to communicate easily with one another as events unfold and critical information is acquired, especially in a large-scale infectious disease outbreak. In addition, plans that describe how state and local officials would manage and coordinate an emergency response need to be in place and to have been tested in an exercise, both at the state and local levels and at the regional level. Local health care organizations, including hospitals, are generally responsible for the initial response to a public health emergency. In the event of a large-scale infectious disease outbreak, hospitals and their emergency departments would be on the front line, and their personnel would take on the role of first responders. Because hospital emergency departments are open 24 hours a day, 7 days a week, exposed individuals would be likely to seek treatment from the medical staff on duty. Staff would need to be able to recognize and report any illness patterns or diagnostic clues that might indicate an unusual infectious disease outbreak to their state or local health department. Hospitals would need to have the capacity and staff necessary to treat severely ill patients and limit the spread of infectious disease. 
In addition, hospitals would need adequate stores of equipment and supplies, including medications, personal protective equipment, quarantine and isolation facilities, and air handling and filtration equipment. The federal government also has a role in preparedness for and response to major public health threats. It becomes involved in investigating the cause of the disease, as it is doing with SARS. In addition, the federal government provides funding and resources to state and local entities to support preparedness and response efforts. CDC’s Public Health Preparedness and Response for Bioterrorism program provided funding through cooperative agreements in fiscal year 2002 totaling $918 million to states and municipalities to improve bioterrorism preparedness and response, as well as other public health emergency preparedness activities. HRSA’s Bioterrorism Hospital Preparedness Program provided funding through cooperative agreements in fiscal year 2002 of approximately $125 million to states and municipalities to enhance the capacity of hospitals and associated health care entities to respond to bioterrorist attacks. In March 2003, HHS announced that the CDC and HRSA programs would provide funding of approximately $870 million and $498 million, respectively, for fiscal year 2003. Among the other public health emergency response resources that the federal government provides is the Strategic National Stockpile, which contains pharmaceuticals, antidotes, and medical supplies that can be delivered anywhere in the United States within 12 hours of the decision to deploy. Just as was true with the identification of the coronavirus as the likely causative agent in SARS, deciding which influenza viral strains are dominant depends on data collected from domestic and international surveillance systems that identify prevalent strains and characterize their effect on human health. Antiviral drugs and vaccines against influenza are expected to be in short supply if a pandemic occurs. Antiviral drugs, which can be used against all forms of viral diseases, have been as effective as vaccines in preventing illness from influenza and have the advantage of being available now. HHS assumes shortages of antiviral drugs and vaccines will occur in a pandemic because demand is expected to exceed current rates of production. For example, increasing production capacity of antiviral drugs can take at least 6 to 9 months, according to manufacturers. In the cities we visited, state and local officials reported varying levels of public health preparedness to respond to outbreaks of diseases such as SARS. They recognized gaps in preparedness elements such as communication and were beginning to address them. Gaps also remained in other preparedness elements that have been more difficult to address, including the disease surveillance and laboratory systems and the response capacity of the workforce. In addition, we found that the level of preparedness varied across the cities. Jurisdictions that had multiple prior experiences with public health emergencies were generally more prepared than those with little or no such experience prior to our site visits. We found that planning for regional coordination was lacking between states. In addition, states were working on plans for receiving and distributing the Strategic National Stockpile and for administering mass vaccinations. 
States and local areas were addressing gaps in public health preparedness elements, such as communication, but weaknesses remained in other preparedness elements, including the disease surveillance and laboratory systems and the response capacity of the workforce. Gaps in capacity often are not amenable to solution in the short term because either they require additional resources or the solution takes time to implement. We found that officials were beginning to address communication problems. For example, six of the seven cities we visited were examining how communication would take place in a public health emergency. Many cities had purchased communication systems that allow officials from different organizations to communicate with one another in real time. In addition, state and local health agencies were working with CDC to build the Health Alert Network (HAN), an information and communication system. The nationwide HAN program has provided funding to establish infrastructure at the local level to improve the collection and transmission of information related to public health preparedness. Goals of the HAN program include providing high-speed Internet connectivity, broadcast capacity for emergency communication, and distance-learning infrastructure for training. State and local officials for the cities we visited recognized and were attempting to address inadequacies in their surveillance systems and laboratory facilities. Local officials were concerned that their surveillance systems were inadequate to detect a bioterrorist event, and all of the states we visited were making efforts to improve their disease surveillance systems. Six of the cities we visited used a passive surveillance system to detect infectious disease outbreaks. However, passive systems may be inadequate to identify a rapidly spreading outbreak in its earliest and most manageable stage because, as officials in three states noted, there is chronic underreporting and a time lag between diagnosis of a condition and the health department’s receipt of the report. To improve disease surveillance, six of the states and two of the cities we visited were developing surveillance systems using electronic databases. Several cities were also evaluating the use of nontraditional data sources, such as pharmacy sales, to conduct surveillance. Three of the cities we visited were attempting to improve their surveillance capabilities by incorporating active surveillance components into their systems. However, work to improve surveillance systems has proved challenging. For example, despite initiatives to develop active surveillance systems, the officials in one city considered event detection to be a weakness in their system, in part because they did not have authority to access hospital information systems. In addition, various local public health officials in other cities reported that they lacked the resources to sustain active surveillance. Officials from all of the states we visited reported problems with their public health laboratory systems and said that they needed to be upgraded. All states were planning to purchase the equipment necessary for rapidly identifying a biological agent. State and local officials in most of the areas that we visited told us that the public health laboratory systems in their states were stressed, in some cases severely, by the sudden and significant increases in workload during the anthrax incidents in the fall of 2001. 
During these incidents, the demand for laboratory testing was significant even in states where no anthrax was found and affected the ability of the laboratories to perform their routine public health functions. Following the incidents, over 70,000 suspected anthrax samples were tested in laboratories across the country. Officials in the states we visited were working on other solutions to their laboratory problems. States were examining various ways to manage peak loads, including entering into agreements with other states to provide surge capacity, incorporating clinical laboratories into cooperative laboratory systems, and purchasing new equipment. One state was working to alleviate its laboratory problems by upgrading two local public health laboratories to enable them to process samples of more dangerous pathogens and by establishing agreements with other states to provide backup capacity. Another state reported that it was using the funding from CDC to increase the number of pathogens the state laboratory could diagnose. The state also reported that it had worked to identify laboratories in adjacent states that are capable of being reached within 3 hours over surface roads. In addition, all of the states reported that their laboratory response plans had been revised to cover reporting and sharing laboratory results with local public health and law enforcement agencies. At the time of our site visits, shortages in personnel existed in state and local public health departments and laboratories and were difficult to remedy. Officials from state and local health departments told us that staffing shortages were a major concern. Two of the states and cities that we visited were particularly concerned that they did not have enough epidemiologists to do the appropriate investigations in an emergency. One state department of public health we visited had lost approximately one-third of its staff because of budget cuts over the past decade. This department had been attempting to hire more epidemiologists. Barriers to finding and hiring epidemiologists included noncompetitive salaries and a general shortage of people with the necessary skills. Shortages in laboratory personnel were also cited. Officials in one city noted that they had difficulty filling and maintaining laboratory positions. People who accepted the positions often left the health department for better-paying positions. Increased funding for hiring staff cannot necessarily solve these shortages in the near term because for many types of laboratory positions there are not enough trained individuals in the workforce. According to the Association of Public Health Laboratories, training laboratory personnel to provide them with the necessary skills will take time and require a strategy for building the needed workforce. We found that the overall level of public health preparedness varied by city. In the cities we visited, we observed that those cities that had recurring experience with public health emergencies, including those resulting from natural disasters, or with preparation for National Security Special Events, such as political conventions, were generally more prepared than cities with little or no such experience. Cities that had dealt with multiple public health emergencies in the past might have been further along because they had learned which organizations and officials needed to be involved in preparedness and response efforts and had moved to include all pertinent parties in those efforts.
Experience with natural disasters raised the awareness of local officials regarding the level of public health emergency preparedness in their cities and the kinds of preparedness problems they needed to address. Even the cities that were better prepared were not strong in all elements. For example, one city reported that communications had been effective during public health emergencies and that the city had an active disease surveillance system. However, officials reported gaps in laboratory capacity. Another one of the better-prepared cities was connected to HAN and the Epidemic Information Exchange (Epi-X), and all county emergency management agencies in the state were linked. However, the state did not have written agreements with its neighboring states for responding to a public health emergency. Response organization officials were concerned about a lack of planning for coordinating the public health response to an infectious disease outbreak across state lines. As called for by the guidance for the CDC and HRSA funding, all of the states we visited organized their planning on the basis of regions within their states, assigning local areas to particular regions for planning purposes. However, this planning generally did not extend to regional coordination between states, which remained a concern for response organization officials. A hospital official in one city we visited said that state lines presented a "real wall" for planning purposes. Hospital officials in one state reported that they had no agreements with other states to share physicians. However, one local official reported that he had been discussing these issues and had drafted mutual aid agreements for hospitals and emergency medical services. Public health officials from several states reported developing working relationships with officials from other states to provide backup laboratory capacity. States have begun planning for use of the Strategic National Stockpile. To determine eligibility for the CDC funding, applicants were required to develop interim plans to receive and manage items from the stockpile, including mass distribution of antibiotics, vaccines, and medical materiel. However, having plans for the acceptance of the deliveries from the stockpile is not enough. Plans have to include details about dividing the materials that are delivered in large pallets and distributing the medications and vaccines. Of the seven states we visited, five had completed plans for the receipt and distribution of items from the stockpile. One state that was working on its plan stated that it would be completed in January 2003. Only one state had conducted exercises of its stockpile distribution plan, while the other states were planning to conduct exercises or drills of their plans sometime in 2003. In addition, five states reported on their plans for mass vaccinations and seven states reported on their plans for large-scale administration of smallpox vaccine in response to an outbreak. Some states we visited had completed plans for mass vaccinations, whereas other states were still developing their plans. The mass vaccination plans were generally closely tied to the plans for receiving and administering the stockpile. In addition, two states had completed smallpox response plans, which included plans for administering mass smallpox vaccinations to the general population, whereas four of the other states were drafting plans. The remaining state was discussing such a plan. 
However, only one of the states we visited had tested its plan for conducting mass smallpox vaccinations in an exercise. We found that most hospitals lack the capacity to respond to large-scale infectious disease outbreaks. Persons with symptoms of infectious disease would potentially go to emergency departments for treatment. Most emergency departments across the country have experienced some degree of crowding and therefore in some cases may not be able to handle a large influx of patients during a potential SARS outbreak. In addition, although most hospitals across the country reported participating in basic planning activities for large-scale infectious disease outbreaks, few have acquired the medical equipment resources, such as ventilators, to handle large increases in the number of patients that may result from outbreaks of diseases such as SARS. Our survey found that most emergency departments have experienced some degree of overcrowding. Persons with symptoms of infectious disease would potentially go to emergency departments for treatment, further stressing these facilities. The problem of overcrowding is much more pronounced in some hospitals and areas than in others. In general, hospitals that reported the most problems with crowding were in the largest metropolitan statistical areas (MSA) and in the MSAs with high population growth. For example, in fiscal year 2001, hospitals in MSAs with populations of 2.5 million or more had about 162 hours of diversion (an indicator of crowding), compared with about 9 hours for hospitals in MSAs with populations of less than 1 million. Also, the median number of hours of diversion in fiscal year 2001 for hospitals in MSAs with a high percentage population growth was about five times that for hospitals in MSAs with lower percentage population growth. Diversion varies greatly by MSA. Figure 1 shows each MSA and the share of hospitals within the MSA that reported being on diversion more than 10 percent of the time—or about 2.4 hours or more per day—in fiscal year 2001. Areas with the greatest diversion included Southern California and parts of the Northeast. Of the 248 MSAs for which data were available, 171 (69 percent) had no hospitals reporting being on diversion more than 10 percent of the time. By contrast, 53 MSAs (21 percent) had at least one-quarter of responding hospitals on diversion for more than 10 percent of the time. Hospitals in the largest MSAs and in MSAs with high population growth that have reported crowding in emergency departments may have difficulty handling a large influx of patients during a potential SARS outbreak, especially if such an outbreak occurred in the winter months, when the incidence of influenza is high. Thus far, the largest SARS outbreaks worldwide have primarily occurred in areas with dense populations. At the time of our site visits, we found that hospitals were beginning to coordinate with other local response organizations and collaborate with each other in local planning efforts. Hospital officials in one city we visited told us that until September 11, 2001, hospitals were not seen as part of a response to a terrorist event but that city officials had come to realize that the first responders to a bioterrorism incident could be a hospital's medical staff. Officials from the state began to emphasize the need for a local approach to hospital preparedness. 
They said, however, that it was difficult to impress the importance of cooperation on hospitals because hospitals had not seen themselves as part of a local response system. The local government officials were asking them to create plans that integrated the city’s hospitals and addressed such issues as off-site triage of patients and off-site acute care. In our survey of over 2,000 hospitals, 4 out of 5 hospitals reported having a written emergency response plan for large-scale infectious disease outbreaks. Of the hospitals with emergency response plans, most include a description of how to achieve surge capacity for obtaining additional pharmaceuticals, other supplies, and staff. In addition, almost all hospitals reported participating in community interagency disaster preparedness committees. Our survey showed that hospitals have provided training to staff on biological agents, but fewer than half have participated in exercises related to bioterrorism. Most hospitals we surveyed reported providing training about identifying and diagnosing symptoms for the six biological agents identified by the CDC as most likely to be used in a bioterrorist attack. At least 90 percent of hospitals reported providing training for two of these agents—smallpox and anthrax—and approximately three-fourths of hospitals reported providing training about the other four—plague, botulism, tularemia, and hemorrhagic fever viruses. Most hospitals lack adequate equipment, isolation facilities, and staff to treat a large increase in the number of patients for an infectious disease such as SARS. To prevent transmission of SARS in health care settings, CDC recommends that health care workers use personal protective equipment, including gowns, gloves, respirators, and protective eyewear. SARS patients in the United States are being isolated until they are no longer infectious. CDC estimates that patients require mechanical ventilation in 10 to 20 percent of SARS cases. In the seven cities we visited, hospital, state, and local officials reported that hospitals needed additional equipment and capital improvements— including medical stockpiles, personal protective equipment, quarantine and isolation facilities, and air handling and filtering equipment—to enhance preparedness. Five of the states we visited reported shortages of hospital medical staff, including nurses and physicians, necessary to increase response capacity in an emergency. One of the states we visited reported that only 11 percent of its hospitals could readily increase their capacity for treating patients with infectious diseases requiring isolation, such as smallpox and SARS. Another state reported that most of its hospitals have little or no capacity for isolating patients diagnosed with or being tested for infectious diseases. According to our hospital survey, availability of medical equipment varied greatly between hospitals, and few hospitals seemed to have adequate equipment and supplies to handle a large-scale infectious disease outbreak. While most hospitals had, for every 100 staffed beds, at least 1 ventilator, 1 personal protective equipment suit, or 1 isolation bed, half of the hospitals had, for every 100 staffed beds, fewer than 6 ventilators, 3 or fewer personal protective equipment suits, and fewer than 4 isolation beds. 
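The equipment figures above are rates normalized per 100 staffed beds. As a rough illustration of that arithmetic, the following sketch computes such rates and their medians for a few hypothetical hospitals; the hospital records and counts are assumptions for illustration only, not data from our survey.

```python
# Illustrative sketch: normalizing equipment counts per 100 staffed beds
# and taking medians across hospitals. The hospital records are hypothetical.
from statistics import median

hospitals = [
    {"staffed_beds": 250, "ventilators": 12, "ppe_suits": 5,  "isolation_beds": 8},
    {"staffed_beds": 120, "ventilators": 5,  "ppe_suits": 3,  "isolation_beds": 4},
    {"staffed_beds": 480, "ventilators": 30, "ppe_suits": 20, "isolation_beds": 22},
]

def per_100_beds(count, beds):
    """Rate of a resource per 100 staffed beds."""
    return 100.0 * count / beds

for item in ("ventilators", "ppe_suits", "isolation_beds"):
    rates = [per_100_beds(h[item], h["staffed_beds"]) for h in hospitals]
    print(f"median {item} per 100 staffed beds: {median(rates):.1f}")
```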
The completion of final federal influenza pandemic response plans that address the problems related to the purchase, distribution, and administration of supplies of vaccines and antiviral drugs during a pandemic could facilitate the public health response to emerging infectious disease outbreaks. CDC has provided interim draft guidance to facilitate state plans but has not made the final decisions on plan provisions necessary to mitigate the effects of potential shortages of vaccines and antiviral drugs. Until such decisions are made, the timeliness and adequacy of response efforts may be compromised. In the most recent version of its pandemic influenza planning guidance for states, CDC lists several key federal decisions related to vaccines and antiviral drugs that have not been made. These decisions include determining the amount of vaccines and antiviral drugs that will be purchased at the federal level; the division of responsibility between the public and the private sectors for the purchase, distribution, and administration of vaccines and drugs; and how population groups will be prioritized and targeted to receive limited supplies of vaccines and drugs. In each of these areas, until federal decisions are made, states will not be able to develop strategies consistent with federal action. The interim draft guidance for state pandemic plans says that resources can be expected to be available through federal contracts to purchase influenza vaccine and some antiviral agents, but some state funding may be required. The amounts of antiviral drugs to be purchased and stockpiled are yet to be determined, even though these drugs are available and can potentially be used for both treatment and prevention during a pandemic. CDC has indicated in its interim draft guidance that the policies for purchasing, distributing, and administering vaccines and drugs by the private and public sectors will change during a pandemic, but some decisions necessary to prepare for these expected changes have not been made. During a typical annual influenza response, influenza vaccine and antiviral drug distribution is primarily handled directly by manufacturers through private vendors and pharmacies to health care providers. During a pandemic, however, CDC interim draft guidance indicates that many of these private-sector responsibilities may be transferred to the public sector at the federal, state, or local levels and that priority groups within the population would need to be established for receiving limited supplies of vaccines and drugs. State officials are particularly concerned that a national plan has not been issued with final recommendations for how population groups should be prioritized to receive vaccines and antiviral drugs. In its interim draft guidance, CDC lists eight population groups that should be considered in establishing priorities among groups for receiving vaccines and drugs during a pandemic. The list includes such groups as health care workers and public health personnel involved in the pandemic response, persons traditionally considered to be at increased risk of severe influenza illness and mortality, and preschool and school-aged children. Although state officials acknowledge the need for flexibility in planning because many aspects of a pandemic cannot be known in advance, the absence of more detail leaves them uncertain about how to plan for the use of limited supplies of vaccine and drugs. 
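To make concrete what prioritizing population groups for limited supplies would involve operationally, the sketch below fills hypothetical priority groups in order until a fixed supply of vaccine courses runs out. The group names, sizes, ordering, and supply level are illustrative assumptions only; they are not CDC's priorities or figures.

```python
# Illustrative sketch: allocating a limited vaccine supply across ranked
# priority groups. Group names, sizes, and ranking are hypothetical.
def allocate(supply, priority_groups):
    """Fill groups in priority order until the supply runs out."""
    allocation = {}
    remaining = supply
    for name, size in priority_groups:
        courses = min(size, remaining)
        allocation[name] = courses
        remaining -= courses
    return allocation, remaining

groups = [
    ("health care and public health workers", 1_200_000),
    ("persons at increased risk of severe illness", 3_500_000),
    ("preschool and school-aged children", 5_000_000),
    ("general population", 20_000_000),
]

allocation, leftover = allocate(4_000_000, groups)
for name, courses in allocation.items():
    print(f"{name}: {courses:,} courses")
print(f"unallocated supply: {leftover:,}")
```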
In our 2000 report on the influenza pandemic, we recommended that HHS determine the capability of the private and public sectors to produce, distribute, and administer vaccines and drugs and complete the national response plan. To date, only limited progress has been made in addressing these recommendations. Many actions taken at the state and local level to prepare for a bioterrorist event have enhanced the ability of state and local response agencies and organizations to manage an outbreak of an infectious disease such as SARS. However, there are significant gaps in public health surveillance systems and laboratory capacity, and the number of personnel trained for disease detection is insufficient. Most emergency departments across the country have experienced some degree of overcrowding. Hospitals have begun planning and training efforts to respond to large-scale infectious disease outbreaks, but many hospitals lack adequate equipment, medical stockpiles, personal protective equipment, and quarantine and isolation facilities. Federal and state plans for the purchase, distribution, and administration of supplies of vaccines and drugs in response to an influenza pandemic have still not been finalized. The lack of these final plans has serious implications for efforts to mobilize the distribution of vaccines and drugs for other infectious disease outbreaks. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-7119. Robert Copeland, Marcia Crosse, Martin T. Gahart, Deborah Miller, Roseanne Price, and Ann Tynan also made key contributions to this statement. Smallpox Vaccination: Implementation of National Program Faces Challenges. GAO-03-578. Washington, D.C.: April 30, 2003. Infectious Disease Outbreaks: Bioterrorism Preparedness Efforts Have Improved Public Health Response Capacity, but Gaps Remain. GAO-03-654T. Washington, D.C.: April 9, 2003. Bioterrorism: Preparedness Varied across State and Local Jurisdictions. GAO-03-373. Washington, D.C.: April 7, 2003. Hospital Emergency Departments: Crowded Conditions Vary among Hospitals and Communities. GAO-03-460. Washington, D.C.: March 14, 2003. Homeland Security: New Department Could Improve Coordination but Transferring Control of Certain Public Health Programs Raises Concerns. GAO-02-954T. Washington, D.C.: July 16, 2002. Homeland Security: New Department Could Improve Biomedical R&D Coordination but May Disrupt Dual-Purpose Efforts. GAO-02-924T. Washington, D.C.: July 9, 2002. Homeland Security: New Department Could Improve Coordination but May Complicate Priority Setting. GAO-02-893T. Washington, D.C.: June 28, 2002. Homeland Security: New Department Could Improve Coordination but May Complicate Public Health Priority Setting. GAO-02-883T. Washington, D.C.: June 25, 2002. Bioterrorism: The Centers for Disease Control and Prevention's Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001. Bioterrorism: Review of Public Health Preparedness Programs. GAO-02-149T. Washington, D.C.: October 10, 2001. Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 9, 2001. Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001. Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001. 
West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000. Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 14, 1999. Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999.
SARS has infected relatively few people nationwide, but it has raised concerns about preparedness for large-scale infectious disease outbreaks. The initial response to an outbreak occurs in local agencies and hospitals, with support from state and federal agencies, and can involve disease surveillance, epidemiologic investigation, health care delivery, and quarantine management. Officials have learned lessons applicable to preparedness for such outbreaks from experiences with other major public health threats. GAO was asked to examine the preparedness of state and local public health agencies and hospitals for responding to a large-scale infectious disease outbreak and the relationship of federal and state planning for an influenza pandemic to preparedness for emerging infectious diseases. This testimony is based on Bioterrorism: Preparedness Varied across State and Local Jurisdictions, GAO-03-373 (Apr. 7, 2003); findings from a GAO survey on hospital emergency room capacity (in Hospital Emergency Departments: Crowded Conditions Vary among Hospitals and Communities, GAO-03-460 (Mar. 14, 2003)) and on hospital emergency preparedness; and information updating Influenza Pandemic: Plan Needed for Federal and State Response, GAO-01-4 (Oct. 27, 2000). The efforts of public health agencies and health care organizations to increase their preparedness for major public health threats such as bioterrorism and the worldwide influenza outbreaks known as pandemics have improved the nation's capacity to respond to SARS and other emerging infectious disease outbreaks, but gaps in preparedness remain. Specifically, GAO found that there are gaps in disease surveillance systems and laboratory facilities and that there are workforce shortages. The level of preparedness varied across seven cities GAO visited, with jurisdictions that have had multiple prior experiences with public health emergencies being generally more prepared than others. GAO found that planning for regional coordination was lacking between states. GAO also found that states were developing plans for receiving and distributing medical supplies for emergencies and for mass vaccinations in the event of a public health emergency. GAO found that most hospitals lack the capacity to respond to large-scale infectious disease outbreaks. Most emergency departments have experienced some degree of crowding and therefore in some cases may not be able to handle a large influx of patients during a potential SARS or other infectious disease outbreak. Most hospitals across the country reported participating in basic planning activities for such outbreaks. However, few hospitals have adequate medical equipment, such as the ventilators that are often needed for respiratory infections such as SARS, to handle the large increases in the number of patients that may result. The public health response to outbreaks of emerging infectious diseases such as SARS could be improved by the completion of federal and state influenza pandemic response plans that address problems related to the purchase, distribution, and administration of supplies of vaccines and antiviral drugs during an outbreak. The Centers for Disease Control and Prevention has provided interim draft guidance to facilitate state plans but has not made the final decisions on plan provisions necessary to mitigate the effects of potential shortages of vaccines and antiviral drugs in the event of an influenza pandemic.
Most Americans obtain their health insurance coverage through the workplace. Employers typically offer health insurance coverage for employees on an annual basis through one or more health plans. Each plan year, employers can decide how many health plans to offer, whether to include coverage for MH/SU in the health plans offered, and what type of benefits those plans can include as part of their coverage. Additionally, employers may determine if their plans' MH/SU benefits will be managed by the same health insurer that manages their medical/surgical benefits, or if they will be managed by a separate organization that specializes in MH/SU benefits—known as a managed behavioral health organization (MBHO). Within the coverage of MH/SU that employers may offer, the types of MH/SU treatment services and the settings in which MH/SU treatment services are provided vary widely, so that a patient may receive care appropriate to the severity of the symptoms. Types of MH/SU services can include counseling, case management, partial hospitalization, inpatient treatment, vocational rehabilitation, and a variety of residential programs. MH/SU treatment may also include prescription drugs. In addition, patients with acute symptoms may be treated by personnel in emergency rooms and hospital units, and by MH/SU crisis and outreach specialists. Patients with more subacute symptoms are treated by personnel in hospitals, day treatment programs, mental health center programs, and by different types of individual practitioners. Patients with long-term symptoms are often treated in mental health centers, residential units, and practitioners' offices. Health plans commonly use cost-sharing features such as deductibles, copayments, and coinsurance. Coinsurance is a percentage payment made by enrollees after the deductible is met and until the out-of-pocket expense maximum is reached—that is, the maximum amount that enrollees have to pay per year for all covered medical expenses. Prior to the implementation of MHPAEA, private health insurance plans offered through employers that covered MH/SU typically provided lower levels of coverage for the treatment of these illnesses than for the treatment of physical illnesses. Employers often limited the coverage of MH/SU through the use of plan design features that were more restrictive for MH/SU benefits than for medical/surgical benefits. Prior to MHPAEA, MH/SU benefits were commonly subject to higher cost-sharing features such as deductibles, copayments, or coinsurance; more restrictive treatment limitations such as the number of covered hospital days or outpatient office visits; and limited out-of-network providers. Also, there were concerns that employers would limit the MH/SU treatment enrollees could receive by excluding specific MH/SU diagnoses, such as eating disorders, from their benefits. For example, prior to MHPAEA, an employer's plan could cover unlimited hospital days and outpatient office visits and require 20 percent coinsurance for outpatient office visits for medical/surgical treatment while, for MH/SU, that same plan could cover only 30 hospital days and 20 outpatient office visits per year and impose 50 percent coinsurance for outpatient office visits. Additionally, an employer's plan might limit the MH/SU diagnoses for which treatment was covered. 
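As a rough illustration of how a deductible, coinsurance, and an out-of-pocket expense maximum combine to determine what an enrollee pays, the sketch below compares the 20 percent and 50 percent outpatient coinsurance rates from the example above under one hypothetical set of charges. The deductible, per-visit charge, visit count, and out-of-pocket maximum are assumptions for illustration only, and the model simplifies how real plans accumulate these amounts.

```python
# Illustrative sketch: enrollee out-of-pocket cost for outpatient visits under a
# deductible, a coinsurance rate, and an out-of-pocket maximum.
# All dollar amounts and visit counts are hypothetical.
def enrollee_cost(num_visits, charge_per_visit, deductible, coinsurance, oop_max):
    """Total enrollee payments across visits, capped at the out-of-pocket maximum."""
    paid = 0.0
    deductible_left = deductible
    for _ in range(num_visits):
        # Enrollee pays the full charge until the deductible is met.
        from_deductible = min(charge_per_visit, deductible_left)
        deductible_left -= from_deductible
        # After the deductible, the enrollee pays the coinsurance percentage.
        share = from_deductible + coinsurance * (charge_per_visit - from_deductible)
        paid = min(paid + share, oop_max)  # cumulative payments capped at the maximum
    return paid

# Medical/surgical design: 20 percent coinsurance on outpatient office visits.
print(enrollee_cost(20, 150, deductible=250, coinsurance=0.20, oop_max=3000))  # 800.0
# Pre-parity MH/SU design: 50 percent coinsurance on outpatient office visits.
print(enrollee_cost(20, 150, deductible=250, coinsurance=0.50, oop_max=3000))  # 1625.0
```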
Employers provided more limited coverage of MH/SU prior to MHPAEA primarily because of concerns about the cost of providing coverage for individuals with MH/SU. Concerns about the high costs associated with long-term, intensive psychotherapy and extended hospital stays, particularly for some diagnoses such as schizophrenia or major depression, could have prompted employers to impose treatment limitations on outpatient office visits and hospital days, and limits on annual or lifetime dollar amounts for treatment of MH/SU. To help address the discrepancies in health care coverage between mental illnesses and physical illnesses, Congress passed MHPAEA, which strengthened federal parity requirements. MHPAEA generally requires that coverage terms for MH/SU—when those services are offered—be no more restrictive than coverage terms for medical/surgical services. Under MHPAEA, employers are not required to offer MH/SU coverage. However, those plans that do offer mental health or substance use disorder coverage were required to comply with MHPAEA's parity requirements for their health plan year that began on or after October 3, 2009. States may also pass laws requiring that mental health coverage sold in the state be offered on par with medical/surgical coverage, and these requirements may be more stringent than those required by federal law. According to the National Conference of State Legislatures, state parity laws regulating mental health coverage have been passed in 49 states and the District of Columbia as of May 2011. See National Conference of State Legislatures, "State Laws Mandating or Regulating Mental Health Benefits" (Washington, D.C.: May 2011), accessed June 13, 2011, http://www.ncsl.org/default.aspx?tabid=14352. The interim final regulations (IFR) implementing MHPAEA set out additional requirements for coverage of MH/SU, including classifications of benefits and nonquantitative treatment limitations (NQTL). The IFR specifies six classifications of benefits within which parity must be applied: (1) inpatient, in-network; (2) inpatient, out-of-network; (3) outpatient, in-network; (4) outpatient, out-of-network; (5) emergency care; and (6) prescription drugs. The IFR further specifies that plans choosing to cover MH/SU benefits must offer the MH/SU benefits within any one classification when medical/surgical benefits are offered at that same classification. Thus, for plans that cover MH/SU benefits, if medical/surgical services are covered for inpatient, out-of-network care, the plan must also cover MH/SU services for inpatient, out-of-network care. An NQTL is a treatment limitation that is not expressed numerically but still limits the scope or duration of benefits for treatment under a health plan. Examples of NQTLs, some of which are noted in the IFR, include standards for provider admission to participate in a network; plan methods for determining usual, customary, and reasonable charges; pre-authorization of services; and utilization review. The IFR stipulates that employers must ensure that NQTLs are comparable across benefit classifications. Generally, if an NQTL is used for MH/SU services within a classification, it is to be applied no more stringently than an NQTL for medical/surgical services within that same classification. Most employers that responded to our survey continued to offer coverage of MH/SU through private insurance plans following the implementation of MHPAEA. The types of diagnoses and treatments included in employers' MH/SU benefits remained largely unchanged, and some employers enhanced their MH/SU benefits by removing coverage limits as a result of MHPAEA requirements. 
After the issuance of the final regulations implementing MHPAEA, employers may make additional changes to their MH/SU benefits. Most employers that responded to our survey offered coverage of MH/SU both in their most current plan year—2011 or 2010—and in 2008, before MHPAEA was passed. Of the employers that responded to our survey about their coverage of MH/SU for both plan years, about 96 percent offered coverage for MH/SU for the current plan year and for 2008. Approximately 2 percent of employers reported that they offered coverage for mental health conditions but not substance use disorders in 2008, and continued to offer coverage for only mental health conditions in the current plan year. Conversely, a small percentage of employers—about 2 percent of those employers that responded to our survey about their coverage of MH/SU for both plan years—reported discontinuing coverage of both MH/SU, or of substance use disorders only, in the current plan year. One employer that discontinued offering coverage of mental health reported that it did so to control health insurance costs. Another employer reported that it ceased to offer coverage of substance use disorders because it did not want to cover these disorders without treatment limitations. Under MHPAEA, if substance use disorders are covered, any treatment limitations for the substance use benefits must be used on par with those used in medical/surgical benefits. Published employer surveys also reported that few employers discontinued coverage of MH/SU since MHPAEA was passed. According to Kaiser/HRET's Employer Health Benefits 2010 Annual Survey, less than 2 percent of employers reported eliminating coverage for MH/SU as a result of MHPAEA. Mercer reported in its National Survey of Employer-Sponsored Health Plans that the percentage of employers surveyed that reported offering coverage for MH/SU was consistent from 2008 to 2010. Specifically, about 90 percent of employers surveyed in 2008 and 92 percent of employers surveyed in 2010 reported offering coverage for MH/SU. According to both Mercer's 2008 survey and 2010 survey, offering coverage of MH/SU was most common among employers with 500 or more employees, at about 97 percent. Additionally, about 90 percent of employers with fewer than 500 employees surveyed in 2008 and 92 percent of employers with fewer than 500 employees surveyed in 2010 indicated that they offered coverage for MH/SU. Agency officials also told us that based on their review of trend data and information on employers' coverage of MH/SU, employers appeared to continue to offer coverage of MH/SU since MHPAEA was passed. In addition, representatives from large insurance companies, a health benefits consulting firm, and an MBHO told us that most employers with whom they interact continued to offer coverage of MH/SU since MHPAEA was passed. According to other health benefits experts, most employers they knew of generally experienced minimal challenges in complying with the MHPAEA requirements. Representatives from medium, large, and very large employers with whom we spoke told us that the process for making changes to their health plans to comply with MHPAEA was relatively easy for them because they relied on their insurance brokers or health benefits consultants to inform them of the requirements and assist them in making necessary changes. Employers have not substantially changed the diagnoses and treatments that are included in their MH/SU benefits. 
However, fewer employers reported excluding at least one broad MH/SU diagnosis and more employers reported excluding a treatment related to MH/SU in the current plan year than for 2008. Some employers enhanced their MH/SU benefits by removing coverage limits and modifying cost-sharing for MH/SU in response to MHPAEA requirements. The types of MH/SU diagnoses included in and excluded from employers' MH/SU benefits remained consistent between the current plan year and 2008. About 91 percent of employers that responded to the question in our survey about the diagnoses included in their MH/SU benefits for both the current plan year and 2008 plan year reported their MH/SU benefits included the same broad diagnoses in their most popular health plan in the current plan year and in 2008. The other 9 percent of employers reported including more broad diagnoses in their MH/SU benefits for the current plan year than in the 2008 plan year. Most employers that provided information about diagnoses included in MH/SU benefits for both years reported that they included all types of broad mental health diagnoses in their MH/SU benefits for both plan years. Five of these broad diagnoses were covered by over 90 percent of employers for both the current plan year and 2008—mental disorders due to a general medical condition, substance-related disorders, schizophrenia and other psychotic disorders, mood disorders, and anxiety disorders (see fig. 1). Of the employers that responded to our survey question about the diagnoses included in their MH/SU benefits for both the current plan year and 2008 plan year, 34 percent reported that their most popular plan in their current plan year excluded at least one broad MH/SU diagnosis from their benefits, and 39 percent reported this for the 2008 plan year. Approximately 9 percent of employers that answered detailed benefits questions in our survey reported that their most popular plan for the current plan year excluded at least one specific mental health diagnosis subcategory within a broader mental health diagnosis and 2 percent excluded at least one specific substance use disorder subcategory. Similarly, approximately 10 percent reported excluding at least one specific mental health diagnosis subcategory and 2 percent excluded at least one specific substance use disorder subcategory for the 2008 plan year. Examples of specific diagnosis subcategories excluded by our survey respondents included developmental disorders, learning disorders, mental retardation, sexual deviation and dysfunction, and relational disorders, such as marriage or family problems. Similarly, according to Mercer's 2010 National Survey of Employer-Sponsored Health Plans, 1 percent of employers with 500 or more employees and less than 1 percent of employers with fewer than 500 employees reported excluding additional diagnoses from their MH/SU benefits as a result of MHPAEA. Representatives from a large health insurer, a health benefits consulting firm, an insurance broker organization, and an advocacy group also reported that employers with whom they interact generally included the same number and type of diagnoses in their MH/SU benefits for the current plan year as they did prior to MHPAEA's implementation. In addition to exclusions of diagnoses, some employers also choose to exclude specific treatments from their MH/SU benefits. 
Of the employers that responded to the question in our survey about excluding a specific treatment for MH/SU, approximately 41 percent reported excluding a specific treatment for MH/SU from their most popular health plan in the current plan year, while 33 percent reported doing so for their most popular health plan in the 2008 plan year. According to representatives from an advocacy organization and an institution that conducts employer-based surveys on health insurance coverage, some employers choose to exclude specific treatments related to certain MH/SU diagnoses from their MH/SU benefits rather than exclude the diagnosis itself. For example, representatives from an MBHO, a health benefits consulting firm, and an institution that conducts employer-based surveys on health insurance coverage told us that employers may exclude the treatment of "applied behavioral analysis" for autism, citing concerns about the treatment's effectiveness, rather than excluding coverage for autism. The most common change to MH/SU benefits reported among those that responded to our survey was enhancing benefits through the removal of treatment limitations, such as the number of allowed office visits or inpatient days. About 7 percent of employers that answered detailed benefits questions in our survey reported limits on the number of allowed office visits for mental health conditions in the current plan year, compared to 35 percent in 2008; and 9 percent reported limits on the number of allowed inpatient days for treatment of mental health conditions, compared to 29 percent in 2008. Similarly, 8 percent of employers that answered detailed benefits questions in our survey reported limits on the number of allowed office visits for substance use disorders, compared to 33 percent in 2008; and 8 percent reported limits on the number of allowed inpatient days for treatment of substance use disorders, compared to 27 percent in 2008 (see fig. 2). Reported use of lifetime dollar limits on MH/SU treatments also declined from 2008 to the current plan year. About 5 percent of employers that answered detailed benefits questions in our survey reported lifetime dollar limits on treatments for MH/SU for the current plan year, compared to 20 percent in 2008. Employers that reported lifetime dollar limits on mental health treatments for the current plan year generally told us that these limits applied to all treatments for MH/SU or that they applied to all treatments covered by the plan—including both MH/SU and medical/surgical. Kaiser/HRET's Employer Health Benefits 2010 Annual Survey reported that of the 31 percent of employers surveyed that made changes in their mental health benefits as a result of MHPAEA, two-thirds of these employers reported eliminating coverage limits on mental health treatments, the most common change made by employers. Mercer's 2010 National Survey of Employer-Sponsored Health Plans also found that the elimination of treatment limitations and annual or lifetime dollar limits were common changes made by employers, reporting that 35 percent of employers with 500 or more employees and 15 percent of employers surveyed with fewer than 500 employees removed limits on the number of allowed office visits or dollar limits in response to parity requirements. 
Several experts with whom we spoke told us that it was common for employers to eliminate treatment limitations and annual or lifetime dollar limits for MH/SU in response to parity requirements. For example, representatives from an insurance broker organization and a trade association told us that employers with which they interacted removed limits on the number of allowed office visits for mental health conditions from their plans. A representative from a large insurance company told us that the employers with whom they work removed all limits on the number of allowed inpatient hospital days from plans to which MHPAEA applies, and a representative from an insurance broker organization also reported that employers with whom they consulted removed lifetime dollar limits on substance use disorders from their plans. Among employers who reported information on cost-sharing, copayments and coinsurance amounts for office visits with in-network providers generally stayed about the same, fluctuating minimally from 2008 to the current plan year, while copayments and coinsurance amounts for outpatient services with in-network providers decreased slightly from 2008 to the current plan year (see table 1). Mercer's 2010 National Survey of Employer-Sponsored Health Plans found that 3 percent of employers surveyed decreased their cost-sharing requirements for MH/SU in response to MHPAEA, and larger employers were more likely to change their cost-sharing requirements than smaller employers. Specifically, according to Mercer, 20 percent of employers with 20,000 or more employees and 6 percent of employers with 500 to 999 employees reported decreasing their MH/SU copayments or coinsurance to comply with MHPAEA. Employers may continue to modify certain nonfinancial requirements—such as changes to the services they cover (the scope of services) and NQTLs—in their MH/SU benefits in response to the agencies' issuance of final implementing regulations for MHPAEA. Agency officials reported that the final regulations may provide additional detail on the required scope of services and on using NQTLs. The IFR does not specifically address the scope of services offered within each classification of benefits, and agency officials recognize that achieving parity in coverage is complicated by the fact that not all treatments or treatment settings for MH/SU correspond well to those for medical/surgical care. Some commenters requested clarification about whether an employer would be required to cover a particular treatment or treatment setting for a mental health condition or substance use disorder that is otherwise covered in a plan, if benefits for the treatment or treatment settings are not provided for medical/surgical conditions—for example, counseling, an outpatient service used for treatment of MH/SU but not medical/surgical conditions. As part of their issuance of the IFR, the agencies requested public comments on whether, and to what extent, the final regulations should address the scope of services provided by a group health plan or health insurance coverage. Agency officials from HHS's Office of the Assistant Secretary for Planning and Evaluation (ASPE) and DOL are conducting research on the costs to employers that are associated with scope of services for MH/SU and intend to use the results to inform potential final regulations on the issue. Experts reported that some employers are unclear what types of services for MH/SU they must offer within the IFR's six classifications to be in compliance with MHPAEA and its implementing regulations. 
These employers may modify their MH/SU benefits in response to the final regulations. As part of the process of developing final regulations, DOL, HHS, and Treasury are researching NQTLs for MH/SU, including convening a panel of experts to discuss how health plans use NQTLs—for example, use of pre-authorization for MH/SU benefits within certain classifications, as compared to use of pre-authorization for medical/surgical benefits within the same classification. The agencies may use this research to provide more detailed guidelines on how NQTLs for MH/SU services can be used on par with NQTLs used for medical/surgical services. Currently, the IFR does not specify the steps employers can take to achieve parity with NQTLs across classifications for coverage of MH/SU and medical/surgical services. For example, the IFR generally requires that any processes or other factors used in applying the NQTLs should be "comparable to" and used "no more stringently" for MH/SU benefits in a certain classification than they are for medical/surgical benefits at that same classification, but these qualitative terms may be interpreted or applied inconsistently by employers. A representative from an MBHO told us that the IFR requirements for NQTLs could be interpreted in different ways, and the MBHO has seen variation in how employers are applying NQTLs in their plans. Representatives from an advocacy group reported that, in some cases, employers appear to be applying NQTLs more stringently to MH/SU benefits than to medical/surgical benefits. For example, according to the advocacy group, some plans require pre-authorization for inpatient care for MH/SU services for every 2-day period the care is expected to be given, but require pre-authorization for inpatient services for medical/surgical benefits less frequently. Requiring more frequent pre-authorization can affect use of services: according to a study on the impact of pre-authorization on the use of mental health services, when enrollees must obtain pre-authorization more frequently for outpatient mental health treatments, they are more likely to terminate treatment early. See X. Liu, et al., "The Impact of Prior Authorization on Outpatient Utilization in Managed Behavioral Health Plans," Medical Care Research and Review, vol. 57, no. 2 (2000). The final regulations, which will be informed by the agencies' findings, may result in employers further modifying their use of NQTLs in their benefit packages in order to comply with any new or modified requirements. Research indicates that enhanced coverage for MH/SU has generally led to reduced enrollee expenditures. Research also indicates that health insurance coverage for MH/SU has had mixed effects on access to, and use of, MH/SU services. In addition, little research has explored the effect of health insurance coverage for MH/SU on health status. Of the nine studies we reviewed that focused on the effect of health insurance coverage for MH/SU on enrollee expenditures, six studies generally found that the implementation of parity requirements led to reduced enrollee expenditures. Specifically, four of the nine studies examined mental health parity requirements in the Federal Employees Health Benefits Program (FEHBP) and found that implementing parity resulted in reductions in enrollee out-of-pocket costs. 
For example, one of these studies compared specific MH/SU benefits offered in FEHBP plans before and after the implementation of parity, and found that copayments and coinsurance for MH/SU services decreased by 50 percent or more after parity was implemented. Two of the nine studies examined the impact of state parity laws on expenditures and found that parity generally reduced enrollee expenditures. For example, one of these studies found that families with children in need of mental health services in parity states were more likely to have lower annual out-of-pocket costs than families with children in need of mental health services in nonparity states. Three of the nine studies examined other aspects of how health insurance coverage for MH/SU may impact enrollee expenditures that were unique to the scenarios or targeted populations studied. For example, one study examined differences in out-of-pocket spending among various populations and found that among individuals who use mental health services, out-of-pocket expenses were highest for those who were uninsured or enrolled in Medicare, compared with those who had private health insurance or were enrolled in Medicaid. Available research on access to, and use of, MH/SU services, as affected by health insurance coverage, was mixed. Of the 30 studies we reviewed on these topics, 17 studies found health insurance coverage for MH/SU— or enhanced insurance coverage through parity requirements—had some effect on access to, or use of, MH/SU services, whereas 13 studies found little to no effect. Of the 17 studies finding some effect of health insurance coverage on access to, or use of, MH/SU services: Six studies looked at a specific aspect of health insurance coverage— cost-sharing requirements, pre-authorization requirements, or the way MH/SU benefits are structured—and found that restricting coverage had a negative effect on enrollees’ use of services. Specifically, one study found that as cost-sharing increased among privately insured patients, the rate of substance use disorder treatment decreased. Another study found that when health plans increased the number of treatment sessions approved at a time, patients were less likely to prematurely terminate treatment. A third study found that as private health plans increased the use of managed care mechanisms, such as utilization review and prior authorization, children decreased their use of MH/SU services. Five studies indicated that plans with more comprehensive coverage were associated with a positive effect on access to, or use of, MH/SU services. For example, one study examined a large U.S.-based company that reduced copayments and made efforts to destigmatize mental illness, and found that the benefit design change led to an 18 percent increase in the probability of enrollees initiating mental health treatment. Four studies examined the effect of state parity requirements and, as a group, found a mixed effect on enrollees’ access to, or use of, MH/SU services. 
For example, one of these studies examined the effect of a state parity requirement within the first 3 years following its implementation and found that parity resulted in increased access to, and use of, mental health services; however, it also resulted in reduced access to substance use disorder services. Another study found that state parity requirements increased access to, or use of, MH/SU services for individuals with mild to moderate mental health needs, but that state parity requirements had no effect on access to, or use of, MH/SU services for individuals with severe mental health needs. The remaining two studies found that state parity requirements increased access to, or use of, MH/SU services. Two studies found that being uninsured or having a certain type of insurance was associated with lower access to MH/SU services. For example, one study assessed the extent to which psychiatrists were accepting new patients with different types of insurance—Medicaid, Medicare, and private insurance—and with different types of care plans. This study found that psychiatrists were less likely to accept new patients in managed care plans and Medicaid than patients in nonmanaged private insurance plans and Medicare, indicating that the type of coverage patients have may affect their access to available providers. In contrast, 13 of the 30 studies we reviewed found little to no effect: Three studies examined the effect of mental health parity requirements in the FEHBP and found that enhanced coverage did not increase access to, or use of, MH/SU services. Six studies examined the effect of state mental health parity requirements on access to, or use of, MH/SU services and found little to no effect. One of these studies found a difference in the effect of state mental health parity requirements by employer size. Specifically, after implementation of state mental health parity requirements, enrollees from smaller employers—those with 50 to 100 employees—increased their use of mental health services, while there was little or no effect on the use of mental health services for enrollees from larger employers—those with 100 or more employees. Four studies focused on the effect of health insurance coverage on access to, or use of, MH/SU services for a specific population, and also found that health insurance coverage had little to no effect on access to, or use of, MH/SU services. For example, two studies examined the effect of health insurance coverage on specific populations—children with special mental health service needs living in a rural area, or low-income, minority groups—and found that having private health insurance had little to no effect on use of services for either of these populations. Of the studies we reviewed, two studies examined the effect of health insurance coverage for MH/SU on the health status of the general population. One study compared suicide rates among states with different parity requirements and found that state mandates did not have an effect on suicide rates. The other study found that increasing copayments was associated with an increased likelihood of the recurrence of substance use treatment. Specifically, each 10 percent increase in copayment was associated with a 1 percent increase in the probability of returning to begin a new course of substance use disorder treatment within 180 days. 
DOL and HHS reviewed a draft of this report and provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretaries of the Department of Labor and the Department of Health and Human Services and appropriate congressional committees. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To determine the extent to which employers cover mental health conditions and substance use disorders (MH/SU) both currently and in 2008, we surveyed a stratified random sample of small, medium, large, and very large employers about the MH/SU covered in their most popular health plans for the most current plan year—either in 2011 or 2010—as well as for 2008. We defined most popular health plan as the plan that covered the greatest number of lives. We fielded a web-based survey between May 18, 2011, and July 1, 2011, to 707 employers, selected from a sampling frame we developed using the Lexis Nexis corporate database. Our survey was designed to collect information about trends in employer coverage of MH/SU benefits, and included questions about coverage for MH/SU in the most current plan year—2011 or 2010—and in 2008. We conducted a survey of employers because we were unable to identify a published national employer survey that included specific detailed information about employers' MH/SU benefits prior to and following MHPAEA—namely, information about diagnoses included in or excluded from coverage. For our survey, employers had the option of either completing the entire survey, including detailed questions about their most popular health plans' cost-sharing requirements, or completing a portion of the survey and submitting to us their most popular health plans' summary plan documents (SPD), which included information on the plans' cost-sharing requirements. As part of the survey development process, we asked experts to review a draft version of the survey and we pretested the survey. We incorporated feedback from experts and the pretests into the survey. We selected a stratified random sample of 1,000 employers from our sampling frame. Our stratification divided employers into groups based on the number of employees—small employers had 51-199 employees; medium employers had 200-999 employees; large employers had 1,000-4,999 employees; and very large employers had 5,000 or more employees. We obtained working e-mail addresses for 707 employers, which received the survey on May 18, 2011. The distribution of employer sizes among the final group of employers was similar to that in the original sample. When we closed the survey on July 1, 2011, after following up with nonrespondents by phone and e-mail to encourage their participation, 168 employers had submitted usable survey responses, for a response rate of 24 percent. Given the response rate, our survey results are not generalizable. Rather, the survey responses provide information limited to responding employers' coverage of MH/SU in the current plan year and 2008 plan year. Specifically, we received usable survey responses from 91 small employers, 50 medium employers, 19 large employers, and 8 very large employers. 
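The stratification and response-rate figures above follow from simple arithmetic. The sketch below draws a stratified random sample by employer size and reproduces the response-rate calculation; the sampling-frame counts and per-stratum sample sizes are hypothetical assumptions, while the response counts match those reported above.

```python
# Illustrative sketch: drawing a stratified random sample of employers by size
# and computing a response rate. Frame counts and per-stratum sample sizes are
# hypothetical; usable-response counts are the figures reported in the text.
import random

frame_sizes = {  # hypothetical number of employers in the sampling frame per stratum
    "small (51-199)": 60_000,
    "medium (200-999)": 25_000,
    "large (1,000-4,999)": 8_000,
    "very large (5,000+)": 2_000,
}
sample_sizes = {"small (51-199)": 400, "medium (200-999)": 300,
                "large (1,000-4,999)": 200, "very large (5,000+)": 100}  # totals 1,000

random.seed(0)
sample = {
    stratum: random.sample(range(frame_sizes[stratum]), sample_sizes[stratum])
    for stratum in frame_sizes
}

# Response rate as reported: usable responses over employers that received the survey.
usable_responses = {"small (51-199)": 91, "medium (200-999)": 50,
                    "large (1,000-4,999)": 19, "very large (5,000+)": 8}
received_survey = 707
rate = sum(usable_responses.values()) / received_survey
print(f"usable responses: {sum(usable_responses.values())}, response rate: {rate:.0%}")
```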
All 168 employers offered coverage of mental health conditions, substance use disorders, or both, in either the current plan year, the 2008 plan year, or both plan years. We expected all employers to respond to a key set of questions; however, not every employer that responded to our survey answered the key questions in their entirety. In addition, our survey included a series of detailed benefits questions which employers were expected to respond to only if the question applied to them. The percentage of employers that did not respond to a question ranged from zero to 46 percent, depending on the question. We did not verify the accuracy of the employers' responses or assess compliance with MHPAEA. The survey questions asking about treatment limitations, lifetime dollar limits, and cost-sharing amounts called for open-ended responses. Employers could leave these questions blank if their most popular plans lacked these features. We used the number of employers that answered the detailed benefits questions as the denominator for our calculations for responses to the detailed benefits questions for the current plan year, and used 123 as the denominator for our calculations for responses to the detailed benefits questions for the 2008 plan year. In instances where we analyzed responses from a smaller number of respondents, we noted this in the text. To supplement the data collected from our survey, we reviewed the results of published national employer surveys from the Kaiser Family Foundation and Health Research and Educational Trust (Kaiser/HRET) and Mercer. These surveys provided generalizable information on employers' coverage of MH/SU. Since 1999, Kaiser/HRET has surveyed a sample of employers each year through telephone interviews with human resource and benefits managers and published the results in its annual report—Employer Health Benefits. Kaiser/HRET selects a random sample from a Survey Sampling International list of private employers and from the Census Bureau's Census of Governments list of public employers with three or more employees. Kaiser/HRET then stratifies the sample by industry and employer size. It attempts to repeat interviews with employers that responded in prior years. For the most recently completed annual survey—conducted from January to May 2010 and published in September 2010—2,046 employers responded to the full survey, giving the survey a 47 percent response rate. Using statistical weights, Kaiser/HRET projected its results nationwide. Kaiser/HRET used the following definitions for employer size: (1) small—3 to 199 employees—and (2) large—200 or more employees. In some cases, Kaiser/HRET reported information for additional categories of small and large employer sizes. Since 1993, Mercer has surveyed a stratified random sample of employers each year through mail questionnaires and telephone interviews and published the results in its annual report—National Survey of Employer-Sponsored Health Plans. Mercer selects a random sample of private sector employers from a Dun & Bradstreet database, stratified into eight categories, and randomly selects public sector employers—state, county, and local governments—from the Census of Governments. The random sample of private sector and government employers represents employers with 10 or more employees. For the 2010 survey, which was published in 2011, Mercer mailed questionnaires to employers with 500 or more employees in July 2010 along with instructions for accessing a web-based version of the survey instrument, another option for participation. 
Employers with fewer than 500 employees, which historically have been less likely to respond using a paper questionnaire, were contacted and given the option of responding to the survey by phone or by using the web-based survey. Telephone follow-up was conducted with employers with 500 or more employees in the random sample, and some mail and web respondents were contacted by phone to clear up inconsistent or incomplete data. A total of 2,833 employers responded to the survey. By using statistical weights, Mercer projected its results nationwide and for four geographic regions. The Mercer survey report contains information for large employers—500 or more employees—and for categories of large employers with certain numbers of employees, as well as information for small employers—those with fewer than 500 employees. Mercer used the same methodology for its 2008 survey, which was published in 2009. A total of 2,873 employers responded to that survey. According to a Mercer representative, in any given year, Mercer typically obtains a 25 percent response rate to its survey. We conducted interviews with agency officials and experts to learn about the implementation of MHPAEA and trends in employers’ coverage of MH/SU benefits. We spoke with agency officials from the Department of Labor (DOL), the Department of Health and Human Services’ (HHS) Assistant Secretary for Planning and Evaluation (ASPE), and HHS’s Substance Abuse and Mental Health Services Administration who had expertise in MH/SU issues. We did not interview Treasury officials because the focus of this engagement did not relate to that agency’s scope of responsibility. We spoke with experts who included representatives from two large managed behavioral health organizations (MBHO); two large national insurance companies; mental health advocacy organizations; institutions that field employer-based surveys on health insurance coverage; a large benefits consulting firm; an insurance broker organization; and three trade associations. We also interviewed four employer survey respondents—one in each employer size category—to obtain more detailed information about the employers’ coverage of MH/SU and their reasons for making or not making changes to coverage after the Paul Wellstone and Pete Domenici Mental Health Parity and Addiction Equity Act of 2008 (MHPAEA) took effect. For our literature review on the effect of health insurance coverage for MH/SU on enrollees’ health care expenditures, access to, or use of, MH/SU services, and health status, we conducted a key word search of nine databases, such as Medline and EMBASE, that included peer-reviewed journals and other periodicals to capture articles published between January 1, 2000, and March 11, 2011. We searched these databases for articles with key words in their title or article subject terms related to the effect of health insurance on health care expenditures or health status, using combinations and variations of the words “insurance coverage,” “mental health,” “substance use,” “health cost,” “health expenditure,” and “health status.” From these sources, we identified 246 abstracts of research articles, publications, and reports. After reviewing the abstracts, we included 34 studies that discussed the effect of health insurance coverage on enrollee expenditures, access to, or use of, MH/SU services, or health status.
We also included articles in our literature review that were suggested to us by the experts we interviewed, as well as those that were referenced in the articles found during our initial search. We conducted a review of published studies between January 2000 and March 11, 2011, that included an assessment of the effect of health insurance coverage for mental health conditions and substance use disorders (MH/SU) on enrollee expenditures, access to, or use of, MH/SU services, or health status. We identified 34 such studies, 9 of which addressed the effect of health insurance coverage on enrollee expenditures, 30 of which discussed access to, or use of, MH/SU services, and 2 of which discussed health status. Some studies addressed more than one topic. Tables 2 through 4 identify the 34 studies included in our review, and whether we determined them to be relevant to the effect of health insurance coverage for MH/SU on enrollees’ health care expenditures, access to, or use of, MH/SU services, or health status. In addition to the contact named above, Jennifer Grover, Assistant Director; Martha Kelly, Assistant Director; Elizabeth Conklin; Jennifer DeYoung; Carolyn Fitzgerald; Giao N. Nguyen; Laurie Pachter; Monica Perez-Nelson; and Rachel Schulman made key contributions to this report. Private Health Insurance: Waivers of Restrictions on Annual Limits on Health Benefits. GAO-11-725R. Washington, D.C.: June 14, 2011. Private Health Insurance: Access to Individual Market Coverage May Be Restricted for Applicants with Mental Disorders. GAO-02-339. Washington, D.C.: February 28, 2002. Mental Health: Community-Based Care Increases for People with Serious Mental Illness. GAO-01-224. Washington, D.C.: December 19, 2000. Mental Health Parity Act: Employers’ Mental Health Benefits Remain Limited Despite New Federal Standards. GAO/T-HEHS-00-113. Washington, D.C.: May 18, 2000. Mental Health Parity Act: Despite New Federal Standards, Mental Health Benefits Remain Limited. GAO/HEHS-00-95. Washington, D.C.: May 10, 2000.
The Paul Wellstone and Pete Domenici Mental Health Parity and Addiction Equity Act of 2008 (MHPAEA) requires that employers who offer health insurance coverage for mental health conditions and substance use disorders (MH/SU) provide coverage that is no more restrictive than that offered for medical and surgical conditions. Employers were required to comply with the law for coverage that began on or after October 3, 2009. The Department of Labor (DOL), the Department of Health and Human Services (HHS), and the Department of the Treasury share oversight for MHPAEA. MHPAEA also requires GAO to examine trends in health insurance coverage of MH/SU. This report describes (1) the extent to which employers cover MH/SU through private health insurance plans, and how this coverage has changed since 2008; and (2) what is known about the effect of health insurance coverage for MH/SU on enrollees' health care expenditures; access to, or use of, MH/SU services; and health status. GAO surveyed a random sample of employers about their MH/SU coverage for the most current plan year and for 2008. GAO received usable responses from 168 employers--a 24 percent response rate. The survey results are not generalizable; rather, they provide information limited to responding employers' MH/SU coverage. GAO reviewed published national employer surveys on health insurance coverage and interviewed officials from DOL, HHS, and other experts. GAO also reviewed studies that evaluated the effect of MH/SU coverage on enrollees' expenditures, access to, or use of, MH/SU services, and health status. Most employers continued to offer coverage of MH/SU since MHPAEA was passed. Of the employers that responded to GAO's survey, 96 percent offered coverage of MH/SU for the current plan year and for 2008, before MHPAEA was passed. Approximately 2 percent of employers reported offering coverage for only mental health conditions but not substance use disorders for the current plan year and for 2008. Conversely, about 2 percent of employers reported discontinuing their coverage of both MH/SU or only substance use disorders in the current plan year. The types of MH/SU diagnoses included and excluded in employers' MH/SU benefits remained consistent between the current plan year and 2008. Of the employers who provided information about diagnoses included in their MH/SU benefits for both the current plan year and 2008, 34 percent reported that their most popular plan in the current plan year excluded at least one MH/SU diagnosis from their benefits, and 39 percent of employers reported excluding at least one MH/SU diagnosis from their benefits for the 2008 plan year. The most common change to MH/SU benefits reported among those who responded to the survey was enhancing benefits through the removal of treatment limitations, such as the number of allowed office visits. Reported use of lifetime dollar limits on MH/SU treatments also declined from 2008 to the current plan year. Among employers who reported information on cost-sharing, copayments and coinsurance amounts for in-network providers generally stayed about the same, fluctuating minimally from 2008 to the current plan year. Published national employer surveys on health insurance coverage also reported results consistent with GAO's survey data. 
Employers may continue to modify certain nonfinancial requirements--such as changes to the services they cover (the scope of services) and nonquantitative treatment limits--in their MH/SU benefits in response to agencies' issuance of final implementing regulations for MHPAEA. Officials from DOL and HHS reported that the final regulations may provide additional detail on these nonfinancial requirements. Research suggests that coverage for MH/SU has a varied effect on enrollees. Research examining the effect of health insurance coverage for MH/SU on enrollee expenditures generally found that the implementation of parity requirements reduced enrollee expenditures. Studies that examined the effect of health insurance coverage for MH/SU on enrollee access to, and use of, MH/SU services had mixed results, with some studies indicating there was little to no effect and others indicating that there was some effect--such as finding that restricting coverage had a negative effect on use of services. Little research has explored the relationship between health insurance coverage and health status. Of the studies we reviewed, two examined the effect of health insurance coverage for MH/SU on enrollee health status and found different effects. GAO provided a draft of the report to DOL and HHS. Both agencies provided technical comments, which have been incorporated as appropriate.
Since the end of World War II, the U.S. military has maintained a presence in Japan and on Okinawa, first as an occupation force and later as an ally committed to maintaining security in the Asia-Pacific region. The security relationship between the United States and Japan is defined through bilateral agreements and is managed through a joint process. Over half of the U.S. forces in Japan are on Okinawa, a presence that has caused increasing discontent among the people of Okinawa. In September 1995, after three U.S. servicemen raped an Okinawan schoolgirl, Japan and the United States formed the Special Action Committee on Okinawa (SACO) to find ways to limit the impact of the U.S. military presence on Okinawa. The Committee developed 27 recommendations to reduce the impact of U.S. operations. Since the end of World War II, the U.S. military has based forces in Japan and Okinawa. The U.S. military occupation of Japan began after World War II and continued until 1952, but the United States administered the Ryukyu Islands, including Okinawa, until 1972. Since the end of World War II, U.S. forces have mounted major operations from Japan when needed. Among the most important of these operations was the initial defense of South Korea in the 1950-53 Korean War, when Eighth U.S. Army units left occupation duties in Japan to help defend South Korea. The United States again used its bases in Japan and on Okinawa to fight the Vietnam War. Finally, elements of the III Marine Expeditionary Force deployed from their bases on Okinawa to the Persian Gulf during Operation Desert Storm in the early 1990s. To demonstrate a commitment to peace and security in the Asia-Pacific region, the United States has about 47,000 servicemembers, about half of all U.S. forces deployed in the Pacific region, stationed in Japan. Of the 47,000 U.S. servicemembers in Japan, over half are based on Okinawa, a subtropical island about 67 miles long and from 2 to 18 miles wide, with coral reefs in many offshore locations. In fiscal year 1997, U.S. forces on Okinawa occupied 58,072 acres of the land in the Okinawa prefecture. The security relationship between the United States and Japan is defined through bilateral agreements. The Treaty of Mutual Cooperation and Security, signed in January 1960 by the United States and Japan, specifies that each country recognizes that an attack against either country in the territory of Japan is dangerous to its peace and security and declares that both countries would respond to meet the common danger under their constitutional processes. The treaty also commits the two countries to consult with each other from time to time and grants to U.S. military forces the use of facilities and areas in Japan. Lastly, the treaty specifies that a separate Status of Forces Agreement will govern the use of these facilities and areas as well as the status of U.S. forces in Japan. The Status of Forces Agreement, signed on the same day as the treaty, permits the United States to bring servicemembers and their dependents into Japan. It also contains certain stipulations regarding U.S. forces in Japan, including some exemptions from import duties for items brought into Japan for the personal use of U.S. servicemembers; the right of the U.S. military services to operate exchanges, social clubs, newspapers, and theaters; and legal jurisdiction over U.S. servicemembers and their dependents accused of committing a crime in Japan. 
The agreement also (1) requires the United States to return land to Japan when the land is no longer needed, (2) specifies that the United States will perform maintenance on bases it occupies in Japan, and (3) relieves the United States of the obligation to restore bases in Japan to the condition they were in when they became available to the United States. U.S. Forces-Japan (USFJ) has interpreted this latter provision to mean that the United States is not required to conduct environmental cleanup on bases it closes in Japan. The agreement also required the United States and Japan to establish a Joint Committee as the means for consultation in implementing the agreement. In particular, the Joint Committee is responsible for determining what facilities U.S. forces need in Japan. The U.S.-Japan security relationship is managed through a joint process that includes meetings between the U.S. Secretaries of State and Defense and Japan’s Minister of Foreign Affairs and Minister of State for Defense, who make up the Security Consultative Committee. The Committee sets overall bilateral policy regarding the security relationship between the United States and Japan. Japan pays part of the cost of the U.S. forces stationed in its country with annual burden-sharing payments that totaled about $4.9 billion in fiscal year 1997. The annual payments fall into four categories. First, Japan paid about $712 million for leased land on which U.S. bases sit. Second, Japan provided about $1.7 billion in accordance with the Special Measures Agreement, under which Japan pays the costs of (1) local national labor employed by U.S. forces in Japan, (2) public utilities on U.S. bases, and (3) the transfer of U.S. forces’ training from U.S. bases to other facilities in Japan when Japan requests such transfers. Third, USFJ estimated that Japan provided about $876 million in indirect costs, such as rents foregone at fair market value and tax concessions. Last, although not covered by any agreements, Japan provided about $1.7 billion from its facilities budget for facilities and new construction, which included new facilities under the Japan Facilities Improvement Program, vicinity improvements, and relocation construction and other costs. Finally, in September 1997, the United States and Japan issued new Guidelines for U.S.-Japan Defense Cooperation that replaced the existing 1978 guidelines. The new guidelines provide for more effective cooperation between U.S. forces and Japan’s self-defense forces under “normal circumstances,” when an armed attack against Japan has occurred, and as a response to situations in areas surrounding Japan that could threaten Japan’s security. Discontent among the people of Okinawa about the impact of the U.S. presence on their land has been rising for years, particularly as the economic benefits of the U.S. presence have diminished and the people of Okinawa have become relatively more prosperous, according to the Congressional Research Service. Among the chief complaints of the Okinawan people is that their prefecture hosts over half of the U.S. force presence in Japan and that about 75 percent of the total land used by U.S. forces in Japan is on Okinawa. Figure 1.1 shows the location and approximate size of major U.S. installations in the Okinawa prefecture. Some Okinawans feel the U.S. military presence has hampered economic development. Other Okinawans object to the noise generated by U.S.
operations, especially around the Air Force’s Kadena Air Base and Marine Corps Air Station (MCAS) Futenma (which are located in the middle of urban areas), and risks to civilians from serious military accidents, including crashes of aircraft. In addition, some have objected to artillery live-fire exercises conducted in the Central Training Area. When the exercises were held, firing took place over prefectural highway 104, and the highway had to be closed to civilian traffic until the exercises concluded. The Okinawa prefectural government has also objected to the destruction of vegetation on nearby mountains in the artillery range’s impact area. Lastly, some perceive that crime committed by U.S. personnel and their dependents on Okinawa is a problem. The public outcry in Okinawa following the September 1995 abduction and rape of an Okinawan schoolgirl by three U.S. servicemembers brought to a head long-standing concerns among Okinawans about the impact of the U.S. presence and made it difficult for some members of the Japanese Diet to support the continued U.S. military presence in Japan. According to the Office of the Secretary of Defense, the continued ability of the United States to remain in Japan was at risk due to the outcry over the rape incident, and the United States and Japan had to do something to reduce the impact of the presence on Okinawans. To address Okinawans’ and Japanese legislators’ concerns, bilateral negotiations between the United States and Japan began, and the Security Consultative Committee established the Special Action Committee on Okinawa in November 1995. The Committee developed recommendations on ways to limit the impact of the U.S. military presence on Okinawans. On December 2, 1996, the U.S. Secretary of Defense, U.S. Ambassador to Japan, Japanese Minister of Foreign Affairs, and Minister of State and Director-General of the Defense Agency of Japan issued the Committee’s final report. According to USFJ, the SACO Final Report is not a binding bilateral agreement, but it does contain a series of recommendations to which the U.S. and Japanese governments have committed themselves. Officials from USFJ and Marine Corps Bases, Japan, told us that the United States approaches the recommendations as if they were agreements by making reasonable efforts to implement the recommendations. However, they also stated that if Japan does not provide adequate replacement facilities or complete action needed to implement some recommendations, the United States will not be obligated to implement those particular recommendations. In response to Representative Duncan Hunter’s concerns about the impact of implementing SACO’s recommendations on U.S. force readiness, we describe (1) the benefit or necessity of retaining U.S. forces in Japan and on Okinawa and (2) SACO’s report recommendations and identify the impact of implementation on U.S. operations, training, and costs. The report also identifies two environmental issues that may remain after the SACO recommendations have been implemented. To determine DOD’s views on the benefit or necessity of having U.S. forces stationed on Okinawa, we interviewed officials and obtained relevant documents, including the Quadrennial Defense Review report, the President’s National Security Strategy for a New Century, The Security Strategy for East Asia, the Commander-in-Chief of the Pacific Command’s regional strategy, and other documents. Because it was outside the scope of our work, we did not evaluate any alternatives to forward deployment. 
However, in a June 1997 report, we concluded that DOD had not adequately considered alternatives to forward presence to accomplish its stated security objectives. To determine U.S. and Japanese obligations under the bilateral security relationship, we reviewed the Treaty of Mutual Cooperation and Security between Japan and the United States, the Status of Forces Agreement, the Special Measures Agreement, Joint Statement of the Security Consultative Committee on the review of 1978 guidelines for defense cooperation, the new 1997 Guidelines for U.S.-Japan Defense Cooperation, and other documents. To determine SACO’s report recommendations, we reviewed the Final Report of the Special Action Committee on Okinawa, Joint Committee meeting minutes and related documents, briefings, the testimony of the Commander-in-Chief of the U.S. Pacific Command to the Senate Committee on Armed Services on March 18, 1997, and other documents. To determine the impact of the SACO report recommendations on readiness, training, and costs of operations of U.S. forces, we interviewed officials and reviewed memorandums, cables, reports, analyses, and other documents discussing the impact on readiness and training or providing evidence of the impact. To review the feasibility of construction and operation of a sea-based facility, we interviewed officials and reviewed relevant documents, including the Functional Analysis and Concept of Operations report prepared by DOD officials from several organizations, briefing documents, memorandums, and other documents. We also reviewed a number of scholarly papers presented at the Japanese Ministry of Transport’s International Workshop on Very Large Floating Structures, held in Hayama, Japan, in November 1996. To identify the environmental issues that could remain after the SACO recommendations are implemented, we reviewed the Status of Forces Agreement and DOD environmental policy and interviewed DOD and Department of State officials. We also interviewed officials at the Office of the Secretary of Defense/International Security Affairs, the Joint Staff, headquarters of the U.S. Marine Corps, headquarters of the U.S. Air Force, Office of Naval Research, Defense Logistics Agency, Military Traffic Management Command, and Department of State in Washington, D.C., and the U.S. Special Operations Command in Tampa, Florida. We also interviewed officials from the U.S. Pacific Command; Marine Forces, Pacific; Pacific Air Forces; Naval Facilities Engineering Command; Army Corps of Engineers; Military Traffic Management Command; and East-West Center in Honolulu, Hawaii. We interviewed officials from U.S. Forces-Japan, the 5th Air Force, U.S. Naval Forces-Japan, U.S. Army-Japan, and the U.S. Embassy-Tokyo in the Tokyo, Japan, area. Lastly, we interviewed officials from Marine Corps Bases, Japan; the 1st Marine Air Wing; the Air Force’s 18th Wing; the Army’s 1/1 Special Forces Group (Airborne); the Army’s 10th Area Support Group; the Navy’s Fleet Activities, Okinawa; and the Navy’s Task Force 76 on Okinawa. To discuss the feasibility of very large floating structures, we interviewed two ocean engineering professors at the University of Hawaii who were instrumental in organizing the 1996 conference in Japan. We also viewed the proposed site for a sea-based facility by helicopter and inspected several U.S. bases affected by the SACO process, including MCAS Futenma; Kadena Air Base; Camp Schwab; and the Northern, Central, Gimbaru, and Kin Blue Beach training areas on Okinawa. 
We also visited the Ie Jima parachute drop zone on Ie Jima Island. We obtained comments from the Departments of Defense and State on this report and have incorporated their comments where appropriate. We conducted our work from June 1997 to March 1998 in accordance with generally accepted government auditing standards. U.S. forces on Okinawa support U.S. national security and national military strategies to promote peace and maintain stability in the region. These forces can also deter aggression and can deploy throughout the region if needed. According to the Office of the Secretary of Defense, the Pacific Command, and USFJ, relocating these forces outside the region would increase political risk by appearing to decrease commitment to regional security and treaty obligations and undercut deterrence. Furthermore, relocating U.S. forces outside of Japan could adversely affect military operations by increasing transit times to areas where crises are occurring. Finally, the cost of the U.S. presence in Japan is shared by the government of Japan, which also provides bases and other infrastructure used by U.S. forces on Okinawa. The Commander-in-Chief of the Pacific Command, who is the geographic combatant commander for the Asia-Pacific region, develops a regional strategy to support the national security strategy and the national military strategy. The Pacific Command’s area of responsibility is the largest among the five geographic combatant commands: it covers about 105 million square miles (about 52 percent of the earth’s surface) and contains 44 countries, including Japan, China, India, and North and South Korea (see fig. 2.1). Pacific Command forces provide a military presence in the Asia-Pacific region, promote international security relationships in the region, and deter aggression and prevent conflict through a crisis response capability, according to the Pacific Command. These forces include over 300,000 servicemembers, of whom about 100,000 are in Alaska, Hawaii, Japan, South Korea, and certain other locations overseas. The Quadrennial Defense Review reaffirmed the need for a U.S. forward presence of about 100,000 troops in the Asia-Pacific region. About 47,000 U.S. servicemembers are stationed in Japan. Of those, about 28,000 are based on Okinawa, including about 17,000 assigned to the Marine Corps’ III Marine Expeditionary Force and supporting establishment. The III Marine Expeditionary Force, the primary Marine Corps component on Okinawa, consists of the (1) 3rd Marine Division, the ground combat component; (2) 1st Marine Air Wing, the air combat component; (3) 3rd Force Service Support Group, the logistics support component; and (4) command element. The Force, and other deployed U.S. forces, support the security strategy by providing the forces that could be employed if crises arise. The III Marine Expeditionary Force can deploy throughout the region, using sealift, airlift, and amphibious shipping, and operate without outside support for up to 60 days. Under the national strategy, U.S. forward deployment is necessary because it demonstrates a visible political commitment by the United States to peace and stability in the region, according to DOD. The United States has mutual defense treaties with Japan, South Korea, the Philippines, Australia, and Thailand. In addition to demonstrating commitment, the U.S.
forward deployment also deters aggression, according to the Pacific Command, because a regional aggressor cannot threaten its neighbors without risking a military confrontation with U.S. forces in place on Okinawa (or elsewhere in the region). To help maintain peace and stability in the region, the Pacific Command strategy features engagement through joint, combined, and multilateral military exercises; military-to-military contacts; and security assistance, among other activities. According to the Pacific Command, the III Marine Expeditionary Force is a key force that is employed to carry out these activities. According to the Office of the Secretary of Defense, Pacific Command, and USFJ, a withdrawal of U.S. forces from the region could be interpreted by countries in the region as a weakening of the U.S. commitment to peace and stability in Asia-Pacific and could undercut the deterrent value of the forward deployment. While U.S. forces may not have to be on Okinawa specifically for the United States to demonstrate such commitments, USFJ officials told us that U.S. forces do need to be located somewhere in the Western Pacific region. If hostilities erupt in the Asia-Pacific region, U.S. forces need to arrive in the crisis area quickly to repel aggression and end the conflict on terms favorable to the United States. U.S. forces could be used in a conflict and could deploy from their bases on Okinawa. The forward deployment on Okinawa significantly shortens transit times, thereby promoting early arrival in potential regional trouble spots such as the Korean peninsula and the Taiwan straits, a significant benefit in the initial stages of a conflict. For example, it takes 2 hours to fly to the Korean peninsula from Okinawa, as compared with about 5 hours from Guam, 11 hours from Hawaii, and 16 hours from the continental United States. Similarly, it takes about 1 1/2 days to make the trip from Okinawa by ship to South Korea, as compared with about 5 days from Guam, 12 days from Hawaii, and 17 days from the continental United States. In addition to its strategic location, Okinawa has a well-established military infrastructure that is provided to the United States rent-free and that supports the III Marine Expeditionary Force (and other U.S. forces). Housing, training, communications, and numerous other facilities are already in place on Okinawa, including those at MCAS Futenma, a strategic airfield for the 1st Marine Air Wing, and Camp Courtney, home of the 3rd Marine Division. Marine Corps logistics operations are based at Camp Kinser, which has about a million square feet of warehouse space for Marine forces’ use in the Pacific. For example, warehouses hold war reserve supplies on Okinawa that would support U.S. operations, including 14,400 tons of ammunition, 5,000 pieces of unit and individual equipment, and 50 million gallons of fuel. Military port facilities capable of handling military sealift ships and amphibious ships are available at the Army’s Naha Military Port and the Navy’s White Beach. In addition to providing base infrastructure, Japan provides about $368 million per year as part of its burden-sharing to help support the III Marine Expeditionary Force deployment on Okinawa. The SACO Final Report calls for the United States to (1) return land at 11 U.S. bases on Okinawa and replace MCAS Futenma with a sea-based facility, (2) change 3 operational procedures, (3) implement 5 noise abatement procedures, and (4) implement 7 Status of Forces Agreement changes. 
Japan agreed to implement one Status of Forces Agreement procedure change. Of all of the SACO report recommendations, replacing MCAS Futenma with a sea-based facility poses the greatest challenge. Most of the other SACO report recommendations can be implemented with few problems. As called for in the SACO Final Report, the United States plans to return to Japan about 12,000 acres, or 21 percent of the total acreage, used by U.S. forces on 11 installations. The plan is to relocate personnel and facilities from bases to be closed to new bases or to consolidate them at the remaining bases. Table 3.1 shows the land to be returned, the planned return date, and the plan for replacing capabilities that would be lost through the land return. The most significant land return involves the planned closure and return of MCAS Futenma. The installation is a critical component of the Marine Corps’ forward deployment because it is the home base of the 1st Marine Air Wing. The Wing’s primary mission is to participate as the air component of the III Marine Expeditionary Force. The Wing’s Marine Air Group-36 provides tactical fixed and rotary wing aircraft and flies about 70 aircraft, including CH-46 and CH-53 helicopters and KC-130 aerial refueling airplanes. MCAS Futenma’s primary mission is to maintain and operate facilities and provide services and materials to support Marine aircraft operations. MCAS Futenma covers 1,188 acres of land and is completely surrounded by the urbanized growth of Ginowan City, as shown in figure 3.1. Officials in the Office of the Secretary of Defense, USFJ, and Marine Corps Bases, Japan, told us that encroachment along the perimeter of MCAS Futenma is a concern. In fact, according to Marine Corps Bases, Japan, in one instance, the owner of land outside MCAS Futenma erected a building at the end of the runway that was tall enough to create a hazard to aircraft using the base. The building was removed. The land at MCAS Futenma is leased from about 2,000 private landowners by the government of Japan. About 40 percent of the base is used for runways, taxiways, and aircraft parking. The remaining portions of the base are used for air operations, personnel support facilities, housing, and administrative activities. MCAS Futenma has a runway and parallel taxiway that are 9,000 feet long as well as an aircraft washrack, maintenance facilities, vehicle maintenance facilities, fuel storage facilities, a hazardous waste storage and transfer facility, a control tower, an armory, and other facilities needed to operate a Marine Corps air station. If the Marine Corps presence is to be maintained with air and ground combat units and logistical support collocated on Okinawa, then MCAS Futenma or a suitable replacement is required to maintain the operational capability of the III Marine Expeditionary Force’s air combat element. The U.S. and Japanese governments established a working group to examine three options for replacing MCAS Futenma. The options were relocation of the air station onto (1) Kadena Air Base, (2) Camp Schwab, or (3) a sea-based facility to be located in the ocean offshore from Okinawa Island. The SACO Final Report stated that the sea-based facility was judged to be the best option to enhance the safety and quality of life of the Okinawan people and maintain the operational capabilities of U.S. forces. The report also cited as a benefit that a sea-based facility could be removed when no longer needed.
Acquisition of the sea-based facility would follow a process that began with the United States’ establishing operational and quality-of-life requirements and would conclude with Japan’s selecting, financing, designing, and building the sea-based facility to meet U.S. requirements. The government of Japan has decided to locate the sea-based facility offshore from Camp Schwab. However, at the time of our review some residents living near the proposed site had opposed having the sea-based facility near their community, but U.S. officials are proceeding on the basis that the facility will be built. The Security Consultative Committee established the Futenma Implementation Group to identify a relocation site and an implementation plan for the transfer from MCAS Futenma to the sea-based facility. On the U.S. side, the Group is chaired by the Deputy Assistant Secretary of Defense for International Security Affairs and has representatives from the Joint Staff; the headquarters of the Marine Corps; the Assistant Secretary of the Navy for Installations and Environment; the Pacific Command; USFJ; the Office of Japanese Affairs, Department of State; and the Political-Military Affairs Section of the U.S. Embassy-Tokyo. The Group was established to oversee the design, construction, testing, and transfer of assets to the sea-based facility. MCAS Futenma will not be closed until the sea-based facility is operational. Only when U.S. operating and support requirements have been met will Marine Air Group-36 and its rotary wing aircraft relocate to the sea-based facility. As part of the closure and return of MCAS Futenma, 12 KC-130 aircraft are scheduled to relocate to MCAS Iwakuni, on the Japanese mainland, after Japan builds new maintenance and other facilities to support the relocation. In addition, Japan is scheduled to build other support facilities at Kadena Air Base to support aircraft maintenance and logistics operations that are to relocate there. Ground elements of the 1st Marine Air Wing not relocated to the sea-based facility would relocate to other bases on Okinawa. The sea-based facility is to be designed by Japan to meet U.S. operational requirements. During regular operations, about 66 helicopters and MV-22 aircraft (when fielded) would be stationed aboard the sea-based facility. The MV-22 can operate in either vertical takeoff and landing mode, like a helicopter, or short takeoff and landing mode, like an airplane. The sea-based facility airfield requirements are based on MV-22 operating requirements. According to a Marine Corps study, a runway length of 2,600 feet is sufficient for normal day-to-day operations, training missions, and self-deployment to Korea in its vertical takeoff and landing mode under most conditions. The Pacific Command has established a 4,200-foot runway for all MV-22 operations based on aircraft performance and meteorological data. The Marine Corps study indicates that a 4,200-foot runway is sufficient for most training and mission requirements. However, the study also stated that for missions requiring an MV-22 gross weight near the maximum of 59,305 pounds, the aircraft would have to operate in its short takeoff mode and would require a runway of 5,112 feet under certain weather conditions. The United States has established a runway length requirement of about 4,200 feet for the sea-based facility. Arresting gear would be located about 1,200 feet from either end of the runway to permit carrier aircraft to land. 
In addition, the runway would have 328-foot overruns at each end to provide a safety margin in case a pilot overshoots the optimal landing spot during an approach, as well as a parallel taxiway about 75 feet wide alongside the runway. Additional aircraft facilities include a drive-through rinse facility for aircraft corrosion control, an air traffic control tower, and aircraft firefighting and rescue facilities. Up to 10,000 pounds of ordnance would be stored in a magazine collocated with an ordnance assembly area aboard the sea-based facility. Also, flight simulators and security and rescue boat operations, among other capabilities, are required aboard the sea-based facility. Aircraft maintenance would be performed aboard the sea-based facility. Marine Air Group-36 requires hangar space for five helicopter squadrons, including space for Marine Corps air logistics; corrosion control; aircraft maintenance; secure storage; administrative functions; ground support equipment; and engine test cells, among other facilities. Logistics operations requirements aboard the sea-based facility include aircraft supply and fuel/oil supply, spill response capability, and parking for up to 800 personally owned and government-owned vehicles. MCAS Futenma can store about 828,000 gallons of aircraft fuel. At the time of our review, the United States had not determined how much fuel storage capacity was needed, or how fuel is to be provided to support sea-based facility operations. Food service for about 1,400 on-duty servicemembers per meal would be required on the sea-based facility to provide meals during the day and for crews working nights. The United States planned to locate the headquarters, logistics, and most operational activities aboard the sea-based facility and most quality-of-life activities, including housing, food service, and medical and dental services, ashore at Camp Schwab. U.S. officials estimate that over 2,500 servicemembers currently stationed at MCAS Futenma would transfer to the sea-based facility and Camp Schwab. To accommodate the incoming arrivals from MCAS Futenma, Marine Corps Bases, Japan, plans to relocate about 800 to 1,000 servicemembers currently housed at Camp Schwab to Camp Hansen and absorb the remainder at Camp Schwab. U.S. engineers estimated that about 1,900 people would work on the sea-based facility. Due to a lack of DOD dependent schools in the Camp Schwab area, only unmarried servicemembers will be housed at Camp Schwab. Servicemembers accompanied by dependents will be housed where most of them and most of the DOD schools (including the only two high schools) are located now, although not on MCAS Futenma. Marine Corps Bases, Japan, would have to either house all incoming servicemembers on or near Camp Schwab and bus their dependent children to the schools or keep servicemembers who have dependents housed in the southern part of the island and have them commute to work. Marine Corps Bases, Japan, chose the latter. Japan will design, build, and pay for the sea-based facility and plans to locate it offshore from Camp Schwab. The sea-based facility is to be provided rent-free to USFJ, which would then provide it to the 1st Marine Air Wing. Government of Japan officials, ocean engineering and other university professors, and other experts have concluded that three types of sea-based facilities are technically feasible—the pontoon, pile-supported, and semisubmersible types.
A pontoon-type sea-based facility would essentially be a large platform that would float in the water on pontoons (see fig. 3.2). The structure would be located about 3,000 feet from shore in about 100 feet of water. Part of the platform would be below the water line. To keep the sea relatively calm around the platform, a breakwater would be installed to absorb the wave action. The breakwater would be constructed in about 60 feet of water atop a coral ridge. To prevent the structure from floating away, it would be attached to a mooring system attached to the sea floor. The pontoon-type sea-based facility envisioned would have a runway and control tower on the deck and most maintenance, storage, and personnel support activities (such as food service) below deck. According to documents that we obtained, no floating structure of the size required has ever been built. In addition, Naval Facilities Engineering Command officials told us that construction of a breakwater in about 60 feet of water would be “at the edge of technical feasibility.” A pile-supported sea-based facility essentially would be a large platform supported by columns, or piles, driven into the sea floor (see fig. 3.3). The structure would be located in about 16 feet to 82 feet of water and relatively closer to shore than the proposed pontoon-type sea-based facility. According to Naval Engineers, about 7,000 piles would be needed to support a structure of the size proposed. The pile-supported sea-based facility envisioned would have one deck. In addition to the runway and control tower, maintenance, storage, and personnel support activities would be in buildings on the deck. Structures similar to the pile-supported sea-based facility have already been built for other purposes. The semisubmersible-type sea-based facility would consist of a platform above the water line supported by a series of floating underwater hulls (see fig. 3.4). The facility would have interconnected modules with a runway and control tower atop the deck and maintenance, storage, and other functions on a lower deck. The semisubmersible sea-based facility relies on technology that does not yet exist, according to documents provided by DOD. For example, documents indicate that semisubmersible sea-based facilities are limited by current technology to about 1,000 feet in length. The United States and/or Japan are likely to encounter high costs, technological challenges, and operational complications in designing, constructing, and operating the sea-based facility. The sea-based facility is estimated to cost Japan between $2.4 billion and $4.9 billion to design and build. Operations and support costs are expected to be much higher on the sea-based facility than at MCAS Futenma. Under the Status of Forces Agreement, the United States pays for the maintenance of bases it uses in Japan. Based on a $4-billion sea-based facility design and construction cost, U.S. engineers have initially estimated maintenance costs to be about $8 billion over the 40-year life span of the facility. Thus, annual maintenance would cost about $200 million, compared with about $2.8 million spent at MCAS Futenma. At the time of this report, the United States and Japan were discussing having Japan pay for maintenance on the sea-based facility. If Japan does not pay maintenance costs, then the U.S. cost related to the SACO recommendations could be much higher. 
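To show how the annual figure cited above follows from the life-cycle estimate, the short sketch below annualizes the $8 billion maintenance estimate over the facility's 40-year life span and compares it with current spending at MCAS Futenma. It is an editorial illustration of the report's rounded estimates only, not an independent cost analysis.

# Minimal illustrative sketch of the maintenance cost arithmetic cited above.
# All figures are the report's rounded estimates, not an independent analysis.
lifetime_maintenance = 8.0e9   # initial U.S. engineering estimate, dollars
life_span_years = 40           # projected life span of the sea-based facility
futenma_annual = 2.8e6         # approximate current annual maintenance spending at MCAS Futenma

sea_based_annual = lifetime_maintenance / life_span_years  # about $200 million per year
print(f"sea-based facility, annual maintenance: ${sea_based_annual / 1e6:.0f} million")
print(f"MCAS Futenma, annual maintenance:       ${futenma_annual / 1e6:.1f} million")
print(f"ratio: roughly {sea_based_annual / futenma_annual:.0f} to 1")

The result, roughly $200 million per year, is about 70 times the $2.8 million now spent annually at MCAS Futenma, which is why responsibility for maintenance is such a significant open question.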
In addition to potential increased maintenance costs, the United States may spend money to renovate facilities at MCAS Futenma previously identified by both the U.S. and Japanese governments for replacement by Japan. Because of the planned closure of MCAS Futenma, the government of Japan cancelled about $140 million worth of projects at the air station that were to be funded under Japan’s Facilities Improvement Program. The United States believes these facilities are important to Futenma’s operations until the sea-based facility is ready. Marine Forces, Japan, has requested $13.6 million in U.S. funds to complete some of those projects. During the 10-year sea-based facility acquisition period, some of the other projects may be needed to continue to operate MCAS Futenma. If the government of Japan does not fund these projects for MCAS Futenma, the United States will have to choose between accepting the added risk of operating from decaying facilities and paying additional renovation costs at a base scheduled for closure. Technological challenges may arise because no sea-based facility of the type and scale envisioned has ever been built to serve as an air base. To address these challenges and develop sea-based facility operational and support requirements, the Naval Facilities Engineering Command convened a working group in August 1997. In its report, the group concluded that for the three sea-based facilities being considered, “none of these technologies has been demonstrated to the scale envisioned.” The report described numerous challenges that would have to be overcome to make a sea-based facility viable. For example, the sea-based facility would have to survive natural events such as typhoons, which strike within 180 nautical miles of Okinawa Island an average of four times per year. During a typhoon, personnel would evacuate the sea-based facility, but the aircraft would remain aboard the facility in hangars to ride out the storm, according to 1st Marine Air Wing officials. U.S. engineers we spoke with indicated that a pile-supported sea-based facility’s underside would have to withstand pressure caused by storm-tossed waves slamming beneath the deck, and the pontoon- and semisubmersible-type sea-based facilities must be designed to avoid instability or sinking. Tsunamis are also a threat. In a tsunami, the water level near shore generally drops (sometimes substantially) and then rises to great heights, causing large, destructive waves. U.S. engineers we spoke with indicated that a floating sea-based facility’s mooring system would have to permit the floating structure to drop with the water level without hitting bottom and then rise as the waves returned. Also, structural issues pose technological challenges. The sea-based facility would have to be invulnerable to sinking or capsizing and resume normal operations within 24 to 48 hours after an aircraft crash, an accident involving ordnance aboard the facility, or an attack in wartime or by terrorists. An issue involving the pontoon and semisubmersible facilities is the potential for them to become unstable if an interior compartment is flooded. Thus, watertight doors and compartments (similar to those on ships) may be required. Corrosion control is a major concern because the facility would always be in salt water.
Therefore, that part of the structure below the waterline would have to be built to minimize or resist corrosion for the 40-year life span of the facility, or a method of identifying and repairing corrosion (possibly underwater) without disrupting military operations would have to be devised. The Marine Corps may experience operational complications because the proposed length of the sea-based facility runway can compromise safety margins when an MV-22 aircraft is taking off at maximum weight under wet runway conditions. Since the MV-22 requires a 5,112-foot runway to take off at its maximum weight of 59,305 pounds and maintain maximum safety margins on a wet runway, the proposed 4,200-foot runway for the sea-based facility is too short. While the MV-22 can take off from a 4,200-foot runway at its maximum weight, the safety margin is reduced in the event of an engine failure or other emergency on a wet runway. This risks the loss of the aircraft because, on a wet runway, the stopping distance for an aborted takeoff is longer than the planned runway. According to the Pacific Command, conditions that require more than 4,200 feet for takeoff would not preclude effective MV-22 contingency missions. A commander would need to decide, based on the criticality of the mission, whether to accept the increased risk of aircraft loss or to reduce the aircraft’s load. The Pacific Command considers the risk acceptable and accepted the reduced size of the sea-based facility. Alternatively, with a reduced load, MV-22s could take off from the sea-based facility without a full fuel load, use Kadena Air Base to finish fueling to capacity, and take off from its longer runway to continue the mission. However, this requires Kadena Air Base to absorb increased air traffic and risks later arrival in an area of operations. Ultimately, the added risk, time, and coordination are problems that would not occur at MCAS Futenma because its 9,000-foot runway is long enough for all MV-22 missions. Also, if Kadena Air Base is not available for MV-22 operations, the Marines would have no alternative U.S. military runway of sufficient length on Okinawa to support MV-22 missions at maximum weight and maintain maximum safety margins in certain weather conditions. Moreover, the loss of MCAS Futenma’s runway equates to the loss of an emergency landing strip for fixed-wing aircraft in the area. However, safety margins may not be compromised even if Kadena Air Base is shut down (for weather or other reasons), MCAS Futenma is closed, and the sea-based facility’s runway as currently designed is too short for certain aircraft, because Naha International Airport would be available as an emergency landing strip for U.S. military aircraft. USFJ and Naval Facilities Engineering Command officials told us that the United States must oversee the design, engineering, and construction of the sea-based facility to ensure that it meets U.S. requirements, is operationally adequate, and is affordable to operate and maintain. However, current staff and funding resources are dedicated to managing other programs associated with the U.S. presence in Japan. Therefore, USFJ has requested establishment of a Project Management Office to oversee and coordinate SACO implementation, while the Naval Facilities Engineering Command has asked for funding for a special project office to oversee the design and construction of the sea-based facility.
In addition to the high cost, technological challenges, and operational complications that stem from the planned sea-based facility and limited U.S. oversight of the project, Japan’s sea-based facility acquisition strategy compounds the risk. At the time of our review, Japan did not have a risk-reduction phase planned to demonstrate that the design of the sea-based facility meets U.S. operating and affordability requirements. A risk-reduction phase could include risk assessments, life-cycle cost analyses, and design tradeoffs. DOD’s policy is to include a risk-reduction phase in its acquisition of major systems. U.S. officials believe it will take up to 10 years to design, build, and relocate to the sea-based facility as compared with the 5 to 7 years estimated in the SACO Final Report. On the other hand, these officials also believe that adding time to the project is a price worth paying to include a risk-reduction phase. Given the scope, technical challenges, and unique nature of the sea-based facility, including a risk-reduction phase would permit the U.S. and Japanese governments to establish that the proposed sea-based facility will be affordable and operationally suitable. The inclusion of a risk-reduction phase in the sea-based facility’s acquisition schedule is currently being discussed between the U.S. and Japanese governments. U.S. forces on Okinawa will face minimal risks to operations from the remaining 10 land return issues. The services can maintain training opportunities and deployment plans and schedules, because land to be returned is no longer needed or will be returned only after Japan provides adequate replacement facilities on existing bases or adds land by extending other base boundaries. First, while the Northern Training Area is still used extensively for combat skills training, about 9,900 acres can be returned to Japan because that land is no longer needed by the United States. The Marine Corps will retain about 9,400 acres of the Northern Training Area and expects to be able to continue all needed training on the remaining acreage. The return of the 9,900 acres is contingent on Japan’s relocating helicopter landing zones within what will remain of the Northern Training Area. In addition, the adjacent Aha training area can be returned without risk once Japan provides new shoreline access to the Northern Training Area to replace what would be lost by the closure and return of the Aha training area. Likewise, return of the Gimbaru training area presents low risk because the helicopter landing zone is to be relocated to the nearby Kin Blue Beach training area and the vehicle washrack and firefighting training tower will be relocated to Camp Hansen. The Yomitan auxiliary airfield can be returned because parachute drop training conducted there has already been transferred to the Ie Jima auxiliary airfield on Ie Jima Island, just off the northwest coast of Okinawa Island. Lastly, the Sobe communication station can be returned because it will be relocated to the remaining Northern Training Area, and Naha Port can be returned when it is replaced by a suitable facility elsewhere on Okinawa. While risks from the return of land (other than that related to MCAS Futenma) are minimal, the United States expects some benefits from the consolidation of housing on the remaining portion of Camp Zukeran. First, the SACO Final Report calls on Japan to build a new naval hospital on Camp Zukeran to replace the existing hospital on that part of Camp Kuwae scheduled for return. 
Marine Corps Bases, Japan, estimated the construction cost to be about $300 million, which Japan is scheduled to pay. In addition, Japan is to provide 2,041 new or reconstructed housing units at Camp Zukeran as part of the SACO process and another 1,473 reconstructed housing units at Kadena Air Base, which is not part of SACO’s recommendations. Air Force 18th Wing civil engineering officials estimated the total housing construction cost at about $2 billion. The 18th Wing has requested establishment of a special project office to help with the design of the housing units and to ensure that the units meet U.S. health and safety code standards. The current estimated cost to the United States to implement the recommendations related to the return of land is about $193.5 million over about 10 years. This includes (1) $80 million to furnish the new hospital; (2) $71 million for the Futenma Implementation Group; (3) $8.2 million to furnish 2,041 housing units; (4) $8.1 million for USFJ to oversee and coordinate SACO implementation; (5) $8 million for the Naval Facilities Engineering Command project office to oversee the sea-based facility’s engineering and construction; (6) $4.4 million for a special project office for oversight of the housing project and master plan; and (7) $13.6 million for MCAS Futenma projects that would have been paid for by Japan had it not cancelled funding for the base. DOD officials told us that the U.S. and Japanese governments were negotiating an arrangement whereby Japan might assume those portions of the $71 million in Futenma Implementation Group costs that it can pay while still complying with its domestic laws. This arrangement could reduce U.S. costs below the current estimate of $193.5 million. Also, some initial costs may be offset in later years because the 18th Wing expects maintenance costs will be lower at the new hospital and housing. However, U.S. costs could be significantly higher than the $193.5 million estimate because the United States and Japan have not agreed on which country would be responsible for the sea-based facility’s maintenance. The United States has already implemented all three changes in training and operational procedures called for in the SACO Final Report (see table 3.2). The 3rd Marine Division’s artillery live-fire exercises have been relocated from the Central Training Area on Okinawa to the Kita-Fuji, Higashi-Fuji, Ojojihara, Yausubetsu, and Hijudai training ranges on the Japanese mainland. Prior to the SACO Final Report, the 3rd Marine Division was already conducting 60 to 80 days of artillery live-fire exercises at the two Fuji ranges. Under the SACO relocation, another 35 days of training will be split among the five ranges. Japan has agreed to pay transportation costs to the artillery ranges and wants to use Japanese commercial airliners for this purpose. The III Marine Expeditionary Force believes the training at the five ranges is comparable to that available on Okinawa and at other ranges in the United States. At the time of our review, the Marine Corps had successfully completed one relocated artillery live-fire exercise each at the Kita-Fuji and Yausubetsu ranges. The relocation has had virtually no impact on deployment plans and schedules, according to III Marine Expeditionary Force officials.
In addition to the artillery training relocation, the United States has transferred parachute jump training conducted by the Army’s 1st Battalion, 1st Special Forces Group (Airborne), from the Yomitan auxiliary airfield (which was closed) to the auxiliary airfield on Ie Jima Island, just off the northwest coast of Okinawa. However, special forces soldiers are at increased risk of failing to maintain airborne qualifications because parachute operations training has proven more difficult to complete on Ie Jima Island. About 73 percent of the training jumps scheduled between July 1996 and September 1997 on Ie Jima Island were canceled due to adverse weather at the drop zone; adverse weather at sea, preventing required safety boats from standing by in the event a parachutist landed in the water; and equipment problems that prevented the safety boats from departing their berths. The relocation has not affected operational deployments and schedules, although training deployments have been disrupted. Lastly, the Marine Corps has already ended conditioning hikes for troops on public roads off base and transferred those hikes to roads within U.S. bases. USFJ and Marine Corps Bases, Japan, indicated that this has not cost the United States any money and has had no impact on operational capability, deployment plans and schedules, or training. As requested, we also reviewed the impact of the SACO Final Report recommendations on bomber operations in the Pacific, although bomber operations were not specifically addressed by the SACO report. According to the headquarters of the Air Force, Pacific Air Forces, and 18th Wing, the SACO Final Report recommendations will have no impact on bomber operations in the Pacific. The United States has implemented two noise reduction initiatives at Kadena Air Base and MCAS Futenma called for in the SACO Final Report. Three more noise reduction initiatives are to be implemented after Japan constructs new facilities. Table 3.3 shows the status of the five noise reduction initiatives and U.S. plans for maintaining training and operational capability after their implementation. The United States will encounter few problems from the noise abatement procedures, according to USFJ; Marine Corps Bases, Japan; and the 18th Wing. Commanders at MCAS Futenma and Kadena Air Base retain the right to order nighttime flying operations to maintain aircrew proficiency and meet all training, mission, and safety requirements. In fact, the noise abatement countermeasures have been in effect since March 1996, and commanders at both installations indicated that the procedures have not affected operational capability, deployment plans and schedules, or training. The United States has implemented seven of the eight changes to Status of Forces Agreement procedures called for in the SACO Final Report. Table 3.4 shows the new Status of Forces Agreement procedures. According to USFJ officials, with the exception of affixing number plates to official vehicles, the changes in Status of Forces Agreement procedures cost the United States nothing and had no impact on deployment plans, schedules, and training. The number plates cost about $30,000 according to USFJ officials. 
We recommend that the Secretary of Defense decide on the means to monitor the design, engineering, and construction of the sea-based facility; work with Japan to include a risk-reduction phase in the acquisition schedule to establish that the designed sea-based facility will be affordable and operationally suitable; take steps to ensure that all U.S. concerns, especially the costs of operations and maintenance on the sea-based facility and operational concerns, have been satisfactorily addressed before Japan begins to build the sea-based facility; and request the Japanese government to allocate funds for those projects at Futenma that were cancelled by Japan due to the planned closure of Futenma and are deemed essential to continued operations of the station and the 1st Marine Air Wing until completion of the replacement facility. In written comments on a draft of this report, DOD concurred with GAO’s recommendations and noted that the report effectively outlines the major operational and technical issues involved in realigning, consolidating, and reducing U.S. force presence on Okinawa, as set forth in the SACO process. DOD also noted that the role of Congress will be critical in maintaining the strategic relationship with Japan and therefore the GAO report was timely and welcome. DOD provided technical comments, which we have incorporated in our report where appropriate. The DOD response is printed in its entirety in appendix II. We also provided a copy of our draft report to the Department of State. In oral comments, the Department of State concurred with our report and offered one technical change which we incorporated into the report. It may take a decade or more to fully achieve all of the SACO’s recommendations, but two environmental issues may arise and remain during and after implementation. The first concerns the potential for environmental contamination on U.S. bases scheduled for closure. The second concerns the potential adverse impact on the environment from construction and operation of the sea-based facility. If environmental contamination is found on bases to be closed under the SACO process, cleanup could be expensive. As we noted in chapter 1, the Status of Forces Agreement does not require the United States to return bases in Japan to the condition they were in at the time they were provided to U.S. forces or to compensate Japan for not having done so. Thus, USFJ and Marine Corps Bases, Japan, officials believe that the United States is not obligated to do environmental cleanup at bases to be closed. Nevertheless, a 1995 DOD policy calls for the removal of known imminent and substantial dangers to health and safety due to environmental contamination caused by DOD operations on installations or facilities designated for return to the host nation overseas. Furthermore, if the bases are closed and the land returned to Japan and environmental contamination is subsequently found, redevelopment and reuse efforts planned for some of these facilities could be hampered. In fact, Marine Corps Bases, Japan, and other Okinawa-based U.S. forces were informed by a letter dated August 25, 1997, from the government of Japan’s Naha Defense Facilities Administration Bureau that the toxic substances mercury and polychlorinated biphenyls were found on the Onna communications site. The United States had closed the base and returned the land to Japan in November 1995 (a land return unrelated to the SACO process). 
The letter indicated that the presence of these substances has prevented the land from being returned to its owners and thus being available for reuse. The letter concludes by requesting that the United States conduct a survey, identify any contamination that may exist, and clean up bases scheduled for closure in the future. If the United States agrees to this request, land return under the SACO process could be affected. At the time of our review, the United States had not responded to the letter. If such a survey, sometimes called an environmental baseline survey, is conducted and contamination is found, cleanup could prove expensive. For example, environmental remediation at MCAS Tustin in California is expected to cost more than $53 million when completed. If a survey is conducted and contamination is found, a decision would be needed as to whether the United States or Japan would pay the cost. DOD’s position is that the sea-based facility should be constructed and operated in a manner that preserves and protects the natural resources of Okinawa, including the ocean environment and coral reefs that partially surround the island. Further, the United States and Japan, along with a substantial number of other countries, support an international coral reef initiative aimed at conservation and management of coral reefs and related ecosystems. Coral reefs are in the area in which the sea-based facility is tentatively to be located. However, two sea-based facility options currently under consideration have the potential to harm the coral reefs. The pontoon-type facility requires the installation of a large breakwater and several mooring stations onto the seafloor. The pile-supported facility requires several thousand support pilings that would need to be driven into the coral reef or seafloor and reinforced to withstand storm conditions. Both of these options require at least one, and possibly two, causeways connecting them to shore facilities. Numerous scientific studies show that large construction projects can cause damage to coral reefs and the nearby coastal areas. The government of Japan is evaluating the condition of the coral reef. The environment could also be contaminated through routine operations aboard the sea-based facility. The accidental runoff of cleaning fluids used to wash aircraft or unintentional fuel system leaks could contaminate the nearby ocean environment.
Pursuant to a congressional request, GAO reviewed the contents of the Final Report of the Special Action Committee on Okinawa (SACO), focusing on: (1) the impact on readiness of U.S. forces based on Okinawa after implementation of the report recommendations; (2) the U.S. cost of implementing the recommendations; and (3) the benefit or necessity of having U.S. Marine Corps forces on Okinawa. GAO noted that: (1) the Department of Defense (DOD) believes that Marine Corps forces along with other U.S. forces on Okinawa satisfy the U.S. national security strategy by visibly demonstrating the U.S. commitment to security in the region; (2) these forces are thought to deter aggression, provide a crisis response capability should deterrence fail, and avoid the risk that U.S. allies may interpret the withdrawal of forces as a lessening of U.S. commitment to peace and stability in the region; (3) Okinawa's proximity to potential regional trouble spots promotes the early arrival of U.S. military forces due to shorter transit times and reduces potential problems that could arise due to late arrival; (4) the cost of this presence is shared by the government of Japan, which provides bases and other infrastructure on Okinawa rent-free and pays part of the annual cost of Okinawa-based Marine Corps forces; (5) the SACO Final Report calls on the United States to: (a) return land that includes one base and portions of camps, sites, and training areas on Okinawa to Japan; (b) implement changes to three operational procedures; and (c) implement changes to five noise abatement procedures; (6) the United States has established requirements that Japan must meet as it designs, builds, and pays for the sea-based facility before the Marine Corps Air Station Futenma is closed and operations are moved to the sea-based facility; (7) such a facility has never been built and operated; (8) annual operations and maintenance costs for the sea-based facility were initially estimated at $200 million; (9) the United States requested that the Japanese government pay the cost to maintain the new sea-based facility, but as of the date of this report, it had not agreed to do so; (10) excluding the cost to operate the sea-based facility, the current estimated cost to the United States to implement the SACO land return recommendations is about $193.5 million over about 10 years; (11) the United States and Japan are negotiating an arrangement under which Japan would assume some SACO-related responsibilities consistent with their domestic laws; (12) this arrangement could result in reduced U.S. costs; (13) while final implementation of the SACO recommendations is intended to reduce the burden of U.S. forces' presence in Okinawa, two environmental issues could arise; (14) the first issue concerns the potential for environmental contamination being found on military facilities returned to Japan and responsibility for cleanup of those facilities; and (15) the second issue concerns the potential adverse effects that the construction and operation of the sea-based facility could have on the environment.
The EIC is a refundable tax credit available to low-income, working taxpayers. Congress created the credit in 1975 to offset the impact of Social Security taxes on low-income families and encourage low-income workers to seek employment rather than welfare. The amount of a taxpayer's credit depends on the number of qualifying children who meet age, relationship, and residency tests and on the nature and amount of qualifying income. Taxpayers with children can claim the EIC if they (1) have at least one EIC qualifying child, (2) meet income tests, (3) file with any filing status except "married filing separately," and (4) were not a nonresident alien for any part of the year. To claim the EIC without a qualifying child, taxpayers have to meet requirements 2, 3, and 4, be at least 25 but less than 65 at the end of the year, have lived in the United States for more than half the year, and not be claimed as a dependent on another return. The credit amount gradually increases with increasing income, plateaus at a maximum amount, and then gradually decreases (in a "phase-out range") until it reaches zero when the taxpayer's earned income or AGI exceeds the allowable maximum. Taxpayers with AGI falling in the credit's phase-out range are to receive the lesser amount resulting from using their earned income or AGI in calculating the credit. Recently, Congress made changes to EIC eligibility rules in the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (P.L. 104-193) and the Taxpayer Relief Act of 1997 (P.L. 105-34). These changes, affecting returns filed for tax year 1996 and after, denied the EIC to any taxpayer with investment income over a certain threshold ($2,250 for tax year 1997); defined a "modified AGI," which excludes certain losses from investments and businesses, to be used in calculating the credit; denied the credit to taxpayers without valid Social Security numbers (SSNs); and excluded certain workfare payments from wages for EIC purposes. Table 1 compares the maximum EIC amounts and income limits for tax years 1994 and 1997. IRS checks individual returns, with and without the EIC, for compliance while the return is initially being processed and in the months after filing. Some noncompliance involves mathematical errors and other obvious mistakes made by taxpayers or their representatives in preparing the returns. Other noncompliance involves mistakes that can be detected only through an audit of the return. The easiest EIC mistakes to identify and correct are those that IRS classifies as math errors. These errors, identified as the return is processed, include EIC computation errors and certain qualifying errors (e.g., missing SSNs for taxpayers and their children). For returns filed on paper, staff in IRS' service centers are to enter tax return and Schedule EIC data into computers that check for math errors. If a math error that affects EIC eligibility or the size of the EIC claim is found, IRS is to reduce or deny the EIC accordingly. IRS is then to send a notice to the taxpayer explaining the change to his or her tax liability and refund. Taxpayers have 60 days to protest IRS' actions, either in writing or by telephone, and to provide additional data supporting their original claims. If taxpayers do not respond to IRS' notice, they are to get no further correspondence from IRS about that matter unless they fail to pay any additional tax that was assessed as a result of IRS' change.
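The EIC computation that these math error checks verify follows the phase-in, plateau, and phase-out structure described above, with the phase-out applied so that a taxpayer whose AGI falls in that range receives the lesser of the credit figured on earned income and the credit figured on AGI. A minimal sketch of that piecewise calculation follows; the rates and dollar thresholds are hypothetical placeholders chosen for illustration, not the statutory parameters for any tax year.

```python
# Illustrative sketch of the EIC's phase-in / plateau / phase-out structure.
# The rate and threshold values are hypothetical placeholders, not the
# statutory parameters shown in table 1.
def eic_on(income, phase_in_rate, max_credit, phase_out_start, phase_out_rate):
    credit = min(income * phase_in_rate, max_credit)           # phase-in, then plateau
    if income > phase_out_start:                               # phase-out range
        credit -= (income - phase_out_start) * phase_out_rate
    return max(credit, 0.0)                                    # credit cannot go below zero

def eic_claim(earned_income, agi, **params):
    """In the phase-out range, the taxpayer receives the lesser of the credit
    computed on earned income and the credit computed on AGI."""
    by_earned = eic_on(earned_income, **params)
    if agi > params["phase_out_start"]:
        return min(by_earned, eic_on(agi, **params))
    return by_earned

params = dict(phase_in_rate=0.30, max_credit=2000.0,
              phase_out_start=12000.0, phase_out_rate=0.15)
print(eic_claim(9000.0, 9500.0, **params))    # 2000.0: plateau reached, AGI below phase-out
print(eic_claim(13000.0, 15000.0, **params))  # 1550.0: lesser of 1850.0 (earned) and 1550.0 (AGI)
```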
Returns that taxpayers attempt to submit electronically are subject to a series of computerized “filters” that screen the submission for accuracy and completeness. Submissions with computational mistakes or missing or invalid data are to be rejected. A taxpayer whose electronic submission has been rejected can either correct the mistake(s) and resubmit the electronic return or file the return on paper (with or without the corrections). If filed on paper, the return would be subjected to the math error procedures described in the preceding paragraph. The most serious form of noncompliance involves deliberate attempts to defraud the government through, for example, phony refund claims. IRS’ primary effort to identify fraudulent refund claims, including those involving the EIC, is the Questionable Refund Program (QRP), established in the 1970s and run by IRS’ Criminal Investigation Division. Using a scoring system based on known noncompliance patterns, an IRS computer program analyzes all incoming returns to identify those that are potentially fraudulent. Then, questionable refund detection teams in the 10 service centers are to perform more in-depth reviews and, if a return is considered fraudulent, stop any refund before it is issued. IRS’ examination units in service centers and district offices review other potentially erroneous EIC claims that do not meet the criteria for inclusion in the math error or questionable refund programs. Service center staff review cases that do not require face-to-face contact with the taxpayer. Cases requiring face-to-face contact are done by district offices. Questionable refund detection teams are to refer cases with EIC errors that are not considered fraudulent to the examination units. Examination staff may also review cases included in special enforcement or compliance research projects. When examination staff determine that an EIC claim is erroneous, they are to notify the taxpayer of that finding and advise the taxpayer of his or her appeal rights. If the taxpayer agrees with IRS’ finding or disagrees with the finding but fails to overturn it on appeal, the claimed EIC is to be disallowed or adjusted in accordance with the examiner’s findings. IRS has undertaken a series of EIC compliance studies in recent years. In the first study, IRS sampled returns with EIC claims that had been filed electronically during a 2-week period in January 1994. The results, which could be generalized only to electronic returns filed during that 2-week period, showed that 39 percent of the returns involved overstated EIC claims that represented 26 percent of the dollars claimed. To learn more about EIC compliance, IRS conducted a broader study of tax year 1994 returns filed both electronically and on paper. The results of that study, released in April 1997, are the subject of this report. In 1996, IRS began a third study involving tax year 1995 returns. As of June 1998, IRS had not completed its analysis of the data from that study. All three of these EIC compliance studies predated the SSN-related math error procedures that were first implemented in 1997. However, as noted later, IRS adjusted the findings of its tax year 1994 study to show what the noncompliance rate would have been if those procedures had been in place then. As part of a 5 year EIC compliance initiative begun in fiscal year 1998 and discussed later in this report, IRS plans to measure its progress in reducing the EIC overclaim rate through annual studies of returns filed with an EIC claim. 
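The overclaim rate those studies track is a dollar-weighted share: estimated overclaimed EIC dollars divided by estimated claimed EIC dollars, with sampled returns weighted up to the filing population. A minimal sketch, using invented sample records and weights purely for illustration:

```python
# Illustrative sketch of the dollar-based overclaim rate the compliance
# studies measure. Records are (sampling_weight, eic_claimed, eic_allowed);
# the values below are invented for illustration, not study data.
def overclaim_rate(sample):
    claimed = sum(w * c for w, c, _ in sample)
    overclaimed = sum(w * max(c - a, 0.0) for w, c, a in sample)
    return overclaimed / claimed

sample = [
    (1000, 2000.0, 2000.0),  # claim fully allowed
    (1000, 1800.0, 0.0),     # claim denied in full
    (1000, 1500.0, 1200.0),  # claim reduced
]
print(f"{overclaim_rate(sample):.1%}")  # 39.6% for this made-up sample
```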
According to IRS, the first study of about 2,500 tax year 1997 EIC returns filed from January through May 1998 is designed to provide a baseline measure of the validity of EIC claims and types of EIC errors. IRS’ time line for the study shows that it expects to have a final report prepared by December 31, 1999. The results of subsequent studies are to be compared with that baseline to identify changes in EIC compliance. Our objectives were to (1) evaluate IRS’ tax year 1994 EIC compliance study methodology to determine if the reported results were reasonably accurate, (2) identify the primary sources of EIC noncompliance found in that study, and (3) determine whether recent IRS compliance efforts are designed to address the primary sources of noncompliance. To evaluate IRS’ study methodology and the accuracy of IRS’ compliance study results, we reviewed written documentation on the study’s methodology, reviewed 122 case files, interviewed IRS and Treasury officials involved in the study, reviewed computer programs written by IRS and Treasury’s Office of Tax Analysis (OTA) that were used to create and edit the final dataset, and calculated confidence intervals for the data presented in IRS’ April 1997 report. To assess IRS’ methodology, we determined whether IRS used generally accepted social science standards, which include the use of (1) unbiased sample selection procedures, (2) data collection controls, (3) procedures to ensure quality of data used, and (4) appropriate statistical procedures to generalize the data gathered and analyzed. In doing so, we considered the following questions: Does the study population appear to represent the population of all EIC filers during the period from January 15 through April 21, 1995? Was the sample drawn in accordance with probability selection principles? Were sufficient data verifying compliance with all EIC eligibility requirements collected from the EIC claimant and other sources? Were IRS staff collecting the data knowledgeable of how to apply EIC eligibility rules? Did the data collection procedures include controls to help ensure consistency in the evaluation of cases? Was data entry into the final database verified? Was the database checked for internal consistency, outliers, and invalid codes? How precise were the reported overclaim estimates? We also reviewed available data on IRS’ design of the tax year 1997 EIC compliance study to see how, if at all, that study addressed problems we identified with the tax year 1994 study. To determine the primary sources of EIC noncompliance on tax year 1994 returns, we analyzed the tax year 1994 study dataset as provided by IRS and modified through OTA editing programs. All data are estimates based on the study sample. Accordingly, we calculated confidence intervals at the 95 percent confidence level to indicate the precision of the estimates. Unless otherwise noted, the confidence intervals for percentages are 5 percentage points or less; for other statistics, the intervals are 10 percent or less of the reported value. To determine whether recent IRS compliance efforts addressed the primary sources of noncompliance, we reviewed IRS documents to identify the scope of EIC-related activities and related implementation plans; interviewed officials responsible for designing and implementing EIC-related activities at IRS’ National Office, its Brookhaven, Cincinnati, and Fresno Service Centers, and its Northern California District Office; and obtained available data on the results of EIC programs. 
We did our work from September 1997 through May 1998 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of Internal Revenue and the Secretary of the Treasury, or their designees. The Commissioner of Internal Revenue and Treasury Deputy Assistant Secretary (Tax Analysis) responded in letters dated July 2, 1998, and June 29, 1998, respectively. Their comments are summarized at the end of this letter and are reprinted in appendixes I and II. On July 1, 1998, we met with IRS officials, including the Deputy Chief of Operations, the Acting Assistant Commissioner for Customer Service, and the Assistant Commissioner for Research/Statistics of Income, to discuss the Commissioner’s comments. In addition, IRS and OTA provided technical comments on our draft. We made changes to the report in response to the comments where appropriate. IRS found that of $17.2 billion in EIC claimed during the January 15 to April 21, 1995, study period, taxpayers overclaimed $4.4 billion, or 25.8 percent of total EIC claimed. To determine whether this $4.4 billion overclaim estimate is reasonably accurate, we evaluated IRS’ study methodology. Our evaluation was based on the extent to which IRS used generally accepted social science standards for a research project of this kind. These standards include use of (1) unbiased sample selection procedures, (2) data collection controls, (3) procedures to ensure quality of data used, and (4) appropriate statistical procedures to generalize the data gathered and analyzed. Before discussing our analyses, however, it is important to put IRS’ study findings in perspective. For returns filed with an EIC claim, the tax year 1994 study was designed to evaluate taxpayers’ compliance with each EIC eligibility filing requirement, to produce an overall estimate of EIC amounts claimed in error, and to identify the sources of error. The study was not designed to detect or quantify EIC claims that taxpayers could have made, but did not. For example, the $4.4 billion overclaim estimate includes about $780 million in overclaims associated with errors in applying the AGI tiebreaker rule. That rule provides that if a child meets the conditions to be a qualifying child of more than one person, only the person who had the highest AGI may treat that child as a qualifying child. As the 1994 study was designed, if IRS determined under the AGI tiebreaker rules that a person claiming the EIC was not entitled to it because there was another person in the household with a higher income, IRS would disallow the claim and include it as an overclaim in computing the study results. However, because these overclaims are not offset by any claim that could have been made by the other person involved in the tiebreaker, the ultimate savings to the government could be less than $780 million. With this basic limitation in mind, we found that overall, IRS’ study was designed and conducted in such a way that it produced a reasonably accurate estimate of noncompliance on returns filed with an EIC claim. Although some issues with the study design affected the precision of the results, our analysis showed that these limitations did not affect the study’s major message or its usefulness in designing compliance approaches. IRS’ study is representative of taxpayers filing an EIC claim on a tax year 1994 return filed between January 15 and April 21, 1995. 
We found that IRS used an appropriate statistical sampling procedure to select the 2,046 returns included in the study, and the sample appears to represent taxpayers who filed an EIC claim during that period. As an indicator of whether the EIC study sample was representative of EIC returns filed during the study time frame, we assumed that study returns should have characteristics similar to EIC returns filed during the entire year, as measured in IRS' Statistics of Income (SOI) sample of tax year 1994 returns filed throughout 1995. IRS' April 1997 compliance study report compared weighted data on the distribution of claimants by paper or electronic filing, filing status, number of qualifying children, AGI range, and source of income for the study sample and the full tax year 1994 SOI sample. With the exception of the percentage of EIC claimants reporting self-employment income (6.3 percent among taxpayers in the compliance study compared to 15.3 percent of taxpayers in the SOI sample), weighted compliance study data closely paralleled data based on the SOI sample. The apparent underrepresentation of claimants reporting self-employment income is discussed in more detail later. In evaluating how IRS collected data on the accuracy of taxpayers' EIC claims, we considered the following criteria: (1) whether IRS collected sufficient data to verify each aspect of EIC eligibility, (2) whether field agents and case reviewers knew how EIC rules were to be applied, and (3) whether the study procedures included controls designed to ensure consistency among cases. After reviewing study documentation and selected case files, we concluded that overall, IRS' study methodology met these criteria. IRS' data collection proceeded in two stages: initial taxpayer interviews and supplemental data collection in the field, followed by a final review of complete case files at the Cincinnati Service Center. For each taxpayer in the study, IRS built a case file including transcripts of prior years' returns, the tax year 1994 return, associated information reports (W-2s and 1099s), information on duplicate use of or invalid qualifying child SSNs, and dates of birth for filers and their children. Field agents were given initial case files containing data available at the time, written instructions on the information required to verify the claim, and checksheets to record findings. These instructions and checksheets covered all aspects of EIC eligibility. Field agents were required to contact, in person, the taxpayer; employers; the transmitter of the electronically filed return, if any; and the paid preparer, if any. If additional information was needed to verify the claim, the agents were to contact neighbors, schools, or state agencies as appropriate. This type of face-to-face contact with the taxpayer was necessary to verify the claim because eligibility of qualifying children is self-determined and, other than SSNs, IRS does not have third-party information that can be used to verify the children's eligibility. Data collection by the field agents was followed by a "best and final" review at the Cincinnati Service Center. These reviewers had access to additional information, primarily third-party income reports, that was not available at the time the field agents contacted the taxpayers. Using this additional information, the reviewers made final decisions regarding disposition of the claim.
Most of the taxpayer interviews and other field data collection were done by IRS special agents from the Criminal Investigation Division or revenue agents from the Examination Division. The service center reviewers were Criminal Investigation Division tax examiners. On the basis of our prior work, we consider these staff generally to be adequately trained in audit techniques and how to apply EIC rules. IRS' study methodology included controls designed to ensure consistency among cases. IRS used standard data collection checksheets and written instructions as one means to ensure consistent data collection. In addition, after completing an investigation, the field agents were instructed to call a study coordinator at IRS' Cincinnati Service Center to discuss the case. This study coordinator was to review the findings to ensure completeness and consistency with other cases before the case was sent to the Cincinnati Service Center for final review. In spite of these controls, we found one consistency-related issue, regarding the corrected filing status for married taxpayers who erroneously filed as head of household, that may have systematically affected the study findings. This issue is discussed in more detail later. In reviewing IRS' procedures for ensuring the quality of final study data used, we considered whether IRS (1) verified data entry into the final database and (2) checked for internal consistency, outliers, and invalid codes. We found that OTA staff did a comprehensive review of the database to find and correct internal consistency and data transcription errors. This review of the data was necessary because IRS did not verify data entry or check for internal consistency within case records as the database was created. We verified data entry of key variables for the 122 cases included in our case file review and found no data errors. We used the OTA-corrected data for our analysis. IRS and OTA used the data to estimate the total amount of EIC overclaimed by the population represented by the sample of taxpayers filing EIC claims from January 15 to April 21, 1995. We replicated this analysis and arrived at the same totals. Data from the study are estimates based on the sample of EIC returns. To indicate the precision of these estimates, we calculated confidence intervals at the 95 percent confidence level. For example, as shown in table 2, IRS determined that taxpayers overclaimed a total of $4,448 million in EIC. The 95 percent confidence interval for this estimate is ±$412 million (±9.3 percent of $4,448 million). This indicates that we are 95-percent confident that the actual overclaim amount is between $4,036 million and $4,860 million. The confidence interval for the 25.8 percent overclaim rate ranges from 23.4 percent to 28.2 percent. As shown in table 2, taxpayers with no qualifying children accounted for only a small portion of the EIC dollars claimed for tax year 1994 and $81 million of the overclaim total. The 95 percent confidence interval is $24 million to $138 million. Besides the overclaim estimate in table 2, IRS included in its report an estimate of total underclaims by taxpayers filing a tax year 1994 return with an EIC claim. That estimate was $293 million and has a confidence interval of $129 million to $457 million, or ±56.0 percent of the point estimate. The sample of taxpayers with underclaimed EIC is too small to allow IRS to make reliable estimates by number of qualifying children.
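The interval bounds reported above follow directly from the point estimates and half-widths; the short sketch below reproduces them (figures in millions of dollars; the underclaim half-width of $164 million is inferred from the reported $129 million to $457 million range):

```python
# Reproducing the 95 percent confidence interval bounds quoted above from the
# point estimates and half-widths (all figures in millions of dollars).
def bounds(point_estimate, half_width):
    lower, upper = point_estimate - half_width, point_estimate + half_width
    pct = 100.0 * half_width / point_estimate
    return lower, upper, round(pct, 1)

print(bounds(4_448, 412))  # (4036, 4860, 9.3)  -> total overclaims
print(bounds(293, 164))    # (129, 457, 56.0)   -> total underclaims
```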
Through our review of IRS’ study methodology, we identified two issues that affected the final study results. These issues are (1) apparent underrepresentation in the sample of claimants reporting self-employment income on a Schedule C and (2) apparent inconsistencies in correcting the filing status of married taxpayers who erroneously filed as head of household. Although we were not able to precisely quantify their net impact, our analysis showed that neither of these issues was large enough in scale to alter the major study findings. Filers who report self-employment income on a Schedule C appear to be underrepresented in the tax year 1994 EIC study sample. SOI data for all taxpayers who filed in 1995 show that 15.3 percent of EIC claimants reported Schedule C income. In contrast, 6.3 percent of claimants in the EIC compliance study filed a Schedule C. Self-employment income is often not subject to third-party information reporting; consequently, IRS has found that Schedule C filers in general are more likely to misreport their income than are taxpayers with wage income. A change in income, however, will often result in an incremental change in the EIC rather than a full denial. The impact of underrepresenting Schedule C filers in the tax year 1994 study is unknown and depends on how the filers left out of the sample might differ from those included. IRS’ ongoing compliance study of tax year 1997 EIC returns includes specific sampling of Schedule C returns and will sample through May; that study should be more representative of Schedule C EIC filers. Inconsistencies in determining the correct filing status for married taxpayers who erroneously filed as head of household also may have affected the final overclaim estimate. Taxpayers who use a filing status of married filing separately are ineligible for the EIC. Married taxpayers filing a joint return can claim the EIC if their joint income is within the eligible income range and they meet other qualifying criteria. When a taxpayer erroneously claimed head of household and was living with his or her spouse, the data collection instructions for the tax year 1994 study specifically directed field agents to use the filing status most advantageous to the taxpayer, usually married filing jointly (with appropriate changes to income, dependents, exemptions, etc.). We, and OTA staff who also reviewed case files, found instances in which field agents did not follow these instructions. The $4.4 billion overclaim estimate included $631 million accounted for by taxpayers whose filing status was changed by IRS to married filing separately in the absence of qualifying child errors and whose EIC was denied completely. It appears that some of these taxpayers may have been eligible for the EIC had IRS prepared a joint return; and, to the extent that their joint income would have allowed an EIC claim, the $631 million may be overstated. Although this filing status issue reduces the precision of the study findings, particularly in terms of identifying sources of noncompliance, we believe its impact to be relatively minor given the size of the total overclaim estimate. IRS data collection instructions for the tax year 1997 study state that field agents should attempt to obtain a copy of the spouse’s 1997 return to insert into the case file when filing status is changed to married filing jointly or married filing separately. However, the instructions did not specifically state that married filing jointly should be the presumptive filing status. 
The largest source of noncompliance found in the tax year 1994 study relates to the EIC requirements most difficult for IRS to verify—those related to the eligibility of qualifying children. As shown in figure 1, taxpayer returns with qualifying child errors accounted for at least 65 percent of the $4.4 billion in overclaims—56 percent from returns with qualifying child errors only and an additional 9 percent from returns with qualifying child errors made in conjunction with a filing status change to married filing separately. Claiming a child who did not meet residency requirements was the most common qualifying child error, and errors claiming head of household status often occurred with claims for nonqualifying children. Misreported income accounted for another 16 percent of the overclaim total; taxpayers whose filing status was changed to married filing separately, in the absence of qualifying child errors, accounted for most of the remainder. In order for a taxpayer to claim a qualifying child, the following rules applied for tax year 1994:
1. Relationship: The child must have been the taxpayer's son, daughter, adopted child, grandchild, stepchild, or eligible foster child. A foster child is defined as any child cared for as the taxpayer's own.
2. Age: The child must have been under age 19, or under age 24 and a full-time student, or any age and permanently and totally disabled.
3. Residence: The child must have lived in the United States with the taxpayer for more than half of the year (or the entire year for foster children).
4. AGI tiebreaker: If a child meets the conditions to be a qualifying child of more than one person, only the person who had the highest AGI may treat that child as a qualifying child. This rule does not apply if the other person is the taxpayer's spouse and they are filing a joint return. For example, if a child meets conditions to be a qualifying child for both a parent and grandparent who share a household and the grandparent has a higher AGI, the grandparent must claim the child. If the grandparent's AGI exceeds the maximum income threshold, neither the parent nor the grandparent may claim the EIC for that child.
As shown in table 3, qualifying child errors were involved in overclaims totaling $3.1 billion. About $1.7 billion of that amount involved qualifying children who did not meet the residency test, either alone or in combination with a failure to meet the relationship test. Failure to apply the AGI tiebreaker rules accounted for an additional $782 million in overclaims. Together, these two types of qualifying child errors accounted for about half of the $4.4 billion overclaim total. As noted earlier, however, IRS' study did not offset overclaims by claims that could have been made by other taxpayers. For example, in AGI tiebreaker cases, it is possible that the taxpayer with the higher AGI might have been able to claim an EIC. It is also possible in residency cases that a taxpayer in the household where the child actually lives could make an EIC claim for the child in question. The extent to which AGI tiebreaker and residency cases involved an EIC claim that could have been made by another taxpayer, but was not, is unknown. Filing status per se does not affect either EIC eligibility or credit amounts, except for married taxpayers filing separately, who are ineligible for the EIC.
As shown in table 4, however, head of household errors occurred on returns accounting for $3.4 billion in overclaims, or about three-quarters of the $4.4 billion in overclaims on all returns. For taxpayers whose filing status was changed to single, qualifying child errors accounted for most of the overclaims. For taxpayers whose filing status was changed to married filing jointly, most of the overclaims were attributed to income errors. Among taxpayers whose EIC was denied because their filing status was changed to married filing separately, about 40 percent of the overclaim amounts were also associated with qualifying child errors. Among all taxpayers who filed as head of household for tax year 1994, regardless of final filing status, male taxpayers had an overclaim rate nearly twice that of female taxpayers. Of $3.2 billion in EIC claims by male head of household filers, $1.7 billion, or about 51 percent, was overclaimed. In contrast, female head of household filers overclaimed $2.0 billion, or about 25 percent, of $8.2 billion in EIC they claimed. Errors in reporting income, with no other eligibility errors, accounted for $708 million in EIC overclaims, or 16 percent of total overclaims. Included in this group are taxpayers who (1) used the correct filing status but misreported their income or (2) were married and erroneously filed as head of household or single and whose filing status was changed to married filing jointly. The filing status error, per se, had no impact on the EIC; however, when IRS changed the filing status to married filing jointly and modified the taxpayers’ returns to include the correct combined income for both parties, the EIC was often reduced or denied completely. These adjustments accounted for about $309 million of the income-related overclaims. In general, about half of EIC claimants use a return preparer rather than completing the return themselves. Using codes developed by OTA, we grouped prepared returns into the following three categories: those prepared by (1) “formal preparers,” which includes attorneys, Certified Public Accountants, national tax preparation companies, and enrolled agents; (2) “IRS preparers,” which includes staff at IRS walk-in sites and at IRS-supported volunteer organizations like Volunteer Income Tax Assistance and Tax Counseling for the Elderly; and (3) “local or informal preparers,” which includes anyone not in the other two categories. The study data show that there was little difference in EIC noncompliance between self-prepared returns and those done by preparers. Both groups had overclaim rates of about 26 percent. A more detailed analysis, however, shows that overclaim rates varied by type of preparer. As shown in figure 2, the rate on returns prepared by local or informal preparers was 31 percent; the overclaim rate on returns prepared by formal preparers was 20 percent. The sample included too few IRS-prepared returns to allow us to make a reliable overclaim estimate for that group. Knowing the extent to which EIC overclaims are due to honest mistakes versus intentional misstatements is important in targeting compliance approaches. If, for example, errors are due to a misunderstanding of EIC rules, taxpayer education and assistance efforts would be warranted. Taxpayers intentionally misclaiming the EIC require different approaches. As part of the tax year 1994 study, IRS made a determination of taxpayer intent. 
Both field agents and Cincinnati Service Center case reviewers were to classify taxpayers’ errors as intentional (e.g., the taxpayer knew that a child did not meet EIC qualifying child tests); or unintentional (e.g., the taxpayer did not understand the eligibility rules or EIC instructions). We found that field agents had not made determinations of intent in about 40 percent of the final overclaim cases. In almost all of these instances, however, Cincinnati reviewers made a determination of intent as part of their best and final review. Based on best and final case data, about one-half of the 4.7 million returns with an EIC overclaim and two-thirds of the total amount overclaimed were considered to be the result of intentional errors. These assessments are judgmental in nature and should not be considered precise measures of intentional and unintentional taxpayer errors. However, the results do indicate that IRS’ compliance efforts should include activities aimed at taxpayers who intentionally misclaim the EIC. Examiners working tax year 1997 compliance study cases are to collect data related to taxpayer intent. The data collection checksheet for that study includes a question asking examiners to decide if errors were due to complexity of the tax form, difficulty understanding the law, a computational error, a potential fraud scheme, or some other reason. This provides more specific choices, particularly for unintentional error, but the determinations of intent will still be judgmental. With new enforcement tools provided by Congress and an increase in funding specifically designated for EIC-related activities, IRS began implementing in fiscal year 1998 a plan that calls for attacking EIC noncompliance through expanded customer service and public outreach, strengthened enforcement, and enhanced research. Together, these activities constitute what we refer to as the “EIC compliance initiative.” Many parts of that initiative are targeted at the major sources of EIC noncompliance discussed in the prior section. However, in reviewing IRS’ efforts for tax year 1997, we identified several implementation issues that could diminish the initiative’s impact. As we have previously testified before Congress, IRS’ ability to reduce EIC noncompliance is limited by the design of the credit. Unlike income transfer programs such as Temporary Assistance for Needy Families and Food Stamps, the EIC is designed to be administered through the tax system rather than through other state or federal agencies. This choice generally should result in lower administrative costs and higher participation rates and emphasizes that the credit is for working taxpayers. The trade-off, however, is higher noncompliance. EIC eligibility, particularly related to qualifying children, is difficult for IRS to verify through traditional enforcement procedures, such as matching return data to third-party information reports. Correctly applying the residency test and AGI tiebreaker rules, for example, often involves understanding complex living arrangements and child custody issues. Organizations that administer programs like Food Stamps are set up to investigate and verify this type of eligibility before payment is made; IRS is not. Thoroughly verifying qualifying child eligibility basically requires IRS to do an audit of the type done in the EIC compliance studies—a costly, time-consuming, and intrusive proposition. 
IRS has designed some compliance efforts to reduce qualifying child noncompliance but cannot fully address a significant root cause—design of the EIC itself. Most of the efforts that make up the EIC compliance initiative had not progressed far enough at the time we completed our audit for us to make any judgment about their effectiveness. IRS plans to measure the overall impact of its compliance initiative on the EIC overclaim rate through annual studies of EIC compliance starting with a baseline study of tax year 1997 returns. However, the 5-year initiative could be into its fourth year before IRS has tax year 1997 and 1998 study data to compare in assessing the initiative's results. That would be too late for IRS to identify and implement meaningful adjustments to the initiative. IRS plans to measure the results of individual programs implemented in 1998, but some of these results will not be available for planning fiscal year 1999 activities. Upon release of IRS' April 1997 report on the results of its tax year 1994 EIC compliance study, the Department of the Treasury announced six legislative proposals directed at reducing EIC noncompliance. Congress included four of the six proposals in the Taxpayer Relief Act of 1997 (TRA97). Specifically, these provisions (1) require paid preparers to fulfill certain due diligence standards when preparing EIC claims for taxpayers; (2) provide that taxpayers who fraudulently claim the EIC can be denied the credit for 10 years, and those who recklessly or intentionally disregard the rules and regulations can be denied the credit for 2 years; (3) provide that taxpayers who are denied the EIC through IRS' deficiency procedures are ineligible to claim the EIC in subsequent years unless they provide evidence of their eligibility through a recertification process; and (4) allow IRS to levy up to 15 percent of unemployment and means-tested public assistance and certain other specified payments. In addition, TRA97 included provisions that (1) give IRS access to the Department of Health and Human Services' (HHS) Federal Case Registry of Child Support Orders, a federal database compiling state information on child support payments that could help IRS identify erroneous EIC claims by noncustodial parents; and (2) require the Social Security Administration (SSA) to collect SSNs of birth parents and provide IRS with information linking the parents' and child's SSNs. Besides the new enforcement tools provided in TRA97, Congress began funding the EIC compliance initiative. For fiscal year 1998, the first year of what is to be a 5-year effort, Congress appropriated $138 million. For the second year (fiscal year 1999), IRS has requested $143 million. Funding over the full 5 years is expected to total $716 million. IRS is using the compliance initiative funds to expand existing EIC-related activities and to initiate several new efforts, including implementation of the TRA97 provisions. The various activities being funded as part of the EIC compliance initiative in fiscal year 1998 fall into three broad categories: (1) customer service and public outreach, (2) enforcement, and (3) compliance research. Primary efforts in each of those categories are listed in table 5. As indicated in table 5 and discussed in more detail below, several components of the EIC compliance initiative are directed at issues that were identified by the tax year 1994 EIC compliance study as major sources of EIC errors.
To the extent that EIC errors, whether they involve qualifying children requirements, filing status, or misreported income, are unintentional and due to a misunderstanding of the rules, IRS’ customer service and outreach efforts may help improve compliance. IRS data show that many taxpayers took advantage of the expanded customer service IRS offered in 1998. For example, IRS expanded telephone access for EIC-related issues to 7 days a week, 24 hours a day. According to IRS data, 95,000 taxpayers called the EIC assistance lines during the times when IRS’ other assistance lines were not available. In addition, IRS provided Saturday walk-in assistance at between 152 and 173 sites from March 7 through April 11, 1998. IRS data show that staff available on these 6 Saturdays helped 2,949 taxpayers prepare their EIC returns and provided 1,032 others with different types of EIC-related assistance. According to IRS, this is in addition to 185,305 EIC taxpayers assisted on weekdays during the filing season. Some choices IRS made in implementing its assistance and outreach efforts in 1998, however, limited the number of persons who might have benefited. For example: IRS did not offer Saturday walk-in assistance until March 7, by which time millions of EIC claims had already been filed. IRS reported that it had received about 7.4 million EIC claims as of February 21, 1998—2 weeks before the first Saturday that walk-in help was available. EIC Awareness and Problem Prevention days were held even later in the filing season. IRS said that it did not offer Saturday service earlier in the year because “prior to receiving the [EIC] appropriation, we had anticipated having Saturday service for only the last six weeks of the filing season” when, according to IRS officials, demand among all filers is generally higher. The date for the EIC Awareness Day was selected so that IRS would have adequate time to publicize and provide for quality service to the public. IRS officials said, in retrospect, it could have been more effective if scheduled earlier. IRS did not advertise the 24-hour availability of telephone assistance for EIC-related issues. IRS informed taxpayers of this service only if they received a notice from IRS about a problem with the EIC claims on their tax returns. IRS officials told us that they did not advertise this service because they thought that it would lead to many non-EIC calls during the hours when other assistance lines were closed. As noted earlier, TRA97 included provisions that allow IRS to deny future EIC claims. These provisions are to be implemented in 1999, based on returns filed in 1998. For example, persons found to have intentionally disregarded the rules and regulations in filing their EIC claims in 1998 can be denied the credit for the following 2 years. IRS attempted to warn taxpayers about the implication of these provisions before they filed their returns in 1998. Those outreach efforts were intended to create a deterrent effect by providing an incentive for intentionally noncompliant taxpayers to file a correct return and for other taxpayers to be sure that they understand the EIC rules before filing. To the extent that result was achieved, the number of EIC errors may have been reduced. Although we have no way of knowing how successful those warnings were in encouraging better compliance, we believe that the chances for success might have been enhanced if IRS had done a better job of publicizing those warnings. 
In that regard, IRS’ income tax return instructions did not alert taxpayers as clearly as they could have about the TRA97 provisions and their implications. The tax year 1997 Form 1040 tax package included the following statement in its general information on “what’s new for 1997”: “Caution: If it is determined that you are not entitled to the EIC you claim, you may not be allowed to take the credit for certain future years. See [Publication] 596 for details.” A reference to this caution was not included later in the package either with the instructions for filling in the EIC line item on the tax return, the EIC worksheets, or the Schedule EIC that taxpayers must submit with their returns to substantiate their EIC claims. Thus, IRS was relying on taxpayers to read the general information in the front of the tax package before preparing their returns and, assuming they did, to order Publication 596 for details. For tax year 1996, about 19 million taxpayers claimed the EIC and IRS distributed about 636,000 copies of Publication 596. We believe that potential EIC claimants would have been more likely to read the relevant information from Publication 596 if it had been included in the Form 1040 instructions, along with statements in the EIC-specific parts of those instructions that clearly alerted taxpayers to the existence of that warning and where to find it. The customer service and outreach efforts discussed above are generally broad based and not targeted to specific sources of EIC noncompliance. In contrast, IRS’ compliance initiative includes several enforcement and research activities that are specifically targeted on issues relating to qualifying children, the head of household filing status, noncompliant return preparers, and misreported income. Qualifying child errors. IRS’ tax year 1994 EIC compliance study showed that qualifying child errors associated with the residency requirement and AGI tiebreaker rules accounted for about half of the $4.4 billion EIC overclaim total and 1.8 million of the 2.3 million returns with a qualifying child error. These errors undoubtedly included both unintentional mistakes and intentional noncompliance and involved a variety of complex living situations. IRS is able to verify some EIC eligibility criteria using tax return or Schedule EIC information and does so through its math error program as returns are submitted. IRS receives few indicators, however, of other problematic eligibility requirements, such as qualifying child residency or the presence of another taxpayer in the household who should be claiming the child. IRS has targeted its enforcement efforts on those compliance problems that can be identified from tax return information or profiles of noncompliant returns and is able to resolve some eligibility issues through correspondence audits. However, the bulk of noncompliance, primarily related to qualifying children, can best be identified through face-to-face audits. One component of the compliance initiative that combines elements of customer outreach and enforcement is targeted on cases where a qualifying child’s SSN is used on more than one tax return for the same tax year. Because a qualifying child can be claimed only once, resolution of these duplicate SSN cases should eliminate EIC claims by taxpayers with whom the child did not reside. For the outreach portion of this effort, IRS identified about 225,000 qualifying child SSNs that had been used by more than one taxpayer on tax year 1996 returns. 
In December 1997, IRS sent taxpayers using these SSNs (about 383,000 taxpayers) a notice informing them of the problem and reminding them to file a correct return for tax year 1997. To evaluate the effectiveness of these notices, IRS plans to check for duplicate use of these qualifying child SSNs on tax year 1997 returns. According to IRS, it plans to begin its evaluation in September 1998 and report the results in February 1999. For the compliance portion of this effort, IRS allocated additional staff to audit as many as 140,000 taxpayers who had used about 92,000 duplicate qualifying child SSNs in both tax years 1995 and 1996. According to IRS officials, as of May 16, 1998, about 103,000 of the 140,000 taxpayers had filed tax year 1997 returns, and IRS had frozen their refunds. Also as of May 16, 1998, however, IRS had released 49,000 of the refunds for taxpayers who had corresponded with IRS but whose conflicting claims for the child(ren) in question were not resolved. In discussing the release of these refunds, IRS officials told us that IRS could not process the volume of correspondence received because it (1) did not have enough time to adequately prepare for the start of this project (e.g., get staff assigned, procedures developed, and training done); and (2) had underestimated the volume of taxpayer contacts it would receive. Although IRS is continuing to investigate these cases, its effectiveness in protecting the revenue has been compromised because it is more difficult (and more costly) to recoup an erroneous refund once it has been released. IRS officials told us that meaningful data on the results of this effort would not be available until September 1998. Another way that IRS attempts to deal with qualifying child errors is to deny EIC claims when the taxpayer has failed to provide valid SSNs for the listed children. This effort, which is part of IRS’ math error program, began before the compliance initiative and has continued as part of the initiative. As of June 4, 1998, IRS had sent about 535,000 EIC SSN-related math error notices to tax year 1997 filers; at the same point in 1997, IRS had sent about 774,000 such notices to tax year 1996 filers. IRS data for all of tax year 1996 show that it stopped approximately $876 million in erroneous refunds through the EIC SSN math error program. As of March 1998, IRS data show that it stopped about $414 million in tax year 1997 refunds. IRS expected to issue fewer SSN math error notices in 1998 because IRS, before the 1998 filing season, had sent notices to about 600,000 taxpayers with known SSN problems telling them what to do to correct the situation before filing their tax year 1997 returns. TRA97 included provisions giving IRS access to an SSA data file linking parent and child SSNs and a Federal Case Registry of Child Support Orders to be administered by HHS. The Federal Case Registry is to be a compilation of state child support and custody data. Access to both data files is intended to augment IRS’ ability to detect EIC claims for nonqualifying children. Both, however, are still in development, and IRS plans to do a “feasibility analysis” regarding their use. However, it will be several years before IRS will be able to use these data. Access, in terms of the specific data fields IRS can obtain, is still a major issue to be resolved among the three agencies.
In addition to access issues, IRS’ feasibility analysis is to include an assessment of data accuracy, currency, and completeness—factors that will be especially important for the custodial data to be useful. Filing status errors. IRS’ tax year 1994 study showed that a large proportion of qualifying child errors occurred in tandem with erroneous claims of head of household status. One of the components of the EIC compliance initiative involves increased staffing to expand a project aimed at a universe of about 345,000 head of household EIC claimants whose returns contain other indicators of potential qualifying child problems. This project was initiated in 1997 with audits of about 53,000 returns and expanded in 1998 to 313,000 returns. As of March 1998, about 50,700 of the 53,000 audits begun in 1997 had been closed; and about 43,400 of those closures (86 percent) resulted in tax changes totaling about $107 million. On the basis of those results, IRS expects that about 85 to 90 percent of the 1998 audits will result in a change to the EIC claim. According to IRS, results of these audits will not be available until late 1998 or early 1999. IRS officials estimated that about 25 percent of the 313,000 audits will be completed by September 30, 1998. Errors involving misreported income. Misreported income accounted for about 16 percent of the total EIC overclaims identified in IRS’ tax year 1994 EIC compliance study. Many of IRS’ traditional compliance activities are designed to identify returns with misreported income. For example, EIC returns are subject to IRS’ document matching program, which compares W-2 wage reports and other income information reports (e.g., those filed on Form 1099) with income reported on tax returns. Because misreported income is of particular concern within that segment of the population that reports self-employment income on Schedule C, the EIC compliance initiative includes a study of noncompliance among EIC claimants who report self-employment income. IRS selected a sample of tax year 1997 returns, held the refunds, and plans to complete the audits by September 1998. IRS plans to issue a report of its findings in February 1999. Paid preparer noncompliance. The tax year 1994 study data showed that returns prepared by local or informal preparers had a higher overclaim rate (31 percent) than the returns prepared by formal preparers (20 percent). To address preparer noncompliance, TRA97 imposed due diligence requirements on paid preparers who complete EIC returns and fines for preparers who fail to comply with those requirements. In December 1997, IRS issued specific due diligence requirements and publicized these requirements in mailings to practitioners. IRS did not institute at a national level specific procedures to monitor compliance with the due diligence requirements during the 1998 filing season. However, individual field offices may have done some monitoring. At the Northern California District Office, for example, we were told that staff phoned about 560 preparers in the district to inform them of the due diligence requirements and inquired into conformity with those requirements as part of the district’s normal monitoring visits to about 100 preparers. 
IRS informed us that national-level plans for the 1999 filing season include due diligence monitoring visits to EIC return preparers, but IRS has not decided on the procedures for these visits, the number of visits, or the extent to which they will target those preparers most likely to be noncompliant (i.e., local or informal preparers). As part of the EIC compliance initiative, IRS also planned to increase district office Criminal Investigation staffing in fiscal year 1998 to investigate potential EIC fraud cases, including cases involving return preparers. The increased staffing was to include a total of 40 special agents and 10 investigative aides. For fiscal year 1998 as of May 31, 1998, 31 paid preparer cases have been opened compared to 44 for all of fiscal year 1997. Fraud detection. IRS’ QRP is aimed at identifying tax returns with potentially fraudulent refund claims. The scoring system used to identify these returns is based on known characteristics of potentially fraudulent returns. As part of the compliance initiative, IRS expanded QRP staffing to allow screening of 1.3 million more returns in fiscal year 1998 than in fiscal year 1997, for a total of 4 million returns. According to IRS, as of April 30, 1998, QRP teams had scanned about 2.3 million potentially fraudulent EIC returns and had identified 6,476 returns with erroneous EIC claims totaling $17.6 million. As is evident from our discussion of the various elements of the EIC compliance initiative, there was little information available at the time we completed our audit work on the results of IRS’ efforts and thus little basis for us or IRS to assess their effectiveness. Such data and assessments are crucial as IRS decides on the compliance initiative’s future direction. An obvious question one would ask in assessing IRS’ results is “how much has the EIC overclaim rate changed since the start of the initiative?” Although the results of the tax year 1994 EIC compliance study were the catalyst behind congressional funding of the compliance initiative, IRS does not plan to use those results as the baseline for measuring the initiative’s impact on the EIC overclaim rate. Instead, it plans to measure the initiative’s impact against the results of a tax year 1997 compliance study, which IRS has begun as part of the initiative. However, by the time IRS completes the tax year 1997 study, which is to become the baseline, and a tax year 1998 study that can be compared with the baseline to measure change, IRS will be in the fourth year of its 5-year initiative. IRS’ time frame for the tax year 1997 baseline study shows that the analysis will not be completed until fiscal year 2000. If the tax year 1998 study follows the same schedule, its results will not be available until fiscal year 2001—the fourth year of the initiative. It will be too late at that point to make substantive changes to the initiative. Given the time frames associated with the broad compliance studies, it is important that IRS closely monitor the results of the initiative’s individual components so that it can make more timely and better informed decisions about revising, deleting, or expanding those various components. For example, information on the results of the notices IRS sent users of duplicate SSNs in December 1997 would be useful in deciding whether to send similar notices in December 1998. As noted earlier, however, IRS does not plan to begin such an assessment until September 1998 and does not expect to have final results until well after December 1998. 
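The duplicate-SSN screening underlying this effort can be illustrated with a minimal sketch. The following Python fragment uses hypothetical record fields rather than any actual IRS system layout; it shows only the basic grouping step, in which returns are grouped by qualifying child SSN and tax year and any SSN claimed on more than one return is flagged for follow-up.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def flag_duplicate_qualifying_children(returns: List[dict]) -> Dict[Tuple[int, str], List[str]]:
    """Group EIC returns by qualifying-child SSN and flag any SSN claimed on
    more than one return for the same tax year (field names are illustrative)."""
    claims = defaultdict(set)
    for r in returns:
        for child_ssn in r["qualifying_child_ssns"]:
            claims[(r["tax_year"], child_ssn)].add(r["taxpayer_tin"])
    # A qualifying child may be claimed on only one return, so any SSN
    # appearing on two or more returns needs follow-up with the filers.
    return {key: sorted(tins) for key, tins in claims.items() if len(tins) > 1}

# Two filers claim the same child for tax year 1997; the third claim is unique.
sample = [
    {"taxpayer_tin": "100-00-0001", "tax_year": 1997, "qualifying_child_ssns": ["900-00-0001"]},
    {"taxpayer_tin": "100-00-0002", "tax_year": 1997, "qualifying_child_ssns": ["900-00-0001"]},
    {"taxpayer_tin": "100-00-0003", "tax_year": 1997, "qualifying_child_ssns": ["900-00-0002"]},
]
print(flag_duplicate_qualifying_children(sample))
# {(1997, '900-00-0001'): ['100-00-0001', '100-00-0002']}
```

As the preceding discussion makes clear, identifying the duplicate use is the straightforward part; resolving which filer is entitled to claim the child requires correspondence or an audit.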
Although minor methodological problems in IRS’ tax year 1994 EIC compliance study could have led to some over- or understatement of total EIC overclaims, these issues do not affect the relevance of the study’s findings. The study demonstrates that EIC noncompliance is a significant issue and that verifying qualifying child eligibility lies at the heart of EIC compliance problems. Targeting compliance efforts at qualifying child errors, however, presents IRS with a major challenge. IRS is not set up to systematically verify qualifying child eligibility. Doing so would basically require IRS to establish a process to verify eligibility before issuing a refund, similar to the processes used in EIC compliance audits. IRS’ EIC compliance initiative includes a broad array of customer service, enforcement, and research activities aimed at reducing noncompliance. Some parts, like special audits of head of household claimants and preparer due diligence requirements, are targeted specifically at areas of noncompliance identified in the tax year 1994 study. Others, like expanded walk-in and telephone assistance, are more broadly based efforts aimed at improving taxpayers’ understanding of EIC rules. Although it is too early to judge the initiative’s effect on noncompliance, we did identify some opportunities for IRS to improve future implementation efforts. For example, IRS did not offer Saturday walk-in assistance until late in the filing season when millions of EIC claims had already been filed. Also, for the TRA97 provisions allowing IRS to deny future EIC claims to act as a deterrent, taxpayers must be aware of the circumstances under which these penalties will be applied. IRS’ income tax return instructions, however, did not alert taxpayers as clearly as they could have about these provisions and their implications. IRS plans to do annual studies to measure the impact of its EIC compliance initiative on the overclaim rate. Based on IRS’ time frame for these studies, however, useful data on impact will not be available until fiscal year 2001—the fourth year of the initiative. It will be too late at that point to make substantive changes. Absent timely data on the overall impact of IRS’ efforts and given the need for IRS to ensure that available resources are used as effectively and efficiently as possible, it is important that IRS have immediate information on the results of individual components of that initiative. Evaluation plans that are not timed to provide data when data are most needed, as appears to be the case with IRS’ planned evaluation of the notices on duplicate SSNs, are of limited value. We recommend that the Commissioner of Internal Revenue ensure that customer service efforts aimed at EIC claimants be available earlier in the filing season when most EIC claims are filed, and include prominent information regarding the 2-year and 10-year sanctions and the recertification process in the Form 1040 EIC instructions and Schedule EIC. In addition, to provide a basis for continually improving and refocusing EIC compliance efforts, we recommend that the Commissioner develop evaluation plans for each compliance initiative component that will provide, in succeeding years of the initiative, timely data for decisionmakers. We requested comments on a draft of this report from the Commissioner of Internal Revenue and the Secretary of the Treasury, or their designees. The Commissioner of Internal Revenue responded in a July 2, 1998, letter generally agreeing with our recommendations (see app. 
I). On July 1, 1998, we met with IRS officials, including the Deputy Chief of Operations, the Acting Assistant Commissioner for Customer Service, and the Assistant Commissioner for Research/Statistics of Income, to discuss the Commissioner’s comments. Treasury’s Deputy Assistant Secretary (Tax Analysis) responded in a June 29, 1998, letter (see app. II). In response to our recommendation that IRS provide customer service efforts earlier in the filing season, IRS said that it plans to publicize EIC awareness events early in the 1999 filing season and to hold EIC awareness activities beginning in January 1999. IRS officials told us that (1) Saturday service at walk-in assistance sites during the 1999 filing season will begin on January 16 and continue through the filing season, and (2) the first 6 Saturdays will be publicized as EIC help days. These actions, if effectively implemented, will be responsive to our recommendation. In response to our second recommendation about more prominently displaying information on the 2-year and 10-year sanctions and the recertification process, IRS said that it will include such information in the tax year 1998 Schedule EIC instructions but will not revise the schedule itself. IRS said that it did not believe the Schedule EIC should be revised to address these issues because the issues do not affect the majority of filers and providing the information on the schedule may confuse filers who have not had their EIC claim disallowed. According to IRS officials, taxpayers must go to the worksheet in the instructions to complete the schedule, and IRS’ intent is to place the information so that persons using the worksheet will easily see it. Although inclusion of the information in the Schedule EIC instructions is an improvement, we continue to believe that something should also be added to the schedule. Because one of the purposes of this information is to alert potential EIC claimants to possible repercussions if they make erroneous claims, the information affects all filers. Also, although it is true that taxpayers who choose to compute their own EIC have to use the worksheet in the instructions, taxpayers who choose to have their returns prepared by someone else do not have to use the worksheet and thus would see only the Schedule EIC. We are not suggesting that all of the information on sanctions and recertification be included on the schedule. What we are suggesting is that a brief, but prominent, cautionary statement be added to the schedule alerting users to important information in the instructions that they should read before filing their returns. Regarding our final recommendation, IRS said that it understood our concern regarding more timely delivery of research data for decisionmaking. According to IRS, it has developed an information delivery strategy that includes developing information systems that will allow more timely delivery of both interim and final tax return, audit, and research data. The strategy includes using interim reports to disseminate preliminary findings from various EIC projects. For example, IRS officials said that they hope to have, in October 1998, some preliminary findings from audits of taxpayers who had used duplicate qualifying child SSNs. IRS also noted that using interim data of this sort has limitations; it may not be adequate to measure revenue or provide a full understanding of taxpayer behavior. 
Although there are certain limitations associated with interim data, we believe, as IRS recognized in its comments, that such data can be of value to decisionmakers. Treasury’s letter addressed two statements in the draft report—our characterization of the EIC as an income transfer program and our statement that IRS cannot address a significant root cause of noncompliance. In response to our statement comparing the EIC to other income transfer programs, Treasury said that “unlike income transfer programs, the [EIC] makes work pay by reducing tax liabilities,” and that about 80 percent of the EIC’s total costs offset individual income, Social Security, and other federal taxes. We clarified our reference to income transfer programs where appropriate. In response to our statement in a draft of this report that IRS cannot address a significant root cause of noncompliance (IRS’ difficulty verifying qualifying child eligibility), Treasury said that issues of verifying family relationships and living arrangements are not unique to the EIC but also affect taxpayers’ eligibility for dependency exemptions, filing status, the child credit, and the child and dependent care tax credit. Also, both IRS and Treasury said that they were hopeful that access to new data (an HHS registry of child support orders and SSA data linking parent and child SSNs) will allow IRS to detect some qualifying child problems during return processing. We modified our statement and related discussions in the report to acknowledge IRS’ ability to identify some noncompliance related to qualifying children. Our report also recognizes the provision in TRA97 giving access to the HHS and SSA databases. However, IRS told us that it will not be testing use of these databases until late 1999 or 2000 and that the amount of information that can be initially expected is small. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Ranking Minority Member, Committee on Ways and Means; the Chairmen and Ranking Minority Members of other interested congressional committees; the Secretary of the Treasury; the Commissioner of Internal Revenue; and other interested parties. Major contributors to this report are listed in appendix III. If you or your staffs have any questions, please call me on (202) 512-9110. Earned Income Credit: Noncompliance Relative to Other Components of the Income Tax Gap (GAO/GGD-97-120R, June 13, 1997). Earned Income Credit: Claimants’ Credit Participation and Income Patterns, Tax Years 1990 Through 1994 (GAO/GGD-97-69, May 16, 1997). Tax Administration: Earned Income Credit Noncompliance (GAO/T-GGD-97-105, May 8, 1997). Earned Income Credit: IRS’ 1995 Controls Stopped Some Noncompliance, But Not Without Problems (GAO/GGD-96-172, Sept. 18, 1996). Earned Income Credit: Profile of Tax Year 1994 Credit Recipients (GAO/GGD-96-122BR, June 13, 1996). Earned Income Credit: Noncompliance and Potential Eligibility Revisions (GAO/T-GGD-95-179, June 8, 1995). Earned Income Credit: Targeting to the Working Poor (GAO/GGD-95-122BR, Mar. 31, 1995). Earned Income Credit: Targeting to the Working Poor (GAO/T-GGD-95-136, Apr. 4, 1995). Tax Administration: Earned Income Credit—Data on Noncompliance and Illegal Alien Recipients (GAO/GGD-95-27, Oct. 25, 1994). Tax Policy: Earned Income Tax Credit: Design and Administration Could Be Improved (GAO/GGD-93-145, Sept. 24, 1993). 
Pursuant to a congressional request, GAO reviewed the Internal Revenue Service's (IRS) 1994 Earned Income Credit (EIC) compliance study, focusing on: (1) evaluating IRS' study methodology to determine if the reported results were reasonably accurate; (2) identifying the primary sources of EIC noncompliance found in the study; and (3) determining whether recent IRS compliance efforts are designed to address the primary sources of noncompliance. GAO noted that: (1) IRS' estimate of $4.4 billion in EIC overclaims has a 95-percent confidence interval of $4 billion to $4.9 billion; (2) GAO's evaluation of the study methodology showed that the estimate is reasonably accurate and representative of EIC claimants filing between January 15 and April 21, 1995; (3) some aspects of the study methodology affected the precision of the results; but, given the scale of the findings, these limitations do not affect the study's message or its usefulness in designing compliance approaches; (4) although it is a reasonable estimate of EIC overclaims, the entire $4.4 billion should not be viewed as a potential savings to the government had IRS somehow been able to prevent or correct all of these errors; (5) for returns filed with an EIC claim, the tax year 1994 study was designed to evaluate taxpayers' compliance with each EIC eligibility filing requirement, to produce an overall estimate of EIC amounts claimed in error, and to identify the sources of these errors; (6) the study was not designed to detect or quantify EIC claims that taxpayers could have made; (7) the largest source of taxpayer error identified by the tax year 1994 study relates to EIC requirements that are difficult for IRS to verify--those related to qualifying children; (8) unlike income transfer programs, the EIC was designed to be administered through the tax system; (9) this choice generally should result in lower administrative costs and higher participation rates and emphasizes that the credit is for working taxpayers; (10) EIC eligibility is difficult for IRS to verify through its traditional enforcement procedures; (11) thoroughly verifying qualifying child eligibility requires IRS to do an audit of the type done in the EIC compliance studies; (12) with new enforcement tools provided by Congress and an increase in funding designated for EIC-related activities, IRS began implementing in fiscal year 1998 a plan that, over a period of 5 years, calls for attacking EIC noncompliance; (13) most of the efforts that make up the EIC compliance initiative had not progressed far enough at the time GAO completed its audit for it to make any judgment about their effectiveness; (14) IRS plans to measure the overall impact of the compliance initiative on the overclaim rate through annual studies of EIC compliance starting with a baseline study of tax year 1997 returns; and (15) IRS plans to measure the results of the individual initiative components implemented in 1998.
Our recent analyses of audit results for federal agencies showed improvement, but continued to show significant weaknesses in federal computer systems that put critical operations and assets at risk of inadvertent or deliberate misuse, financial information at risk of unauthorized modification or destruction, sensitive information at risk of inappropriate disclosure, and critical operations at risk of disruption. The significance of these weaknesses recently led us to conclude that information security was a material weakness in our audit of the federal government’s fiscal year 2003 financial statements. Audits also identified instances of similar types of weaknesses in non-financial systems, which continue to receive increased audit coverage in response to FISMA requirements. Weaknesses continued to be reported in each of the six major areas of general controls—the policies, procedures, and technical controls that apply to all or a large segment of an entity’s information systems and help ensure their proper operation. These six areas are (1) security program management, a principal focus of FISMA, which provides the framework for ensuring that risks are understood and that effective controls are selected and properly implemented; (2) access controls, which ensure that only authorized individuals can read, alter, or delete data; (3) software development and change controls, which ensure that only authorized software programs are implemented; (4) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection; (5) operating systems controls, which protect sensitive programs that support multiple applications from tampering and misuse; and (6) service continuity, also addressed by FISMA, which ensures that computer-dependent operations experience no significant disruptions. To fully understand the significance of the weaknesses we identified, it is necessary to link them to the risks they present to federal operations and assets. Virtually all federal operations are supported by automated systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions and account for their resources without these information assets. Hence, the degree of risk caused by security weaknesses is extremely high. The weaknesses identified place a broad array of federal operations and assets at risk. For example, resources, such as federal payments and collections, could be lost or stolen; computer resources could be used for unauthorized purposes or to launch attacks on other computer systems; sensitive information, such as taxpayer data, social security records, medical records, and proprietary business information, could be inappropriately disclosed, browsed, or copied for purposes of espionage or other types of crime; critical operations, such as those supporting national defense and emergency services, could be disrupted; data could be modified or destroyed for purposes of fraud or disruption; and agency missions could be undermined by embarrassing incidents that result in diminished confidence in their ability to conduct operations and fulfill their fiduciary responsibilities. Congress and the administration have established specific information security requirements in both law and policy to help protect the information and information systems that support these critical operations. On October 30, 2000, Congress passed GISRA, which was signed into law and became effective November 29, 2000, for a period of 2 years. 
GISRA supplemented information security requirements established in the Computer Security Act of 1987, the Paperwork Reduction Act of 1995, and the Clinger-Cohen Act of 1996 and was consistent with existing information security guidance issued by OMB and NIST, as well as audit and best practice guidance issued by GAO. Most importantly, however, GISRA consolidated these separate requirements and guidance into an overall framework for managing information security and established new annual review, independent evaluation, and reporting requirements to help ensure agency implementation and both OMB and congressional oversight. Enacted into law on December 17, 2002, as title III of the E-Government Act of 2002, FISMA permanently authorized and strengthened GISRA’s information security program, evaluation, and reporting requirements. Like GISRA, FISMA assigns specific responsibilities to agency heads, chief information officers (CIO), and IGs. It also assigns responsibilities to OMB, which include developing and overseeing the implementation of policies, principles, standards, and guidelines on information security; and reviewing at least annually, and approving or disapproving, agency information security programs. FISMA continues to delegate OMB responsibilities for national security systems to the Secretary of Defense and the Director of Central Intelligence. Overall, FISMA requires each agency, including agencies with national security systems, to develop, document, and implement an agencywide information security program to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. Specifically, this program is to include periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems; risk-based policies and procedures that cost-effectively reduce information security risks to an acceptable level and ensure that information security is addressed throughout the life cycle of each information system; subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems; security awareness training for agency personnel, including contractors and other users of information systems that support the operations and assets of the agency; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency’s required inventory of major information systems; a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in the information security policies, procedures, and practices of the agency; procedures for detecting, reporting, and responding to security incidents; plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. FISMA also established a requirement that each agency develop, maintain, and annually update an inventory of major information systems (including major national security systems) operated by the agency or under its control. 
This inventory is to include an identification of the interfaces between each system and all other systems or networks, including those not operated by or under the control of the agency. The law also requires an agency’s CIO to designate a senior agency information security officer who, for the agency’s FISMA-prescribed information security responsibilities, shall carry out the CIO’s responsibilities; possess professional qualifications, including training and experience, required to administer the required functions; have information security duties as that official’s primary duty; and head an office with the mission and resources to assist in ensuring agency compliance. Under FISMA, each agency must continue to have an annual independent evaluation of its information security program and practices, including control testing and compliance assessment. Evaluations of non-national-security systems are to be performed by the agency IG or by an independent external auditor, while evaluations related to national security systems are to be performed only by an entity designated by the agency head. FISMA requires each agency to report annually to OMB, selected congressional committees, and the Comptroller General on the adequacy of information security policies, procedures, and practices, and compliance with FISMA’s requirements. In addition, agency heads are required to annually report the results of their independent evaluations to OMB, except that to the extent an evaluation pertains to a national security system, only a summary and assessment of that portion of the evaluation is reported to OMB. OMB is also required to submit a report to the Congress no later than March 1 of each year on agency compliance with FISMA’s requirements, including a summary of findings of agencies’ independent evaluations. FISMA also requires the Comptroller General to periodically evaluate and report to Congress on (1) the adequacy and effectiveness of agency information security policies and practices and (2) implementation of FISMA requirements. Other major FISMA provisions require NIST to develop, for systems other than national security systems, (1) standards to be used by all agencies to categorize all their information and information systems based on the objectives of providing appropriate levels of information security according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements for information and information systems in each category. NIST must also develop a definition of and guidelines concerning detection and handling of information security incidents; and guidelines, developed in conjunction with the Department of Defense and the National Security Agency, for identifying an information system as a national security system. 
The law also assigned other information security functions to NIST, including providing technical assistance to agencies on such elements as compliance with the standards and guidelines and the detection and handling of information security incidents; conducting research, as needed, to determine the nature and extent of information security vulnerabilities and techniques for providing cost-effective information security; developing and periodically revising performance indicators and measures for agency information security policies and practices; evaluating private-sector information security policies and practices and commercially available information technologies to assess potential application by agencies; evaluating security policies and practices developed for national security systems to assess their potential application by agencies; and periodically assessing the effectiveness of and revising, as appropriate, the NIST standards and guidelines developed under FISMA. NIST is required to prepare an annual public report on activities undertaken in the previous year, and planned for the coming year, to carry out its responsibilities under FISMA. On August 6, 2003, OMB issued its fiscal year 2003 FISMA reporting instructions and guidance on quarterly IT security reporting. These instructions, which required agencies to submit their reports to OMB by September 22, 2003, essentially continued many of the reporting requirements established for GISRA, including performance measures introduced for fiscal year 2002 reporting under that law. The instructions also highlighted the more substantive changes introduced by FISMA. For example, OMB emphasized that FISMA applies to both information and information systems used by an agency and by its contractors or other organizations and sources that possess or use federal information or that operate, use, or have access to federal information systems. OMB also underscored that FISMA requires each agency to test and evaluate the effectiveness of the information security policies, procedures, and practices for each system at least annually. OMB’s fiscal year 2003 reporting instructions also emphasized the strong focus on performance measures and formatted these instructions to emphasize a quantitative rather than a narrative response. OMB also required agencies to provide quarterly updates for a key subset of these performance measures, with the first update due December 15, 2003. Measures within this key subset are the numbers of systems that have risk assessments and assigned levels of risk, up-to-date IT security plans, security control costs integrated into their life cycles, security controls tested and evaluated in the last year, and contingency plans tested. Further, OMB provided instructions for continued agency reporting on the status of remediation efforts through plans of action and milestones (POA&M). Required for all programs and systems where an IT security weakness has been found, a POA&M lists the weaknesses and shows estimated resource needs or other challenges to resolving them, key milestones and completion dates, and the status of corrective actions. POA&Ms are to be submitted twice a year. In addition, agencies are to submit quarterly updates that show the number of weaknesses for which corrective action was completed on time (including testing), is ongoing and on track to be completed as originally scheduled, or has been delayed, as well as the number of new weaknesses discovered since the last update. 
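The quarterly POA&M update described above amounts to a status tally over tracked weaknesses. The Python sketch below uses a hypothetical record layout (it does not reproduce OMB’s actual POA&M format) to show how the reported counts of completed, on-track, delayed, and newly discovered weaknesses could be derived.

```python
from dataclasses import dataclass
from datetime import date
from typing import Dict, List, Optional

@dataclass
class Weakness:
    """One POA&M line item; field names here are illustrative only."""
    description: str
    scheduled_completion: date
    completed_on: Optional[date] = None      # None means corrective action is still open
    new_since_last_update: bool = False

def quarterly_update(weaknesses: List[Weakness], as_of: date) -> Dict[str, int]:
    """Tally weaknesses into the categories a quarterly update reports:
    completed on time, ongoing and on track, delayed, and newly discovered."""
    counts = {"completed_on_time": 0, "ongoing_on_track": 0, "delayed": 0, "new": 0}
    for w in weaknesses:
        if w.new_since_last_update:
            counts["new"] += 1
        if w.completed_on is not None and w.completed_on <= w.scheduled_completion:
            counts["completed_on_time"] += 1
        elif w.completed_on is None and as_of <= w.scheduled_completion:
            counts["ongoing_on_track"] += 1
        else:
            counts["delayed"] += 1
    return counts

poam = [
    Weakness("Contingency plan never tested", date(2003, 9, 30), completed_on=date(2003, 9, 15)),
    Weakness("Security plan out of date", date(2004, 3, 31)),
    Weakness("System controls not tested", date(2003, 6, 30)),            # past due and still open
    Weakness("Weak configuration settings", date(2004, 6, 30), new_since_last_update=True),
]
print(quarterly_update(poam, as_of=date(2003, 12, 15)))
# {'completed_on_time': 1, 'ongoing_on_track': 2, 'delayed': 1, 'new': 1}
```

In this simple form, a weakness corrected after its scheduled date counts as delayed, and a newly discovered weakness is counted both as new and in whichever status category applies to it.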
Consistent with last year, OMB’s fiscal year 2003 guidance continued to authorize agencies to release certain information from their POA&Ms to assist the Congress in its oversight responsibilities. Agencies could release this information, as requested, excluding certain elements, such as estimated funding resources and the scheduled completion dates for resolving a weakness. Lastly, as part of IG FISMA reporting, OMB instructed the IGs to respond to essentially the same questions that the agencies were to respond to in their reports. The IG responses were to be based on the results of their independent evaluations, including agency progress in implementing and maintaining their POA&Ms, and any other work performed throughout the reporting period (such as financial statement or other audits). This year, OMB also asked the IGs to assess against specific criteria whether the agency had developed, implemented, and was managing an agencywide POA&M process. OMB noted that this assessment was critical because effective remediation of IT security weaknesses is essential to achieving a mature and sound IT security program and securing information and systems. Further, OMB identified this IG assessment as one of the criteria used in evaluating agencies under the Expanding E-Government Scorecard of the President’s Management Agenda. OMB also instructed the IGs to use the performance measures to assist in evaluating agency officials’ performance. However, it did not request them to validate agency responses to the performance measures. Instead, as part of their independent evaluations of a subset of agency systems, IGs were to assess the reliability of the data for those systems that they evaluated. In its FY 2003 Report to Congress on Federal Government Information Security Management, published this month, OMB concludes that the federal government has made significant strides in identifying and addressing long-standing problems, but that challenging weaknesses remain. Overall, the report discusses the steps taken by OMB and federal agencies to implement FISMA, details progress made in fiscal year 2003, and identifies IT security gaps and weaknesses. The report also presents a plan of action that OMB is pursuing with agencies to close these gaps and improve the security of federal information and systems. This plan is intended to resolve information and security challenges through both management and budgetary processes. OMB’s report discussed four governmentwide findings: 1. Agencies’ Progress Against Governmentwide IT Security Milestones. The President’s fiscal year 2004 budget established three governmentwide goals to be met by the end of calendar year 2003. These goals and the progress reported against them were: Goal 1 — As required by FISMA, all federal agencies are to have created a central remediation process to ensure that program and system-level IT security weaknesses, once identified, are tracked and corrected. In addition, each agency IG is to verify whether the agency has a process in place that meets criteria specified in OMB guidance. Based on IG responses to these criteria, OMB reported that each agency has an IT security remediation process, but that the maturity of these processes varies greatly. In particular, the report noted that for the 24 large agencies, only half have a remediation process verified by their IGs as meeting the necessary criteria. Goal 2 — Eighty percent of federal IT systems are to be certified and accredited. 
OMB reported that many agencies are not adequately prioritizing their IT investments to ensure that significant IT security weaknesses are appropriately addressed. As a result, at the end of 2003, the reported percentage of systems certified and accredited had increased to 62 percent, but was still short of the goal. Related to this goal, the report noted that most security weaknesses can be found in operational systems that either have never been certified and accredited or whose certification and accreditation are out of date. Goal 3 — Eighty percent of the federal government’s fiscal year 2004 major IT investments shall appropriately integrate security into the lifecycle of the investment. OMB reported that agencies have made improvements in integrating security into new IT investments, but that significant problems remain, particularly in ensuring security of existing systems. As an example, the report provided results for the performance measure related to this goal, which showed that at the end of 2003, the percentage of systems that had integrated security into the lifecycle of the investment increased to 78 percent. 2. Agency Progress Against Key IT Security Measures. As the report highlights, because of GISRA and the OMB-developed performance measures, the federal government is now able to measure progress in IT security; and the Congress, OMB, the agencies, and GAO are able to track and monitor agency efforts against those measures. Noting agency progress, the report provides a table comparing results of 24 large federal agencies for key performance measures for fiscal years 2001, 2002, and 2003. However, it also notes that further work is needed, and uses the area of contingency planning as an example, where only 48 percent of the systems had tested contingency plans. A comparison of reported overall results for fiscal years 2002 and 2003 is provided below in table 1. 3. IGs’ Assessment of Agency Plan of Action and Milestones Process. As mentioned in the discussion of goal 1, OMB requested that IGs assess against a set of criteria whether the agency had a robust agencywide plan of action process. OMB reported the overall results of this assessment for the 24 agencies, which showed that 8 had such a process; 4 did, but with improvements needed; 11 did not; and one did not submit a report (DOD). 4. Lack of Clear Accountability for Ensuring Security of Information and Systems. The report emphasizes that even with the strong focus of both GISRA and FISMA on the responsibilities of agency officials regarding security, there continues to be a lack of understanding, and therefore, accountability within the federal government. Issues that continue to be a concern include the following: Agency and IG reports continue to identify the same IT security weaknesses year after year, some of which are seen as repeating material weaknesses. Too many legacy systems continue to operate with serious weaknesses. As a result, there continues to be a failure to adequately prioritize IT funding decisions to ensure that remediation of significant security weaknesses is funded prior to proceeding with new development. In further discussing this finding, the report concludes that these concerns must be addressed through improved accountability, that is, holding agency program officials accountable for ensuring that the systems that support their programs and operations are secure. 
Further, it emphasizes that ensuring the security of an agency’s information and systems is not the responsibility of a single agency official or the agency’s IT security office, but rather a responsibility to be shared among agency officials that support their operations and assets. The report also outlines a plan of action to improve performance that identifies specific steps it will pursue to assist agencies in their IT security activities, promote implementation of law and policy, and track status and progress. These steps are: Prioritizing IT Spending to Resolve IT Security Weaknesses. OMB reports that it used information from agencies’ annual FISMA reports and quarterly POA&M updates in making funding decisions for fiscal year 2004, as well as for fiscal year 2005 to address longer term security weaknesses. For example, agencies with significant information and system security weaknesses were directed to remediate operational systems with weaknesses prior to spending fiscal year 2004 IT development or modernization funds. Further, if additional resources are needed to resolve those weaknesses, agencies are to use those fiscal year 2004 funds originally sought for new development. President’s Management Agenda Scorecard. To “get to green” under the Expanding E-Government Scorecard for IT security, agencies are required to meet the following three criteria: (1) demonstrate consistent progress in remediating IT security weaknesses; (2) attain certification and accreditations for 90 percent of their operational IT systems; and (3) have an IG-assessed and IG-verified agency POA&M process. Fiscal Year 2004 OMB FISMA Guidance. OMB plans to further emphasize performance measurement in next year’s guidance. In particular, its focus will center on three areas: (1) evolving the IT security performance measures to move beyond status reporting to also identify the quality of the work done, such as determining both the number of systems certified and accredited and the quality of certification and accreditation conducted; (2) further targeting of IG efforts to assess the development, implementation, and performance of key IT security processes, such as remediation and intrusion detection and reporting; and (3) providing additional clarity to certain definitions to eliminate interpretation differences within agencies and among agencies and IGs. Threat and Vulnerability Response Process. In response to the increasing number and potential impact of threats and vulnerabilities, OMB will continue to focus on improving the federal government’s incident prevention and management capabilities. Such improvements include an increased emphasis on reducing the impact of worms and viruses through more timely installation of patches for known vulnerabilities, and improved information sharing to rapidly identify and respond to cyber threats and critical vulnerabilities. OMB also notes the critical importance of agency business continuity plans to mitigating the impact of threats and vulnerabilities. Finally, OMB’s March 2004 report to the Congress identifies several other issues, and provides additional summary and agency-specific information. These include the following: As one of the changes or additions introduced by FISMA, a stronger emphasis is placed on configuration management. Specifically, FISMA requires each agency to develop specific system configuration requirements that meet its own needs and ensure compliance with them. 
According to the report, this provision encompasses traditional system configuration management, employing clearly defined system security settings, and maintaining up-to-date patches. Further, adequate ongoing monitoring and maintenance must accompany the establishment of such configuration requirements. Federal funding for IT security increased from $2.7 billion in fiscal year 2002 to $4.2 billion in fiscal year 2003. The report also continues to emphasize that, historically, a review of IT security spending and security results has demonstrated that spending is not a statistically significant factor in determining agency security performance. Rather, the key is effectively incorporating IT security in agency management actions and implementing IT security throughout the lifecycle of a system. The report appendixes provide an overview of the federal government’s IT security program, a summary of performance by 55 small and independent agencies, and individual summaries for each of the 24 large agencies. Overall, fiscal year 2003 data reported by the agencies for a subset of OMB’s performance measures show increasing numbers of systems meeting the requirements represented by these measures. For example, as shown in table 1, the reported percentage of systems authorized for processing following certification and accreditation increased from 47 percent for fiscal year 2002 to 62 percent for fiscal year 2003—an increase of 15 percentage points. In addition, the reported percentage of systems assessed for risk and assigned a level of risk increased by 13 percentage points from 65 percent for fiscal year 2002 to 78 percent for fiscal year 2003. Reported increases for other measures ranged from 4 to 15 percentage points. Figure 1 illustrates the reported overall status of the 24 agencies in meeting these requirements and the increases between fiscal years 2002 and 2003. This subset of performance measures highlights important information security requirements. However, agencies’ FISMA reports also address other specific statutory requirements, regarding such elements as incident response capabilities, information security training, review of agency contractor operations and facilities, and remediation processes. The agency reports, as well as the IGs’ independent evaluations, are intended to address all the FISMA requirements, and it is these reports and evaluations that your subcommittee reviewed in assigning agency grades for your December 2003 computer security report card. The data and other information submitted for fiscal year 2003 FISMA reporting did show overall increases by many agencies for certain measures, but also that wide variances existed among the agencies. As discussed earlier, we did not validate the accuracy of the data reported by the agencies, but did analyze the IGs’ fiscal year 2003 FISMA reports to identify issues related to the accuracy of this information. Also as discussed later, we noted opportunities to improve the usefulness of agency-reported data. Further, in considering FISMA data, it is important to note that as more systems are subject to the certification and accreditation process and periodically tested, it is probable that additional significant weaknesses will be identified; and until all systems have contingency plans that are periodically tested, agencies have limited assurance that they will be able to recover from unexpected events. Summaries of results reported for specific requirements follow. 
As part of the agencywide information security program required for each agency, FISMA mandates that agencies assess the risk and magnitude of the harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of their information and information systems. OMB, through information security policy set forth in its Circular A-130, also requires an assessment of risk as part of a risk-based approach to determining adequate, cost-effective security for a system. As defined in NIST’s current draft revision of its Risk Management Guide for Information Technology Systems, risk management is the process of identifying risk, assessing risk, and taking steps to reduce risk to an acceptable level, where risk is defined as the net negative impact of the exercise of a vulnerability, considering both the probability and the impact of occurrence. Risk assessment is the first step in the risk management process, and organizations use risk assessment to determine the extent of the potential threat and the risk associated with an IT system throughout its systems development life cycle. Our best practices work has also shown that risk assessments are an essential element of risk management and overall security program management, and are an integral part of the management processes of leading organizations. Risk assessments help ensure that the greatest risks have been identified and addressed, increase the understanding of risk, and provide support for needed controls. To measure agencies’ performance in implementing this requirement, OMB mandates that agencies’ FISMA reports provide the number and percentage of systems that have been assessed for risk. Reporting for this measure continued to show overall increases. Specifically, 14 of the 24 agencies reported an increase in the percentage of systems assessed for risk for fiscal year 2003 as compared with fiscal year 2002. Further, as illustrated in figure 2, 12 agencies reported that they had assessed risk for 90 to 100 percent of their systems for fiscal year 2003, and only 4 of the remaining 13 agencies reported that less than half of their systems had been assessed for risk (compared with 8 agencies for fiscal year 2002). FISMA requires that agencywide information security programs include subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate. According to NIST security plan guidance, the purpose of these plans is to (1) provide an overview of the security requirements of the system and describe the controls in place or planned for meeting those requirements, and (2) delineate the responsibilities and expected behavior of all individuals who access the system. OMB Circular A-130 requires that agencies prepare IT system security plans consistent with NIST guidance, and that these plans contain specific elements, including rules of behavior for system use, required training in security responsibilities, personnel controls, technical security techniques and controls, continuity of operations, incident response, and system interconnection. Agencies are also to update security plans as part of the cycle for reaccrediting system processing. As a performance measure for this requirement, OMB requires that agencies report the number and percentage of systems with up-to-date security plans. 
Agency data reported for this measure showed overall increases for fiscal year 2003, with a total of 9 agencies reporting up-to-date security plans for 90 percent or more of their systems compared with 7 agencies for fiscal year 2002. Further, of the remaining 15 agencies, only 5 reported that less than 50 percent of their systems had up-to-date security plans, compared with 9 agencies in 2002. Figure 3 summarizes overall fiscal year 2003 results. As part of its responsibilities under FISMA, OMB is required to develop and oversee the implementation of policies, principles, standards, and guidelines on information security. Included in OMB’s policy for federal information security is a requirement that agency management officials formally authorize their information systems to process information and, thereby, accept the risk associated with their operation. This management authorization (accreditation) is to be supported by a formal technical evaluation (certification) of the management, operational, and technical controls established in an information system’s security plan. NIST is currently in the process of updating its guidance for the certification and accreditation of federal systems (except for national security systems). This guidance is to be used in conjunction with other standards and guidance that FISMA requires NIST to issue—documents that, when completed, are intended to provide a structured yet flexible framework for identifying, employing, and evaluating the security controls in federal information systems. Because OMB considers system certification and accreditation to be such an important information security quality control, for FISMA reporting, it requires agencies to report the number of systems authorized for processing after certification and accreditation. Data reported for this measure showed overall increases for most agencies. For example, 17 agencies reported increases in the percentage of systems authorized compared with their percentages last year. In addition, 7 agencies reported that they had authorized 90 to 100 percent of their systems compared with only 3 agencies last year. However, 11 agencies reported they had authorized less than 50 percent of their systems, but this also indicated some improvement compared with the 13 agencies that reported less than 50 percent last year (which included 3 that reported none). Figure 4 summarizes overall results for the 24 agencies for fiscal year 2003. The results of the IGs’ independent evaluations showed deficiencies in agencies’ system certifications and accreditations, including instances in which certifications and accreditations were not current and controls were not tested. In addition, at the request of the House Committee on Government Reform and your subcommittee, we are currently reviewing federal agencies’ certification and accreditation processes. Preliminary results of our work indicate that the majority of the 24 large agencies reported that they are using NIST or other prescribed guidance for their system certifications and accreditations. However, our reviews of the certification and accreditation of selected systems at selected agencies identified instances where documentation did not show that specific criteria were always met. For example, we noted instances in which systems were accredited even though risk assessments were outdated, contingency plans were incomplete or untested, and control testing was not performed. 
Further, in some cases, documentation did not clearly indicate what residual risk the accrediting official was actually accepting in making the authorization decision. Unless agencies ensure that their certifications and accreditations meet appropriate criteria, the value of this process as a management control for ensuring information system security is limited, and agency reported performance data may not accurately reflect the status of an agency’s efforts to implement this requirement. OMB requires that agencies’ budget submissions specifically identify security costs as part of life-cycle costs for their IT investments and has provided criteria to be considered in determining such costs. OMB also provided these security cost criteria in its FISMA guidance and required agencies to report their IT security spending, including those critical infrastructure protection costs that apply to the protection of government operations and assets. Among other questions related to including security costs in IT investments, OMB requires that the agencies report the number of systems that have security control costs integrated into their system life cycles. Fiscal year 2003 reporting for this measure showed that agencies are increasingly integrating security control costs into the life cycle of their systems. Specifically, 15 agencies reported increases in the number of systems integrating security costs, compared with the number reported last year. Also, as shown in figure 5, 9 agencies reported meeting this measure for 90 to 100 percent of their systems. FISMA requires that agency information security programs include periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency that depends on risk, but no less than annually. This is to include testing of management, operational, and technical controls of every information system identified in the FISMA-required inventory of major systems. Periodically evaluating the effectiveness of security policies and controls and acting to address any identified weaknesses are fundamental activities that allow an organization to manage its information security risks cost- effectively, rather than reacting to individual problems ad hoc only after a violation has been detected or an audit finding has been reported. Further, management control testing and evaluation as part of program reviews is an additional source of information that can be considered along with control testing and evaluation in IG and our audits to help provide a more complete picture of the agencies’ security postures. As a performance measure for this requirement, OMB mandates that agencies report the number of systems for which security controls have been tested and evaluated. Fiscal year 2003 data reported for this measure showed that a total of 15 agencies reported an increase in the overall percentage of systems being tested and evaluated. However, 8 agencies still reported that they had tested the controls of less than 50 percent of their systems (compared with 10 agencies last year) and only 6 of the remaining 16 agencies reported testing and evaluating the controls for 90 percent or more of their systems (compared with 4 agencies last year). Figure 6 shows the overall results for fiscal year 2003. FISMA requires that agencies’ information security programs include plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. 
Contingency plans provide specific instructions for restoring critical systems, including such elements as arrangements for alternative processing facilities, in case usual facilities are significantly damaged or cannot be accessed due to unexpected events such as temporary power failure, accidental loss of files, or major disaster. It is important that these plans be clearly documented, communicated to affected staff, and updated to reflect current operations. The testing of contingency plans is essential to determine whether they will function as intended in an emergency situation, and the frequency of plan testing will vary depending on the criticality of the entity’s operations. The most useful tests involve simulating a disaster situation to test overall service continuity. Such a test would include testing whether the alternative data processing site will function as intended and whether critical computer data and programs recovered from off-site storage are accessible and current. In executing the plan, managers will be able to identify weaknesses and make changes accordingly. Moreover, tests will assess how well employees have been trained to carry out their roles and responsibilities in a disaster situation. To show the status of implementing this requirement, OMB mandates that agencies report the number of systems that have a contingency plan and the number with contingency plans that have been tested. Agencies’ reported fiscal year 2003 data for these measures showed that contingency planning remains a problem area for many agencies. Specifically, a total of 11 agencies report that less than half of their systems have contingency plans and of the remaining 13 agencies, only 6 have contingency plans for 90 to 100 percent of their systems. In addition, a total of 14 agencies reported that they had tested contingency plans for less than half of their systems, including 2 agencies that reported testing none. Figure 7 provides overall results for fiscal year 2003 contingency plan testing. FISMA requires agencies to provide security awareness training to inform personnel, including contractors and other users of information systems that support the operations and assets of the agency, of information security risks associated with their activities, and their responsibilities in complying with agency policies and procedures designed to reduce these risks. In addition, agencies are required to provide training on information security to personnel with significant security responsibilities. Our studies of best practices at leading organizations have shown that such organizations took steps to ensure that personnel involved in various aspects of their information security programs had the skills and knowledge they needed. They also recognized that staff expertise had to be frequently updated to keep abreast of ongoing changes in threats, vulnerabilities, software, security techniques, and security monitoring tools. As performance measures for FISMA training requirements, OMB has the agencies report the number of employees who received IT security training during fiscal year 2003 and the number of employees with significant security responsibilities who received specialized training. Reported fiscal year 2003 data showed that 13 agencies reported that they provided security training to 90 to 100 percent of their employees and contractors compared with 9 agencies for fiscal year 2002. 
Of the remaining 11 agencies, only 3 reported that such training was provided for less than half of their employees/contractors, and 1 provided insufficient data for this measure. For specialized training for employees with significant security responsibilities, reported data showed increases since fiscal year 2002. For example, a total of 7 agencies reported training for 90 to 100 percent of their employees with significant security responsibilities (compared with 5 agencies last year), and of the remaining 17 agencies, only 2 reported providing training to less than half of such employees (compared with 10 for fiscal year 2002). Figure 8 provides overall results for fiscal year 2003. Although even strong controls may not block all intrusions and misuse, organizations can reduce the risks associated with such events if they promptly take steps to detect them before significant damage can be done. Accounting for and analyzing security problems and incidents are also effective ways for an organization to gain a better understanding of threats to its information and of the cost of its security-related problems. Such analyses can also pinpoint vulnerabilities that need to be addressed to help ensure that they will not be exploited again. Problem and incident reports can, therefore, provide valuable input for risk assessments, help in prioritizing security improvement, and be used to illustrate risks and related trends in reports to senior management. FISMA requires that agencies’ information security programs include procedures for detecting, reporting, and responding to security incidents; mitigating risks associated with such incidents before substantial damage is done; and notifying and consulting with the FISMA-required federal information security incident center and other entities, as appropriate, including law enforcement agencies and relevant IGs. OMB information security policy has also required that system security plans ensure a capability to provide help to users when a security incident occurs in the system and to share information concerning common vulnerabilities and threats. In addition, NIST has provided guidance to assist organizations in establishing computer security incident-response capabilities and in handling incidents efficiently and effectively. OMB requires agencies to report several performance measures and other information for FISMA related to detecting, reporting, and responding to security incidents. These include the number of agency components with an incident handling and response capability, whether the agency and its major components share incident information with the Federal Computer Incident Response Center (FedCIRC) in a timely manner, and the numbers of incidents reported. OMB also requires that agencies report on how they confirm that patches have been tested and installed in a timely manner and whether they are a member of FedCIRC’s Patch Authentication and Distribution Capability, which provides agencies with information on trusted, authenticated patches for their specific technologies without charge. Agency-reported data showed that many agencies have established and implemented incident-response capabilities for their components. For example, 17 agencies reported that for fiscal year 2003, 90 percent or more of their components had incident handling and response capabilities (compared to 12 agencies for fiscal year 2002). 
Also, a total of 18 agencies reported that their components report incidents to FedCIRC either themselves or centrally through one group. A total of 22 agencies reported that they confirm patches have been tested and installed in a timely manner. In contrast, of the 23 IGs that reported, 11 responded that the agency confirmed that patches have been tested and installed in a timely manner; 5 that the agency did but not consistently; and 6 that the agency did not (1 other IG did not provide sufficient data). A total of 19 agencies also reported that they were a member of FedCIRC’s Patch Authentication and Distribution Capability. In our September 2003 testimony, we discussed the criticality of the patch management process in helping to alleviate many of the challenges involved in securing computing systems from attack. We also identified common practices for effective patch management found in security- related literature from several groups, including NIST, Microsoft, patch management software vendors, and other computer-security experts. These practices included senior executive support of the process; standardized patch management policies, procedures, and tools; dedicated resources and clearly assigned responsibilities for ensuring that the patch management process is effective; current inventory of all hardware equipment, software packages, services, and other technologies installed and used by the organization; proactive identification of relevant vulnerabilities and patches; assessment of the risk of applying the patch considering the importance of the system to operations, the criticality of the vulnerability, and the likelihood that the patch will disrupt the system; testing each individual patch against various systems configurations in a test environment before installing it enterprisewide to determine any impact on the network; effective patch distribution to all users; and regular monitoring through network and host vulnerability scanning to assess whether patches have been effectively applied. In addition to these practices, we identified several steps to be considered when addressing software vulnerabilities, including: deploying other technologies, such as antivirus software, firewalls, and other network security tools, to provide additional defenses against attacks; employing more rigorous engineering practices in designing, implementing, and testing software products to reduce the number of potential vulnerabilities; improving tools to more effectively and efficiently manage patching; researching and developing technologies to prevent, detect, and recover from attacks as well as to identify their perpetrators, such as more sophisticated firewalls to keep serious attackers out, better intrusion- detection systems that can distinguish serious attacks from nuisance probes and scans, systems that can isolate compromised areas and reconfigure while continuing to operate, and techniques to identify individuals responsible for specific incidents; and ensuring effective, tested contingency planning processes and procedures. Under FISMA, agency heads are responsible for providing information security protections for information collected or maintained by or on behalf of the agency and information systems used or operated by an agency or by a contractor. 
Thus, as OMB emphasized in its fiscal year 2003 FISMA reporting guidance, agency IT security programs apply to all organizations that possess or use federal information or that operate, use, or have access to federal information systems on behalf of a federal agency. Such other organizations may include contractors, grantees, state and local governments, and industry partners. This underscores longstanding OMB policy concerning sharing government information and interconnecting systems: federal security requirements continue to apply and the agency is responsible for ensuring appropriate security controls. As a performance measure for the security of contractor-provided services, OMB had the agencies report the number of contractor facilities or operations reviewed and respond as to whether or not they used appropriate methods (such as audits or inspections and agreed-upon IT security requirements) to ensure that contractor-provided services for their programs and systems are adequately secure and meet the requirements of FISMA, OMB policy and NIST guidelines, national security policy, and agency policy. Fiscal year 2003 data reported for these measures showed that 10 of the 24 agencies reported that they had reviewed 90 to 100 percent of their contractor operations or facilities. Only 2 agencies reported having reviewed less than half of their contractor operations or facilities, and 2 others provided insufficient data for this measure. In addition, 22 agencies reported that they used appropriate methods to ensure that contractor-provided services are adequately secure and meet the requirements of FISMA. Of the remaining two agencies, one reported that it did not use appropriate methods and one reported partial compliance. Although these reported results indicate overall increases from fiscal year 2002, the IGs' evaluations provided different results. For example, although the IG evaluations did not always address these measures, 9 of the 15 IGs that did report showed that less than half of contractor operations or facilities were reviewed. Further, only 12 IGs reported that the agency used appropriate methods to ensure that contractor-provided services are adequately secure and meet the requirements of FISMA, while 7 reported that their agencies did not. FISMA requires that agencies' information security programs include a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in the information security policies, procedures, and practices of the agency. Developing effective corrective action plans is key to ensuring that remedial action is taken to address significant deficiencies. Further, a centralized process for monitoring and managing remedial actions enables the agency to identify trends, root causes, and entitywide solutions. As discussed previously, as part of GISRA implementation, OMB began requiring that agencies report on the status of their remediation efforts through POA&Ms and quarterly updates.
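Conceptually, a POA&M is a tracked list of known weaknesses with owners, milestones, and completion dates that the CIO can roll up each quarter. The sketch below is only a minimal illustration of such centralized tracking; the field names, severity scale, systems, and dates are hypothetical and do not represent OMB's prescribed POA&M format.

```python
# Hedged sketch of centrally tracked POA&M items (hypothetical data and fields).
from dataclasses import dataclass
from datetime import date

@dataclass
class PoamItem:
    system: str
    weakness: str
    severity: int               # assumed scale: 1 (low) to 3 (significant)
    scheduled_completion: date
    status: str                 # "ongoing" or "completed"

poams = [
    PoamItem("grants-db", "contingency plan never tested", 3, date(2004, 3, 31), "ongoing"),
    PoamItem("public-web", "patch level not verified", 2, date(2004, 1, 15), "completed"),
    PoamItem("payroll-app", "audit logging disabled", 3, date(2004, 6, 30), "ongoing"),
]

def quarterly_summary(items, as_of):
    """Roll up open and overdue items and order them by significance."""
    open_items = [i for i in items if i.status != "completed"]
    overdue = [i for i in open_items if i.scheduled_completion < as_of]
    # Most significant weaknesses first, then earliest scheduled completion.
    priority = sorted(open_items, key=lambda i: (-i.severity, i.scheduled_completion))
    return {"open": len(open_items),
            "overdue": len(overdue),
            "priority_order": [f"{i.system}: {i.weakness}" for i in priority]}

print(quarterly_summary(poams, as_of=date(2004, 4, 1)))
```

Keeping every known weakness in one centrally maintained list is what makes the quarterly roll-up, and prioritization of the most significant weaknesses, possible.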
In addition, for fiscal year 2003 FISMA reporting, OMB had agency IGs assess whether the agency had developed, implemented, and was managing an agencywide plan of action and milestone process according to specific criteria, such as whether agency program officials and the CIO develop, implement, and manage POA&Ms for every system that they own and operate (systems that support their programs) that has an IT security weakness; and whether the agency CIO centrally tracks and maintains all POA&M activities on at least a quarterly basis. Overall, the IGs' responses to these criteria showed that many agencies still do not use the POA&M process to manage the correction of their information security weaknesses. For example, as part of monitoring the status of corrective actions, 20 of the 23 IGs that reported responded that the agency CIO tracked POA&M data centrally on at least a quarterly basis, but only 12 reported that the CIO maintained POA&Ms for every system that has an IT weakness. Further, 14 IGs reported that their agency POA&M process did not prioritize IT security weaknesses to ensure that significant weaknesses are addressed in a timely manner and receive appropriate resources. Reported IG responses to these and other criteria are summarized in table 2. Periodic reporting of performance measures tied to FISMA requirements and related analysis can provide valuable information on the status and progress of agency efforts to implement effective security management programs, thereby assisting agency management, OMB, and the Congress in their management and oversight roles. However, several opportunities exist to improve the usefulness of such information as indicators of both governmentwide and agency-specific performance in implementing information security requirements. As discussed earlier, OMB plans to further emphasize performance measurement in next year's FISMA reporting guidance, including evolving measures to identify the quality of work performed, targeting IG efforts to assess key security processes, and clarifying certain definitions. In developing its guidance, OMB can consider how its efforts can help to address the following factors that lessen the usefulness of current performance measurement data: Limited assurance of data reliability and quality. The performance measures reported by the agencies are primarily based on self-assessments and are not independently validated. OMB did not require the IGs to validate agency responses to the performance measures, but did instruct them to assess the reliability of the data for the subset of systems they evaluate as part of their independent evaluations. Although not consistently addressed by all the IGs, some IG evaluations did identify problems with data reliability and quality that could affect agency performance data. For example, for the performance measure on the number of agency systems authorized for processing after certification and accreditation, 6 IGs indicated different results than those reported by their agencies for reasons such as out-of-date certifications and accreditations (systems are to be reaccredited at least every 3 years). Further, other IGs identified problems with the quality of the certifications and accreditations, such as security control reviews not being performed. Accuracy of agency system inventories.
The total number of agency systems is a key element in OMB's performance measures, in that agency progress is indicated by the percentage of total systems that meet specific information security requirements. Thus, inaccurate or incomplete data on the total number of agency systems affects the percentage of systems shown as meeting the requirements. Further, a complete inventory of major information systems is a key element of managing the agency's IT resources, including the security of those resources. As mentioned, FISMA requires that each agency develop, maintain, and annually update an inventory of major information systems operated by the agency or under its control. However, according to their fiscal year 2003 FISMA reports, only 13 of the 24 agencies reported that they had completed their system inventories. Further, independent evaluations by IGs for 3 of these 13 agencies did not agree that system inventories were complete. In addition, although there was little change in the reported total number of systems shown for the 24 agencies (an increase of only 41 systems, from 7,957 systems for fiscal year 2002 to 7,998 systems for fiscal year 2003), large changes in individual agencies' total systems from year to year could make it more difficult to interpret changes in their performance measure results. For example, the total number of systems reported by the Department of Agriculture decreased by 55 percent, from 605 for fiscal year 2002 to 271 for fiscal year 2003, which the department attributed, in large part, to its efforts to develop the FISMA-required inventory of major information systems. At the same time, all of the department's key performance measures increased, with some, such as systems assessed for risk, showing a large increase (from 18 percent for fiscal year 2002 to 72 percent for fiscal year 2003). Limited Department of Defense data. In interpreting overall results for the federal government, it is important to note that reported numbers include only a small sample of the thousands of systems identified by DOD. Citing its size and complexity and the considerable lead time necessary to allow for the collection of specific metrics and the approval process by each service and agency, DOD determined that the collection of a sample of system and network performance metrics would effectively support its emphasis on network-centric operations and complement its overall information assurance security reporting. Obtaining OMB concurrence with this approach, DOD provided performance measurement data on a sample of 378 systems in its fiscal year 2003 FISMA report. As OMB reported in its fiscal year 2003 report to the Congress, DOD reported a total of 3,557 systems for the department—almost half of the combined total systems for the other 23 agencies. OMB also reported that DOD plans to report on all systems for the fiscal year 2004 reporting cycle. As a result, including performance data on all DOD systems for fiscal year 2004 could significantly affect the overall performance measurement results both for DOD and governmentwide. Data reported in aggregate, not according to system risk. Performance measurement data are reported on the total number of agency systems and do not indicate the relative importance or risk of the systems for which FISMA requirements have been met. Reporting information by system risk would provide better information about whether agencies are prioritizing their information security efforts according to risk; a minimal sketch of what such risk-categorized reporting might look like follows.
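The sketch below illustrates the point with invented data: the same performance measure is computed in aggregate and then broken out by impact category. The impact assignments and test results are hypothetical; fiscal year 2003 FISMA reporting did not require this breakout, and the low, moderate, and high categories are those defined in NIST's categorization standards discussed later in this statement.

```python
# Hypothetical data: aggregate versus risk-categorized reporting of one measure.
systems = [
    {"impact": "high",     "contingency_plan_tested": False},
    {"impact": "high",     "contingency_plan_tested": False},
    {"impact": "moderate", "contingency_plan_tested": True},
    {"impact": "moderate", "contingency_plan_tested": True},
    {"impact": "low",      "contingency_plan_tested": True},
    {"impact": "low",      "contingency_plan_tested": True},
]

def percent_tested(items):
    if not items:
        return None
    return round(100 * sum(1 for s in items if s["contingency_plan_tested"]) / len(items))

print("Aggregate:", percent_tested(systems), "percent")        # 67 percent overall
for level in ("high", "moderate", "low"):
    subset = [s for s in systems if s["impact"] == level]
    print(level + ":", percent_tested(subset), "percent")       # but 0 percent of high-impact systems
```

In this toy example the aggregate figure looks respectable even though none of the high-impact systems has a tested plan, which is exactly the ambiguity that aggregate governmentwide numbers leave unresolved.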
For example, the performance measures for fiscal year 2003 show that 48 percent of the total number of systems have tested contingency plans, but do not indicate to what extent these 48 percent include the agencies’ most important systems. Therefore, agencies, the administration, and the Congress cannot be sure that critical federal operations can be restored if an unexpected event disrupts service. As required by FISMA, NIST recently issued its Standards for Security Categorization of Federal Information and Information Systems to provide a common framework and understanding for expressing security that promotes effective management and oversight of information security programs and consistent reporting to OMB and the Congress on the adequacy and effectiveness of information security policies, procedures, and practices. These standards, which are discussed later in greater detail, would require agencies to categorize their information systems according to three levels of potential impact on organizations or individuals—high, moderate, and low—should there be a breach of security. Refinement of performance measures to improve quality of analysis. Refinement of performance measures can provide more useful information about the quality of agency processes. For example, as discussed earlier, GAO and the IGs have noted issues concerning the quality of the certification and accreditation process. Additional information reported on key aspects of certification and accreditation would provide better information to assess whether they were performed consistently. As also discussed earlier, OMB’s fiscal year 2003 FISMA report to the Congress also identified the need to evolve performance measures to provide better quality information. Since FISMA was enacted in December 2002, NIST has taken a number of actions to develop required security-related standards and guidance. These actions include the following: In December 2003 it issued the final version of its Standards for Security Categorization of Federal Information and Information Systems (FIPS Publication 199). NIST was required to submit these categorization standards to the Secretary of Commerce for promulgation no later than 12 months after FISMA was enacted. The standards establish three levels of potential impact on organizational operations, assets, or individuals should a breach of security occur—high (severe or catastrophic), moderate (serious), and low (limited). These standards are intended to provide a common framework and understanding for expressing security that promotes effective management and oversight of information security programs, and consistent reporting to OMB and the Congress on the adequacy and effectiveness of information security policies, procedures, and practices. Also in December 2003, it issued the initial public draft of its Guide for Mapping Types of Information and Information Systems to Security Categories (Special Publication 800-60). Required to be issued 18 months after FISMA enactment, this guidance is to assist agencies in categorizing information and information systems according to impact levels for confidentiality, integrity, and availability as provided in NIST’s security categorization standards (FIPS Publication 199). 
In October 2003 it issued an initial public draft of Recommended Security Controls for Federal Information Systems (Special Publication 800-53) to provide guidelines for selecting and specifying security controls for information systems categorized in accordance with FIPS Publication 199. This draft includes baseline security controls for low and moderate impact information systems, with controls for high impact systems to be provided in subsequent drafts. This publication, when completed, will serve as interim guidance until 2005 (36 months after FISMA enactment), which is the statutory deadline to publish minimum standards for all non-national- security systems. In addition, testing and evaluation procedures used to verify the effectiveness of security controls are to be provided this spring in NIST’s Guide for Verifying the Effectiveness of Security Controls in Federal Information Systems (Special Publication 800-53A). In August 2003 it issued Guideline for Identifying an Information System as a National Security System (Special Publication 800-59). This document provides guidelines developed in conjunction with DOD, including the National Security Agency, to ensure that agencies receive consistent guidance on the identification of systems that should be governed by national security system requirements. Except for national security systems identified by FISMA, the Secretary of Commerce is responsible for prescribing standards and guidelines developed by NIST. DOD and the Director of Central Intelligence have authority to develop policies, guidelines, and standards for national security systems. The Director is also responsible for policies relating to systems processing intelligence information. According to a NIST official, the agency has also made progress in implementing other FISMA requirements. For example, it is continuing to provide consultative services to agencies on FISMA related information security issues and has established a federal agencies security practices Web site to identify, evaluate, and disseminate best practices for critical infrastructure protection and security. In addition, it has established a Web site for the private sector to share nonfederal information security practices. NIST has continued an ongoing dialogue with the National Security Agency and the Committee on National Security Systems to coordinate and take advantage of the security work underway within the federal government. FISMA also requires NIST to prepare an annual public report on activities undertaken in the previous year and planned for the coming year, to carry out its responsibilities. According to a NIST official, this report should be issued this month. In addition to its responsibilities under FISMA, NIST has issued or is developing other information security guidance that supports this law. Along with its guidance on incident handling, building an information security awareness program, and draft guidance on both certification and accreditation and risk management, NIST has also issued Security Metrics Guide for Information Technology Systems and Security Considerations in the Information System Development Life Cycle: Recommendations of the National Institute of Standards and Technology. Current budget constraints may, however, affect NIST’s future work. FISMA established new responsibilities for this agency and authorized an appropriation of $20 million for each fiscal year, 2003 through 2007. 
However, according to NIST, funding for the Computer Security Division, the organization responsible for FISMA activities, was reduced from last year, and this will affect this division's information security and critical infrastructure protection work. In addition to the specific responsibilities to develop standards and guidance under FISMA, other information security activities undertaken by NIST include operating a computer security expert assist team (CSEAT) to assist federal agencies in identifying and resolving IT security problems; conducting security research in areas such as access control, wireless, mobile agents, smart-cards, and quantum computing; improving the security of control systems that manage key elements of the country's critical infrastructure; and performing cyber security product certifications required for government procurements. The Cyber Security Research and Development Act also assigned information security responsibilities to NIST and authorized funding. These responsibilities include providing research grants to institutions of higher education or other research institutions to support short-term research aimed at improving the security of computer systems; growth of emerging technologies associated with the security of networked systems; strategies to improve the security of real-time computing and communications systems for use in process control; and multidisciplinary, long-term, high-risk research on ways to improve the security of computer systems. They also include developing cyber security checklists (and establishing priorities for their development) that set forth settings and option selections that minimize the security risks associated with each computer hardware or software system that is, or is likely to become, widely used within the federal government. In summary, through the continued emphasis on information security by the Congress, the administration, agency management, and the audit community, the federal government has seen improvements in its information security. However, despite the apparent progress shown by increases in key performance measures, most agencies still have not reached the level of performance that demonstrates that they have implemented the agencywide information security program mandated by FISMA. If information security is to continue to improve, agency management must remain committed to these efforts and establish management processes that ensure that requirements are implemented for all their major systems, including new requirements to categorize their systems and incorporate mandatory minimum security controls. Performance measures will continue to be a key tool to both hold agencies accountable and provide a barometer of the overall status of federal information security. For this reason, it is increasingly important that agencies' monitoring, review, and evaluation processes provide the Congress, the administration, and agency management with assurance that these measures accurately reflect agency progress. Opportunities to provide this assurance and improve the usefulness of agencies' performance measurement data include IG validation of reported data, categorization of the data according to system risk levels, and refinement of the measures to provide more information about the quality of agency processes. Achieving significant and sustainable results will likely require agencies to develop programs and processes that prioritize and routinely monitor and manage their information security efforts.
Further, agencies will need to ensure that systems and processes are in place to provide information and facilitate the day-to-day management of information security throughout the agency, as well as to verify the reliability of reported performance information. Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you or members of the subcommittee may have at this time. If you should have any questions about this testimony, please contact me at (202) 512-3317 or Ben Ritt, Assistant Director, at (202) 512-6443. We can also be reached by e-mail at [email protected] and [email protected], respectively. Other individuals making key contributions to this testimony included Larry Crosland, Mark Fostek, Danielle Hollomon, and Barbarol James. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
For many years, GAO has reported on the widespread negative impact of poor information security within federal agencies and has identified it as a governmentwide high-risk issue since 1997. Legislation designed to improve information security was enacted in October 2000. It was strengthened in December 2002 by new legislation, the Federal Information Security Management Act of 2002 (FISMA), which incorporated important new requirements. This testimony discusses (1) the Office of Management and Budget's (OMB) recent report to the Congress required by FISMA on the government's overall information security posture, (2) the reported status of efforts by 24 of the largest agencies to implement federal information security requirements, (3) opportunities for improving the usefulness of performance measurement data, and (4) progress by the National Institute of Standards and Technology (NIST) to develop related standards and guidance. OMB reports significant strides in addressing long-standing problems, but at the same time cites challenging weaknesses that remain. One governmentwide weakness OMB emphasizes is a lack of understanding—and therefore accountability—on the part of agency officials regarding their responsibilities for ensuring the security of information and systems. The report presents a plan of action to close these gaps through both management and budgetary processes. Fiscal year 2003 FISMA data showed that, overall, the 24 federal agencies reported that increasing numbers of their systems met the information security requirements represented by key OMB performance measures. For example, of the total number of systems reported by these agencies, the reported percentage assessed for risk climbed from 65 to 78 percent, those having a contingency plan jumped from 55 to 68 percent, and those authorized for processing following certification and accreditation rose from 47 to 62 percent. However, reported results varied widely among individual agencies, with some reporting that less than half of their systems met certain requirements. Further, GAO noted opportunities to improve the usefulness of reported performance measurement data, including independent validation of these data and completion of system inventories. (Figure: Reported Performance Measurement Data for Selected Information Security Requirements for 24 Large Federal Agencies.) NIST made progress in developing security-related standards and guidance required by FISMA. These include standards to categorize systems according to potential impact in the event of a security breach and recommended controls for such systems. However, according to NIST, current and future funding constraints could threaten its information security work.
Today the Social Security program does not face an immediate crisis but rather a long-range and more fundamental financing problem driven largely by known demographic trends. The lack of an immediate solvency crisis changes the challenge, but it does not eliminate the need for action. Acting sooner rather than later would allow changes to be phased in so the individuals who are most likely to be affected, namely younger and future workers, will have time to adjust their retirement planning while helping to avoid related “expectation gaps.” It is also important to put the overall federal budget on a sustainable footing over the long term, thereby promoting higher economic growth and more fiscal flexibility to finance other priorities. Since there is a great deal of confusion about Social Security’s current financing arrangements and the nature of its long-term financing problem, I’d like to spend some time describing the nature, timing, and extent of the financing problem. Since Social Security will constitute claims on real resources in the future when it redeems assets to pay benefits, taking action now to increase the future pool of resources is important. As Federal Reserve Chairman Greenspan has said, the crucial issue of saving in our economy relates to our ability to build an adequate capital stock to produce enough goods and services in the future to accommodate both retirees and workers in the future. The most direct way we can raise national saving is by increasing government saving. Saving a good portion of the surpluses would allow the federal government to reduce the debt overhang from past deficit spending, provide a strong foundation for future economic growth and enhance future budgetary flexibility. Correspondingly, taking action now on Social Security not only would promote increased budgetary flexibility in the future and stronger economic growth but would also require less dramatic action than if we wait. Perhaps the best way to show this is to compare what it would take to achieve actuarial balance at different points in time. Figure 6 shows this. If we did nothing until 2038—the year the Trust Funds are estimated to be exhausted—achieving actuarial balance would require benefit reductions of 30 percent or a tax increase of 39 percent. As figure 6 shows, earlier action shrinks the size of the necessary adjustment. Thus both sustainability concerns and solvency considerations must drive us to act sooner rather than later. Trust Fund exhaustion may be more than 30 years away, but the squeeze on the federal budget is only 15 years in our future. Actions taken today can ease both these pressures and the pain of future actions. Acting sooner rather than later also provides a more reasonable planning horizon for future retirees. As important as financial stability may be for Social Security, it is not the only consideration. Social Security remains the foundation of the nation’s retirement system. Yet it is more than just a retirement program; it also pays benefits to disabled workers and their dependents, spouses and children of retired workers, and survivors of deceased workers. Last year, Social Security paid almost $408 billion in benefits to more than 45 million people. Since its inception, the program has successfully reduced poverty among the elderly. In 1959, 35 percent of the elderly were poor. In 1999, 8 percent of beneficiaries aged 65 or older were poor, and 48 percent would have been poor without Social Security. 
It is precisely because the program is so deeply woven into the fabric of our nation that any proposed reform must consider the program in its entirety, rather than one aspect alone. Thus, GAO has developed a broad framework for evaluating reform proposals that considers not only solvency but other aspects of the program as well. Arguably, similar frameworks can also be applied to other programs like Medicare. The analytic framework GAO has developed to assess proposals comprises three basic criteria: the extent to which a proposal achieves sustainable solvency and how it would affect the economy and the federal budget; the relative balance struck between the goals of individual equity and income adequacy; and how readily a proposal could be implemented, administered, and explained to the public. The weight that different policymakers may place on different criteria would vary, depending on how they value different attributes. For example, if offering individual choice and control is less important than maintaining replacement rates for low-income workers, then a reform proposal emphasizing adequacy considerations might be preferred. As they fashion a comprehensive proposal, however, policymakers will ultimately have to balance the relative importance they place on each of these criteria. Any reforms to Social Security must ensure that program revenues continue to exceed the cost of benefit payments if the Social Security program is to achieve sustainable solvency. Historically, the program’s solvency has generally been measured over a 75-year projection period. If projected revenues equal projected outlays over this time horizon, then the system is declared in actuarial balance. Unfortunately, this measure is itself unstable. Each year, the 75-year actuarial period changes, and a year with a surplus is replaced by a new 75th year that has a significant deficit. This means that changes that restore solvency only for the 75-year period will not hold. For example, if we were to raise payroll taxes by 1.86 percentage points of taxable payroll today—which, according to the 2001 Trustees Report, is the amount necessary to achieve 75-year balance—the system would be out of balance next year. Reforms that lead to sustainable solvency are those that avoid the automatic need to periodically revisit this issue. As I have already discussed, reducing the relative future burdens of Social Security and health programs is essential to a sustainable budget policy for the longer term. It is also critical if we are to avoid putting unsupportable financial pressures on future workers. Reforming Social Security and health programs is essential to reclaiming our future fiscal flexibility to address other national priorities. The current Social Security system’s benefit structure strikes a balance between the goals of retirement income adequacy and individual equity. From the beginning, benefits were set in a way that focused especially on replacing some portion of workers’ pre-retirement earnings, and over time other changes were made that were intended to enhance the program’s role in helping ensure adequate incomes. Retirement income adequacy, therefore, is addressed in part through the program’s progressive benefit structure, providing proportionately larger benefits to lower earners and certain household types, such as those with dependents. Individual equity refers to the relationship between contributions made and benefits received. 
This can be thought of as the rate of return on individual contributions. Balancing these seemingly conflicting objectives through the political process has resulted in the design of the current Social Security program and should still be taken into account in any proposed reforms. Policymakers could assess income adequacy, for example, by considering the extent to which proposals ensure benefit levels that are adequate to protect beneficiaries from poverty and ensure higher replacement rates for low-income workers. In addition, policymakers could consider the impact of proposed changes on various sub-populations, such as low-income workers, women, minorities, and people with disabilities. Policymakers could assess equity by considering the extent to which there are reasonable returns on contributions at a reasonable level of risk to the individual, improved intergenerational equity, and increased individual choice and control. Differences in how various proposals balance each of these goals will help determine which proposals will be acceptable to policymakers and the public. After I finish this brief overview of our evaluation framework, I would like to come back to this criterion and share some results from our recent report on income adequacy. Program complexity can both make implementation and administration more difficult, and make it harder to explain to the public. Some degree of implementation and administrative complexity arises in virtually all proposed reforms to Social Security, even those that make incremental changes in the already existing structure. However, the greatest potential implementation and administrative challenges are associated with proposals that would create individual accounts. These include, for example, issues concerning the management of the information and money flow needed to maintain such a system, the degree of choice and flexibility individuals would have over investment options and access to their accounts, investment education and transitional efforts, and the mechanisms that would be used to pay out benefits upon retirement. There is also the necessary and complex task of harmonizing any system of individual accounts with the extensive existing regulatory framework governing our nation’s private pension system. In evaluating such proposals, the complexities of meshing these systems would have to be balanced against the opportunity of extending pension participation to millions of uncovered workers. Continued public acceptance and confidence in the Social Security program require that any reforms and their implications for benefits be well understood. This means that the American people must understand what the reforms are, why they are needed, how they are to be implemented and administered, and how they will affect their own retirement income. All reform proposals will require some additional outreach to the public so that future beneficiaries can adjust their retirement planning accordingly. The more transparent the implementation and administration of reform, and the more carefully such reform is phased in, the more likely it will be understood and accepted by the American people. From a practical stand-point, the phase-in of any reform should reflect individual fairness and political feasibility. 
With regard to proposals that involve individual accounts, an essential challenge would be to help the American people understand the relationship between their individual accounts and traditional Social Security benefits, thereby ensuring that we avoid any gap in expectations about current or future benefits. Over the past few years, we have been developing a capacity at GAO to estimate the quantitative effects of Social Security reform on individuals. Such estimates speak directly to applying our adequacy/equity criterion to reform proposals. We have just issued a new report that includes such estimates to illustrate the varying effects of different policy scenarios on individuals. Today, I would like to share our findings regarding what measures can be used to examine income adequacy, defining appropriate benchmarks for assessing the future outlook for individuals’ Social Security benefits, and how varying approaches to reducing benefits could have different effects on adequacy. Our recent report did not, however, present estimates of effects on individual equity. In addition to these points, our report looked at how concern over income adequacy has shaped the Social Security program over the years and how income adequacy has changed over time, especially for different groups of beneficiaries. Various measures help examine different aspects of income adequacy, but no single measure can provide a complete picture. Three examples illustrate the variety of approaches. Dependency rates measure what proportion of the population depends on others for income support or, more specifically, on government income support programs such as Supplemental Security Income (SSI). Such rates reflect one of Social Security’s goals, reducing dependency on public assistance, which was articulated very early in the program’s history. Poverty rates measure what proportion of the population have incomes below the official poverty threshold, which is just one of many adequacy standards used in similar rate calculations. The poverty threshold provides a minimal standard of adequacy; other standards reflect different outlooks on what adequacy means. Earnings replacement rates measure the extent to which retirement income replaces pre-retirement income for particular individuals and thereby helps them maintain a pre-retirement standard of living. When applied to Social Security benefits, this measure reflects the way the benefit formula is designed to replace earnings. For any of these measures, the meaning of a given value of the measure is not clear. For example, what value of a dependency or poverty rate is considered low enough and what replacement rate is considered high enough are quite subjective. Moreover, all of these types of measures depend significantly on what types of income are counted, such as before- or after-tax income or noncash benefits such as Medicare and Medicaid. As a result, the measures are most useful not for their estimated values in isolation but rather for making comparisons, whether over time, across different subpopulations, or across different policy scenarios. In the past, we have pointed out the importance of establishing the proper benchmarks against which reforms must be measured. Often reform proposals are compared to currently promised benefits, but currently promised benefits are not fully financed. It is also necessary to use a benchmark of a fully financed system to fairly evaluate reform proposals. 
To illustrate a full range of possible outcomes, our recent report on income adequacy used hypothetical benchmark scenarios that would restore 75-year solvency either by only increasing payroll taxes or by only reducing benefits. Our tax-increase-only benchmark simulated benefits at currently promised levels while our benefit-reduction-only benchmarks simulated benefits funded at current tax levels. These benchmarks used the program's current benefit structure and the 2001 OASDI Trustees' intermediate, or best-estimate, assumptions. The benefit reductions were phased in between 2005 and 2035 to strike a balance between the size of the incremental reductions each year and the size of the ultimate reduction. At our request, SSA actuaries scored our benchmark policies and determined the parameters for each that would achieve 75-year solvency. For our benefit reduction scenarios, the actuaries determined these parameters assuming that disabled and survivor benefits would be reduced on the same basis as retired worker and dependent benefits. If disabled and survivor benefits were not reduced at all, reductions in retired worker benefits would be deeper than shown in this analysis. Future benefit levels and income adequacy will depend considerably on how any benefit reductions are made. Figure 7 shows the percentage of retired workers with Social Security benefits that fall below the official poverty threshold for various benchmarks. Note that this graph does not show poverty rates, which would require projections of total income; instead, it focuses only on Social Security benefits. The percentage with total incomes below the poverty threshold would be lower if other forms of retirement income were included. The figure shows that the percentage with benefits below the poverty threshold would be greater under a proportional benefit reduction than under a progressive benefit reduction. The proportional benefit-reduction-only benchmark would reduce benefits by the same proportion for all beneficiaries born in the same year. The progressive benefit-reduction-only benchmark would reduce benefits by a smaller proportion for lower earners and a higher proportion for higher earners. The tax-increase-only (no benefit reduction) benchmark estimates are shown for reference. (Figure 7 plots the percent of each cohort with Social Security benefits below poverty, by birth year: 1955, 1970, and 1985, reaching age 62 in 2017, 2032, and 2047, respectively.) Different approaches to reducing benefits would have different effects on income adequacy because their effects would vary with earnings levels. Smaller reductions for lower earners, who are most at risk of poverty, would decrease the chances that their benefits would fall below poverty. Figure 8 illustrates how, under different approaches, benefit reductions would vary by benefit level (which is directly related to earnings). The proportional benchmark would reduce benefits by an identical percentage for all earnings levels. In contrast, the two alternative, progressive benchmarks would reduce benefits less for lower earners than for higher earners. The so-called "limited-proportional" benefit-reduction benchmark would be even more progressive than the progressive benefit-reduction benchmark because a portion of benefits below a certain level is protected from any reductions while reductions above that level are proportional.
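The mechanics of the three reduction approaches are easiest to see with a small numerical sketch. The monthly benefit amounts, reduction percentages, protected amount, and poverty threshold below are hypothetical and chosen only for illustration; they are not the SSA-scored benchmark parameters, and the real progressive benchmarks phase reductions in by formula rather than applying the simple step used here.

```python
# Hedged illustration of proportional, progressive, and limited-proportional cuts.
POVERTY_THRESHOLD = 700  # assumed monthly poverty line, for illustration only

def proportional(benefit, cut=0.17):
    # Same percentage reduction for everyone in the cohort.
    return benefit * (1 - cut)

def progressive(benefit, low_cut=0.08, high_cut=0.25, pivot=1000):
    # Smaller reduction below an assumed pivot benefit level, larger above it
    # (a crude step; the actual benchmarks vary the reduction more smoothly).
    return benefit * (1 - (low_cut if benefit <= pivot else high_cut))

def limited_proportional(benefit, cut=0.22, protected=600):
    # Benefits up to a protected amount are untouched; only the excess is reduced.
    excess = max(benefit - protected, 0)
    return min(benefit, protected) + excess * (1 - cut)

for monthly_benefit in (800, 1200, 2000):  # roughly low, middle, and high earners
    reduced = (proportional(monthly_benefit),
               progressive(monthly_benefit),
               limited_proportional(monthly_benefit))
    flags = ["below poverty" if b < POVERTY_THRESHOLD else "above" for b in reduced]
    print(monthly_benefit, [round(b) for b in reduced], flags)
```

Even in this toy example, the lowest benefit falls below the assumed poverty line only under the proportional cut, which is the qualitative pattern figure 7 shows for the actual benchmarks.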
Moreover, different benefit reduction approaches would have varying effects on different beneficiary groups according to the variation in the typical earnings levels of those subgroups. For example, women, minorities, and never married individuals all tend to have lower lifetime earnings than men, whites, and married individuals, respectively. Therefore, benefit reductions that favor lower earners would help minimize adequacy reductions for such groups that typically have lower earnings. As our report also showed, the effects of some reform options parallel those of benefit reductions made through the benefit formula, and those parallels provide insights into the distributional effects of those reform options. For example, if workers were to retire at a given age, an increase in Social Security’s full retirement age results in a reduction in monthly benefits; moreover, that benefit reduction would be a proportional, not a progressive reduction. Another example would be indexing the benefit formula to prices instead of wages. Such a revision would also be a proportional reduction, in effect, because all earnings levels would be treated the same under such an approach. In addition, indexing the benefit formula to prices would implicitly affect future poverty rates. Since the official poverty threshold increases each year to reflect price increases and benefits would also be indexed to prices, poverty rates would not be expected to change notably, holding all else equal. In contrast, under the current benefit formula, initial benefit levels would grow faster on average than the poverty threshold and poverty rates would fall, assuming that wages increase faster than prices on average, as the Social Security trustees’ report assumes they will. Changes to the Social Security system should be made sooner rather than later—both because earlier action yields the highest fiscal dividends for the federal budget and because it provides a longer period for future beneficiaries to make adjustments in their own planning. The events of September 11 and the need to respond to them do not change this. It remains true that the longer we wait to take action on the programs driving long-term deficits, the more painful and difficult the choices will become. Today I have described GAO’s three basic criteria against which Social Security reform proposals may be measured: financing sustainable solvency, balancing adequacy and equity, and implementing and administering reforms. These may not be the same criteria every analyst would suggest, and certainly how policymakers weight the various elements may vary. But if comprehensive proposals are evaluated as to (1) their financing and economic effects, (2) their effects on individuals, and (3) their feasibility, we will have a good foundation for devising agreeable solutions, perhaps not in every detail, but as an overall reform package that will meet the most important of our objectives. Our recent report on Social Security and income adequacy showed that more progressive approaches to reducing monthly benefits would have a smaller effect on poverty, for example, than less progressive approaches. Also, reductions that protect benefits for survivors, disabled workers, and the very old would help minimize reductions to income adequacy, though they would place other beneficiaries at greater risk of poverty. 
More broadly, the choices the Congress will make to restore Social Security’s long-term solvency and sustainability will critically determine the distributional effects of the program, both within and across generations. In turn, those distributional effects will determine how well Social Security continues to help ensure income adequacy across the population. Still, such adequacy effects then need to be balanced against an assessment of the effects on individual equity. In addition, all adequacy measures depend significantly on what types of income are counted. In particular, noncash benefits such as Medicare play a major role in sustaining standards of living for their beneficiaries. Any examination of income adequacy should acknowledge the major role of noncash benefits and the needs they help support. In finding ways to restore Social Security’s long-term solvency and sustainability, the Congress will address a key question, whether explicitly or implicitly: What purpose does it want Social Security to serve in the future? Is it to minimize the need for means-tested public assistance programs; to minimize poverty, and using what standard of poverty; to replace pre-retirement earnings; to maintain a certain standard of living; or to preserve purchasing power? The answer to this question will help identify which measures of income adequacy are most relevant to examine. It will also help focus how options for reform should be shaped and evaluated. Our work has illustrated how the future outlook depends on both the measures used and the shape of reform. While the Congress must ultimately define Social Security’s purpose, our work has provided tools that inform its deliberations. Still, Social Security is only one part of a much larger picture. Reform proposals should be evaluated as packages that strike a balance among their component parts. Furthermore, Social Security is only one source of income and only one of several programs that help support the standard of living of our retired and disabled populations. All sources of income and all of these programs should be considered together in confronting the demographic challenges we face. In addition to Social Security, employer-sponsored pensions, individual savings, Medicare, employer-provided health benefits, earnings from continued employment, and means-tested programs such as SSI and Medicaid all should be considered, along with any interactions among them. In particular, compared to addressing our long-range health care financing problem, reforming Social Security is easy lifting. We at GAO look forward to continuing to work with this Committee and the Congress in addressing these important issues. Mr. Chairman, members of the Committee, that concludes my statement. I’d be happy to answer any questions you may have. For information regarding this testimony, please contact me at (202) 512-7215. Individuals making key contributions to this testimony include Ken Stockbridge, Charles Jeszeck, Alicia Cackley, Jay McTigue, Linda Baker, and Melissa Wolf. Social Security: Program’s Role in Helping Ensure Income Adequacy (GAO-02-62, Nov. 30, 2001). Social Security Reform: Potential Effects on SSA’s Disability Programs and Beneficiaries (GAO-01-35, Jan. 24, 2001). Social Security Reform: Evaluation of the Nick Smith Proposal (GAO/AIMD/HEHS-00-102R, Feb. 29, 2000). Social Security Reform: Evaluation of the Gramm Proposal (GAO/AIMD/HEHS-00-71R, Feb. 1, 2000). Social Security Reform: Information on the Archer-Shaw Proposal (GAO/AIMD/HEHS-00-56, Jan.
18, 2000). Social Security: The President’s Proposal (GAO/T-HEHS/AIMD-00-43, Nov. 9, 1999). Social Security: Evaluating Reform Proposals (GAO/AIMD/HEHS-00-29, Nov. 4, 1999). Social Security Reform: Implications of Raising the Retirement Age (GAO/HEHS-99-112, Aug. 27, 1999). Social Security: Issues in Comparing Rates of Return With Market Investments (GAO/HEHS-99-110, Aug. 5, 1999). Social Security: Implications of Private Annuities for Individual Accounts (GAO/HEHS-99-160, July 30, 1999). Social Security: Capital Markets and Educational Issues Associated with Individual Accounts (GAO/GGD-99-115, June 28, 1999). Social Security Reform: Administrative Costs for Individual Accounts Depend on System Design (GAO/HEHS-99-131, June 18, 1999). Social Security Reform: Implementation Issues for Individual Accounts (GAO/HEHS-99-122, June 18, 1999). Social Security: Criteria for Evaluating Social Security Reform Proposals (GAO/T-HEHS-99-94, Mar. 25, 1999).
This testimony discusses the long-term viability of the Social Security program. Social Security's Trust Funds will not be exhausted until 2038, but the trustees now project that the program's cash demands on the rest of the federal government will begin much sooner. Aiming for sustainable solvency would increase the chance that future policymakers would not have to face these difficult questions on a recurring basis. GAO has developed the following criteria for evaluating Social Security reform proposals: financing sustainable solvency, balancing adequacy and equity, and implementing and administering reforms. These criteria seek to balance financial and economic considerations with benefit adequacy and equity issues and the administrative challenges associated with various proposals. GAO's recent report on Social Security and income adequacy (GAO-02-62) makes three key points. First, no single measure of adequacy provides a complete picture; each measure reflects a different outlook on what adequacy means. Second, given the projected long-term financial shortfall of the program, it is important to compare proposals to both benefits at currently promised levels and benefits funded at current tax levels. Third, various approaches to benefit reductions would have differing effects on adequacy.
Four components within FDA have primary responsibility for ensuring the safety and effectiveness of generic drugs. Among other activities, these components evaluate the safety and effectiveness of generic drugs prior to marketing, monitor the safety and effectiveness of marketed products, oversee the advertising and promotion of marketed products, formulate regulations and guidance, set research priorities, and communicate information to industry and the public. The Center for Drug Evaluation and Research (CDER) is responsible for overseeing drugs and certain therapeutic biologics. It coordinates federal government efforts to ensure the safety and efficacy of generic drugs as well as new and over-the-counter drugs. The Center for Biologics Evaluation and Research (CBER) is responsible for overseeing generic drug applications for biologics, which are products such as blood, vaccines, and human tissues. These products make up a smaller proportion of the generic drug applications than those reviewed by CDER. The Office of Regulatory Affairs (ORA) is responsible for conducting field activities for all of FDA’s medical product centers, which include CDER and CBER, such as inspections of domestic and foreign establishments involved in manufacturing medical products. FDA headquarters (HQ), specifically the Office of the Commissioner, includes several offices that perform a variety of activities that contribute to indirect costs related to the GDUFA program. FDA HQ provides agency-level shared services; policy, financial, and legal support; and other overhead support that is provided to all FDA programs and activities. Within CDER, the Office of Generic Drugs (OGD) is responsible for providing regulatory oversight and strategic direction for FDA’s generic drug program to expedite the availability of safe, effective, and high-quality generic drugs to patients. These activities include reviewing generic drug applications, which comprises such actions as examining bioequivalence data and evaluating proposed drug labeling. In addition, the Office of Pharmaceutical Quality, also within CDER, is responsible for examining chemistry-related data and providing quality control across all manufacturing sites, whether domestic or foreign, across all drug product areas. FDA begins review of a generic drug when a generic drug applicant submits an Abbreviated New Drug Application (ANDA). Generic drug applications are termed “abbreviated” by FDA because they are generally not required to include preclinical study data (studies involving animals) and clinical trial data (studies involving humans) to establish safety and effectiveness. Because generic drugs must be bioequivalent to a brand-name drug already approved by the FDA, and because animal studies and clinical trials have already been conducted for the brand-name drug, generic drug applicants do not need to repeat these animal or human studies. FDA’s approval of a drug’s application is required before a generic drug can be marketed for sale in the United States. FDA will meet the performance goals outlined in its Commitment Letter when it completes its review and issues an action letter for a specified percentage of applications within a designated period of time. An action letter is an official statement informing a drug applicant of the agency’s decision about an application review.
FDA can issue four types of action letters:

Refuse to receive letter: FDA issues a refuse to receive letter when it determines that an application is not sufficiently complete to permit a substantive review.

Approval letter: FDA issues an approval letter to an applicant when the agency has concluded its review of a generic drug application and the applicant is authorized to commercially market the drug.

Tentative approval letter: FDA issues a tentative approval letter when the agency has completed its review of an application, but patents or other exclusivities for the original, brand-name product prevent approval. A tentative approval letter does not allow the applicant to market the generic drug product until the related patents and other exclusivities no longer prevent approval.

Complete response letter: FDA issues a complete response letter to an applicant at the completion of a full application review where deficiencies are found—the complete response letter describes any deficiencies that must be corrected in order for an application to be approved.

In order to close the review cycle, FDA must complete its review and issue an approval, tentative approval, or a complete response letter. If the agency has not issued one of these letters, the application is considered a pending application. (See fig. 1 for a summary of the generic drug application review process.) Once an ANDA is filed, an FDA review team—medical doctors, chemists, statisticians, microbiologists, pharmacologists, and other experts—evaluates whether scientific data in the application demonstrates that the drug product meets the statutory and regulatory standards for approval. For example, applicants must, in most cases, demonstrate that the generic product, in relation to an already-approved brand-name drug: contains the same active ingredient(s); is identical in strength, dosage form, and route of administration; is labeled for conditions of use approved for the brand-name drug; is bioequivalent to the already-approved brand-name drug; meets the same requirements for identity, strength, purity, and quality; and is manufactured under the same strict standards of FDA’s good manufacturing practice regulations required for brand-name products. The application must also contain information to show that, with permitted deviations, it has the same labeling as the brand-name product. FDA communicates with applicants when issues arise during its review of an application that may prevent the agency from approving the application. In response, applicants can submit additional information to FDA in the form of amendments to the original application. FDA review time for an original application (i.e., the first review cycle) is calculated as the time elapsed from the date FDA receives the application and associated user fee to the date it issues an action letter. The application review process will also be closed if the application is withdrawn by the applicant. The date on which one of these actions occurs is used to determine whether the review cycle was completed within FDA’s committed time frames. If FDA issues a complete response letter listing the deficiencies in the application that were encountered by FDA’s reviewers, the applicant may choose to submit a revised application to FDA. Resubmissions and their review are covered under the user fee paid with the original submission, but a new review cycle with its own performance goals is started.
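Because the performance goals are framed in months from receipt to action, a first-cycle review time can be checked against a goal with simple date arithmetic. The sketch below uses hypothetical dates, and the month-counting convention is an assumption; the report does not spell out FDA's exact counting rule. The 15-month goal shown is the one applied later in this report to ANDAs received in fiscal year 2015.

```python
from datetime import date

def months_between(start, end):
    """Approximate elapsed whole months between two dates (assumed convention)."""
    months = (end.year - start.year) * 12 + (end.month - start.month)
    if end.day < start.day:
        months -= 1
    return months

# Hypothetical ANDA: received with its user fee, then closed by an action letter
# (approval, tentative approval, or complete response).
received      = date(2015, 3, 2)
action_letter = date(2016, 5, 20)

review_months = months_between(received, action_letter)
met_15_month_goal = review_months <= 15
print(review_months, met_15_month_goal)   # 14 True
```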
If an applicant wants to change any part of its original ANDA after its approval—such as changes to the manufacturing location or process, the type or source of active ingredients, or the labeling—it must submit an application supplement to notify FDA of the change. If the change has a substantial potential to adversely affect factors such as the identity, quality, purity, or potency of the drug, the applicant must obtain FDA approval for the change through submission of a Prior Approval Supplement (PAS). Outside of the application review process, FDA also responds to correspondence submitted to the agency by, or on behalf of, an applicant requesting information on a specific element of generic drug product development. This communication between FDA and applicants, known as controlled correspondence, allows applicants to submit formal questions seeking the agency’s input on issues capable of affecting the review of a product prior to their submission of an ANDA. The generic drug program is supported by both regular appropriations and generic drug user fee appropriations. Generic drug user fees provide funds to FDA to support its efforts to review applications for generic drugs in a timely manner. GDUFA authorizes FDA to collect from the generic drug industry $299 million in user fees annually through September 30, 2017, adjusted for inflation, to supplement the regular appropriations the agency receives to support the generic drug program. GDUFA established several types of fees associated with generic drug products that together generate the annual collections. These include fees for (1) ANDAs in the backlog as of October 1, 2012 (assessed in fiscal year 2013 only); (2) ANDAs and PASs submitted after October 1, 2012; (3) facilities where active pharmaceutical ingredients and finished dosage forms are produced; and (4) Drug Master Files that are associated with generic drug products. Although GDUFA authorizes FDA to collect user fees, the law specifies that the total amount of user fees collected for a fiscal year be provided in appropriations acts. FDA establishes the generic drug user fee collection amounts annually to generate revenue levels as specified in appropriations acts. Once appropriated and collected, user fees are available for obligation by FDA until expended. As a result, any user fees not obligated in the fiscal year in which they were appropriated and collected may be carried over into subsequent fiscal years (referred to as a carryover). In addition, GDUFA specifies that user fees may be spent only for generic drug activities, including the costs of reviewing and approving generic drug applications. Each year as part of the annual appropriations process, FDA develops and submits supporting information for its funding request, which includes user fee and non-user fee funds, in the budget justification that is submitted to the congressional appropriations committees. This information reflects how FDA proposes to meet its mission, goals, and objectives, and it assists Congress in determining how much funding to appropriate for FDA. This information also includes an estimate of how many generic drug applications are likely to be received in the coming year. Authority for the generic drug user fee program established under GDUFA expires at the end of fiscal year 2017. 
During the 5-year period of the current authorization, FDA has been required to submit two annual reports to its oversight committees: (1) a report on the progress made and future plans toward achieving the performance goals identified in its Commitment Letter, and (2) a report on the financial aspects of FDA’s implementation efforts. These reports contain descriptions of relevant oversight activities over the previous year, data on FDA’s performance toward meeting the commitments, and information about how FDA addressed the implementation and use of the user fees over the previous year. In addition, to facilitate Congress’s reauthorization of the program, GDUFA requires FDA to develop recommendations to present to Congress with respect to the goals, and plans for meeting those goals, during fiscal year 2017 through fiscal year 2022. FDA is to develop these recommendations in consultation with various stakeholders, including the generic drug industry. FDA and generic drug industry stakeholders conducted negotiations regarding the program’s reauthorization, which is referred to as GDUFA II, in October 2015 through August 2016. According to FDA, the negotiated objectives of GDUFA II differ from the objectives of the original GDUFA agreement (i.e., GDUFA I). While the primary objective of GDUFA I was to restructure FDA’s generic drug program to improve the speed and predictability of reviews, the primary objective for GDUFA II, as outlined in the proposed Commitment Letter negotiated by FDA and the generic drug industry, is to improve the completeness of drug application submissions and reduce the number of review cycles. Other features include enhanced review pathways for complex drugs, enhanced accountability and reporting, and modifications to the user fee structure. GDUFA supported an 85 percent increase in total FDA obligations for its generic drug program in the first 4 years of implementation. User fees primarily supported generic drug application evaluation and review activities. FDA has accumulated carryover balances from unobligated user fee collections, but it lacks a plan for administering the carryover, which is inconsistent with best practices identified in our prior work on the management of user fees and federal internal control standards. Total obligations for FDA’s generic drug program (from both regular appropriations and generic drug user fee appropriations) increased by about 85 percent in the 4 years following GDUFA’s implementation, from about $267 million in fiscal year 2013 to about $494 million in fiscal year 2016. (See table 1.) FDA’s reliance on generic drug user fees increased throughout this period. Obligations from generic drug user fees grew in both absolute terms and as a share of total program obligations, from about $121 million (45 percent of total obligations) in fiscal year 2013 to about $373 million (76 percent of total obligations) in fiscal year 2016. In contrast, obligations from regular appropriations decreased as a share of total program obligations during this period, and while these regular appropriations increased in absolute terms from fiscal years 2013 to 2014, they declined afterwards. In the first 4 years of the generic drug user fee program, FDA obligated over $1 billion (about 70 percent) of the anticipated 5-year, $1.5 billion in user fee collections. 
CDER, the office with responsibility for drug evaluations and generic drug submission reviews, obligated the largest share of user fees—almost 70 percent—while the three other FDA components with responsibility for the generic drug program obligated smaller shares. Figure 2 shows cumulative user fee obligations by each FDA component and account in fiscal years 2013 through 2016. According to FDA officials, the percentage of user fees allocated to each of these four components did not vary much from year to year. However, FDA officials also reported that each year the agency allocated a percentage of the user fees to centrally managed accounts that are used for rent, utility costs, telecommunications, and other support costs such as information technology (IT) investments that support FDA programs and activities, but which do not necessarily align with the four offices supporting the generic drug program. User fee obligations from the centrally managed account were the second largest component of such obligations each fiscal year since the implementation of GDUFA, trailing only CDER. In addition, approximately 60 percent of cumulative user fee obligations supported non-personnel activities and about 40 percent supported personnel-related activities, such as employee salaries and benefits, in the first 4 years of GDUFA’s implementation (see table 2). In addition to IT investments and capital asset purchases, CDER also obligated funds for non-personnel activities such as consulting services to integrate GDUFA requirements into its new IT system and regulatory science projects with the potential to improve the development of generic drugs. (See Appendix I for more information on GDUFA-supported regulatory science projects.) User fee obligations for personnel-related activities increased from fiscal year 2013 to fiscal year 2016, both in absolute numbers and as a percentage of total user fee obligations. Personnel-related obligations increased sharply in the first 4 years of GDUFA, from about $18 million (14 percent) of all user fees obligated in fiscal year 2013, to about $181 million (49 percent) in fiscal year 2016. According to FDA officials, the increase in personnel-related spending was due, in part, to hiring for the generic drug program consistent with the agency’s GDUFA goals. FDA has accumulated a large unobligated user fee carryover balance, which it uses as an operating reserve. At the beginning of fiscal year 2017, FDA had a carryover of approximately $174 million. During the first two years of GDUFA’s implementation, FDA had obligated about half of the user fees it had collected and thereby amassed a cumulative carryover of about $278 million by the end of fiscal year 2014—an amount nearly as great as the annual, inflation-adjusted user fee collection amount of $299 million. In fiscal years 2015 and 2016, FDA’s program obligations from user fees exceeded the amount collected and FDA used part of the carryover to make up for the gap. (See table 3.) In fiscal year 2015, FDA obligated about $47 million from its carryover in addition to its total generic drug user fee appropriation for that year, and in fiscal year 2016, the agency obligated about $58 million from its carryover in addition to its total generic drug user fee appropriation for that year, yielding a carryover balance of about $174 million. 
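The carryover figures above follow directly from simple balance arithmetic: the unobligated balance at the end of one year, less the amounts drawn from it when obligations exceed that year's collections. The sketch below reconstructs the rounded figures reported in the text; small differences from the reported $174 million reflect rounding in the underlying amounts.

```python
# Reconstruction of the carryover arithmetic described above, using the
# report's rounded figures (dollars in millions).
carryover_end_fy2014 = 278           # unobligated user fees after 2 years
drawn_from_carryover = {2015: 47,    # obligated beyond that year's collections
                        2016: 58}

balance = carryover_end_fy2014
for fiscal_year, draw in drawn_from_carryover.items():
    balance -= draw
    print(fiscal_year, balance)      # ~231 after FY2015, ~173 after FY2016
                                     # (reported as about $174 million)
```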
Despite the large carryover amounts, FDA has not developed a planning document on how it will administer its carryover—one that includes a fully documented analysis of program costs and risks to ensure that its carryover reflects expected operational needs and probable contingencies. Although FDA uses an internal management report to show GDUFA collection amounts, obligations, and end-of-year carryover amounts, the agency was unable to produce evidence describing whether the carryover of $174 million at the beginning of fiscal year 2017 (or carryover amounts in other years) was within a targeted goal, and it does not have targets for future years in general. We have previously found that when unobligated balances are used as carryover, it is important for entities to establish a target range for the carryover to ensure user fee resources are used efficiently and responsibly and that the amounts carried over into the following year are reasonable to meet program needs, risks, and probable contingencies. In addition, the lack of such a planning document is inconsistent with federal internal control standards, which state that management should have a control system in place to communicate necessary quality information externally to help the agency achieve its objectives and address related risks. During GDUFA II negotiations, FDA officials acknowledged the need for more fiscal transparency and accountability and announced plans to build financial systems to facilitate operational and fiscal efficiency and reporting. However, agency officials also stated that their internal management report, which is used to report its user fee cash flows, was sufficient to analyze the program’s needs. By not developing a planning document FDA cannot effectively communicate to external stakeholders the amount of its carryover balance and its plans for using it. These external stakeholders include Congress, which determines the total amount of generic drug user fees FDA is to collect through the annual appropriations process. Likewise, Congress also considers GDUFA reauthorization levels based on recommendations that FDA is to develop in consultation with other external stakeholders, who do not have access to FDA’s internal reports. FDA has made changes to the generic drug program since the enactment of GDUFA in order to establish a better-managed drug application review process. In response to stakeholder input, FDA incorporated additional changes to the application review process. Key changes to FDA’s generic drug review process have included revising its organizational infrastructure, upgrading IT, and formalizing external communication processes. To enable FDA to meet the evolving needs of application review and GDUFA performance goals, the agency undertook steps to reorganize OGD in December 2013. Specifically, under this reorganization OGD was elevated to the status of a “super office” within CDER, providing OGD with centralized administrative support and its own governance structure. Under the reorganization, OGD was given the responsibility to coordinate and manage the application review process; provide safety, surveillance, clinical, and bioequivalence reviews for generic products; and develop policy and regulatory science for generic drugs. Four subordinate offices within OGD were created to fulfill these responsibilities: the Office of Research and Standards, the Office of Bioequivalence, the Office of Regulatory Operations, and the Office of Generic Drug Policy. 
Additionally, in January 2015 FDA established a new Office of Pharmaceutical Quality—another “super office” within CDER—to provide better alignment among all drug quality review functions, including application reviews, inspections, and research. As part of its GDUFA hiring goals, FDA planned to hire approximately 923 new staff by the end of fiscal year 2015 and to this end established incremental goals to hire 231 (25 percent), 462 (50 percent), and 231 (25 percent) of the 923 total new staff during the first 3 fiscal years of the program, respectively. However, FDA surpassed the targeted goal in total, hiring nearly 1,200 new staff over the 3 year period (227, 562, and 387 in each fiscal year, respectively). (See fig. 3.) Additionally, in fiscal year 2016 FDA hired 346 new staff to support the GDUFA program, although there was no associated GDUFA hiring goal for that fiscal year. The majority of these new hires support generic drug program evaluation activities within CDER, with additional hires distributed across other FDA offices. To fulfill its GDUFA commitments, FDA established a new Informatics Platform to improve the efficiency of application reviews and to support the agency’s ability to track its performance. Prior to rolling out this platform in fiscal year 2015, FDA officials said that the agency used many disparate and disconnected databases, systems, and spreadsheets to manage different aspects of the generic drug review program. Officials said that the new platform integrates all these review functions, tracking the work flow from the time FDA receives an application until a decision is made on it. Furthermore, officials told us the platform can provide information on an application as it proceeds through the review process, and agency officials can also track when each assignment related to this process has been completed. Backlog applications are also tracked by the platform. During the initial launch, officials said that the platform was able to track the application review process and its related work flow, including analytics and metrics reporting. Since then, FDA has also incorporated data about facility inspection decisions as well as provided access to historical reviews and actions taken by FDA on generic drug applications submitted prior to fiscal year 2015. FDA officials said that the agency has plans to expand the use of the platform in the future to track additional incoming submissions, such as reports of drug shortages and applications for brand-name drugs. According to FDA officials, the launch of the platform has provided the following benefits to FDA and to stakeholders: (1) faster review times, (2) easier collaboration and communication with industry, (3) improved application review consistency, and (4) more predictable application review times and target completion dates due to having all pertinent information regarding an application in one location that is accessible to all FDA reviewers. As part of FDA’s GDUFA commitments to streamline the review process, the agency took steps to change how it communicates with generic drug applicants. According to FDA officials, prior to GDUFA, deficiency letters were sent to applicants by specific types of reviewers within CDER—such as labeling reviewers or chemistry reviewers—in an uncoordinated fashion, making it difficult for applicants to obtain a comprehensive picture of their application’s status. 
Starting in fiscal year 2013, FDA committed to consolidating all application deficiencies into one letter to the applicant, called a complete response letter. According to FDA officials, this systemic approach to provide a single deficiency letter was one of the biggest changes the agency made to the review process since the enactment of GDUFA. Additionally, in September 2013, OGD issued guidance that established regulatory project managers as the central point of contact for each application throughout its lifecycle. FDA officials said that regulatory project managers are also responsible for overseeing the review of each application across all disciplines. Prior to GDUFA, FDA officials said that no one individual at the agency had responsibility for an application throughout its entire lifecycle. FDA has also issued new and revised guidance documents to educate applicants about changes to the generic drug application review process made in response to GDUFA. According to FDA officials, since fiscal year 2013 FDA has issued 31 new and revised draft or final guidance documents related to the application review process. Additionally, officials said that FDA has also issued almost 600 new or revised product-specific recommendations to assist the applicants with identifying the most appropriate methodology for developing generic drugs and generating evidence needed to support application approval. FDA has made additional refinements to its application review program in response to applicant concerns about program changes. Specifically, to provide more timely communication on individual applications, FDA instituted a mechanism known as real time communication in late 2014. As described above, as one of its initial changes, FDA identified all deficiencies found in its review of an application in a single, complete response letter to the applicant. However, applicants told us that they found that only providing information in the complete response letters resulted in fewer informal communication opportunities with FDA and requested that FDA send information concerning individual application deficiencies on a rolling basis so that they could address deficiencies in real time. FDA acknowledged that the initial changes to how it communicated to applicants made it harder for applicants to assess the status of their applications and to plan the market launch of generic medicines. To remedy applicants’ concerns and to enhance the review process by increasing transparency and decreasing the number of review cycles, FDA implemented real-time communications—specifically “information requests” and “easily correctible deficiencies.” According to FDA officials, information requests are used by application reviewers to informally notify applicants of any preliminary application concerns as well as to seek resolution and clarification on some of the minor issues related to the application. Similarly, application reviewers communicate easily correctible deficiencies to applicants to obtain missing information that should be readily available, to seek clarification of data already submitted, or to request final resolution of technical issues. FDA data show that the numbers of information requests and easily correctible deficiencies substantially increased in the first 9 months following their inception in early 2015 and have decreased somewhat since then, suggesting that both FDA and applicants have found these new forms of communication useful. (See fig. 4.) 
FDA also revised its policy outlining the roles and responsibilities of various FDA staff in communicating with applicants during the application review process in August 2015. FDA made this revision to address stakeholder concerns that establishing regulatory project managers as the primary point of contact for all communication with applicants limited the opportunities for informal communication with FDA staff directly responsible for reviewing applications. The revised policy expanded and set forth responsibilities and procedures for communications between FDA staff and applicants concerning the review status of applications. Specifically, OGD’s regulatory project managers would remain responsible for communicating the review status of applications they manage, including transmitting complete response letters, while discipline project managers and Office of Pharmaceutical Quality regulatory business process managers—both of whom are more directly involved in an application’s review—would be responsible for issuing all information requests and easily correctible deficiencies. According to FDA officials, these changes were made as part of a broader effort to bring communications between the agency and applicants closer to “real-time.” Lastly, FDA has communicated target action dates to applicants since 2015 at the request of industry stakeholders. Target action dates are the agency’s internal deadlines for taking action on generic drug applications pending with the agency on October 1, 2012, and those submitted in fiscal years 2013 and 2014 (years for which there were no GDUFA performance goals established). According to FDA, although GDUFA did not require the agency to establish and communicate target action dates to applicants, these dates help applicants plan product launches, which promotes timely access to generic drugs. FDA’s review times for generic drug applications have decreased since the implementation of GDUFA, with FDA surpassing multiple fiscal year 2015 GDUFA performance goals as described in table 4. With respect to ANDA review times, the average time for FDA to complete the first review cycle decreased from 26 months for ANDAs submitted in fiscal year 2013 to about 14 months for those submitted in fiscal year 2015. Additionally, the dispersion in review times has decreased. (See fig. 5.) However, as of December 31, 2016, 929 ANDAs (34 percent) submitted since the start of the generic drug user fee program in fiscal year 2013 were still pending review. As these applications are reviewed, the average review time and the dispersion of review times for each fiscal year will increase since all of the applications that remained to be acted on are at least 15 months old. As of December 31, 2016, FDA had also acted on 89 percent of all ANDAs submitted in fiscal year 2015 within 15 months of receipt, exceeding its GDUFA goal of acting on 60 percent of ANDAs received in fiscal year 2015 within 15 months. For PASs, the average time for FDA to complete the first review cycle also declined from 12 months in fiscal year 2013 to 4.5 months in fiscal year 2015. Additionally, the dispersion in review times decreased, as shown in figure 6. FDA acted within 6 months on 95 percent of the PASs it received in fiscal year 2015 not requiring a facility inspection, exceeding its GDUFA goal of acting on 60 percent of PASs received within 6 months of receipt. 
For PASs received in fiscal year 2015 that required a facility inspection, FDA acted on 92 percent within 10 months, again exceeding its GDUFA goal of acting on 60 percent within 10 months of receipt. FDA has also met performance goals related to controlled correspondences and its review of backlogged applications. As of December 31, 2016, FDA acted on 97 percent of the 1,519 controlled correspondences it received in fiscal year 2015 within 4 months, exceeding its GDUFA goal of acting on 70 percent within 4 months of receipt. FDA was unable to provide more detailed information on the review times for controlled correspondences submitted in the first 2 years of the program; however, according to FDA officials, 2,027 of the 2,040 correspondences submitted in that period had been reviewed as of August 2016. In addition, as of December 31, 2016, FDA had acted on 92 percent of the 4,743 applications in the backlog pending review as of October 1, 2012, exceeding its GDUFA goal of acting on 90 percent of such applications before the end of fiscal year 2017. Fifty-eight percent of these applications were approved; approximately 20 percent were withdrawn by the applicant; and the applicants for the remaining 12 percent received a complete response letter. As with any appropriations, the user fee funding provided to FDA brings with it a great responsibility to manage funds in a way that demonstrates prudent stewardship of resources. FDA has used its user fee funding to increase hiring in OGD and to undertake numerous activities to improve and speed up the review of generic drug applications. However, at the end of each fiscal year FDA has had large GDUFA carryover balances, and FDA has not fully documented the underlying assumptions for the size of the carryover, including anticipated obligations. Such a documented plan could aid Congress in determining the appropriate amount of user fees to be collected by the agency during the annual appropriations process and when considering a reauthorization of the user fee program. Making this information publicly available would also help to ensure that FDA’s recommendations to Congress are fully informed by the views of external stakeholders with whom FDA has an obligation to consult. To ensure efficient use of generic drug user fees, facilitate oversight and transparency, and plan for risks, we recommend that the Commissioner of FDA develop a plan for administering user fee carryover that includes analyses of program costs and risks and reflects actual operational needs and contingencies. We provided a draft of this report to HHS for comment. In its written comments, which are reproduced in appendix II, HHS concurred with our recommendation. HHS agreed that it should incorporate an analysis of program risks and contingencies into its existing 5-year financial planning process and that it will review appropriate actions. HHS also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of HHS, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact John E. Dicken at (202) 512-7114 or [email protected].
Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix III. Regulatory science projects support the development of generic drugs in part by helping to address gaps the agency faces between rapid changes in science and technology and the agency’s capacity to regulate those technologies. We previously reported that the Food and Drug Administration (FDA) traditionally funds regulatory science projects with resources from the agency’s regular appropriations, but projects funded within the Center for Drug Evaluation and Research (CDER) have also been supplemented by funds collected from user fee acts, such as the Prescription Drug User Fee Act and the Generic Drug User Fee Amendments of 2012 (GDUFA). GDUFA funds accounted for approximately $20 million in annual obligations for regulatory science projects from fiscal years 2013 through 2016. The Office of Generic Drugs (OGD), located within CDER, established the GDUFA Regulatory Science Research Program to support projects that could potentially enhance the development of generic drugs. In fiscal year 2013, OGD created a list of GDUFA Regulatory Science Priorities that includes five broad priority areas (see table 5). The list is based in part on discussions with stakeholders during annual public meetings on GDUFA and in part on stakeholder comments to FDA’s GDUFA regulatory science funding announcements. However, generic drug industry stakeholders have raised concerns about the need for more meaningful scientific dialogue about FDA’s requirements for the development of generic drugs, and for more clarity on how the agency develops its regulatory science priorities list. FDA officials we interviewed were cognizant of stakeholders’ concerns and noted that they have reconsidered the five broad categories each year. However, officials told us they have not made changes to the priority areas since they were established in fiscal year 2013 and do not envision annual changes to the broad categories, though the sub-priority areas in each category have been revised from year to year. The officials acknowledged that while some information about their process for judging the merit of regulatory science proposals is public, details about how proposals are scored are not public in order to promote a process whereby applications submitted are evaluated free of bias from stakeholders. The officials explained that the priorities list is created internally at FDA in part to protect the privacy of applicants that submitted proposals, to avoid potential conflicts of interest from stakeholders who may want to steer research away from competitors, and to incorporate internal FDA capabilities to conduct research at lower costs. Although the officials said that the decision about which projects to fund is an inherently governmental function and should be made internally to support public health, they noted plans to improve communications with industry about the priorities list and to more clearly announce the sub-priority areas available for research on the OGD webpage. In the first 4 years following GDUFA’s implementation, OGD obligated a cumulative amount of about $77 million from generic drug user fee collections for 103 regulatory science projects.
Data provided by FDA showed that 15 of these projects supported research into post-market evaluation of generic drugs, 30 projects supported the equivalence of complex drug products, 20 projects supported the equivalence of locally acting products, 20 supported therapeutic equivalence evaluation and standards, and 18 supported cross-cutting computational and analytical tools. Obligations from user fees for regulatory science projects remained relatively stable each year, from a low of $16 million in fiscal year 2016 to a high of $24 million in fiscal year 2015. (See table 6). According to FDA officials, the agency is not required by law or otherwise committed to spend a certain amount on regulatory science projects, but agency officials anticipate funding projects at similar levels in future years. John E. Dicken, (202) 512-7114 or [email protected]. In addition to the contact named above, individuals making key contributions to this report include Robert Copeland (Assistant Director); Carolina Morgan (Analyst-in-Charge); Enyinnaya David Aja; Nick Bartine; Taylor Dunn; Sandra George; Laurie Pachter; and Said Sariolghalam. Muriel Brown and Laurel Plume also made contributions to this report.
Nearly 90 percent of prescription drugs dispensed in the United States are generic drugs. According to FDA, an increasing volume of generic drug applications over the past decades stressed its ability to review applications efficiently. GDUFA granted FDA the authority to collect user fees from the generic drug industry to supplement resources for the generic drug program. In return, FDA committed to meeting certain performance goals related to the timely review of generic drug applications and to implementing review process improvements. GAO was asked to examine FDA's implementation of GDUFA. In this report, GAO (1) examines how user fees supported the generic drug program, (2) describes FDA's improvements to the generic drug application review process, and (3) analyzes changes in generic drug application review times. GAO reviewed laws and regulations; FDA policy, guidance, the GDUFA Commitment Letter, and GDUFA financial reports from fiscal years 2013 through 2016; FDA data on application review times from fiscal years 2012 through 2015; and interviewed officials from FDA, generic drug manufacturers, and trade associations. Since the enactment of the Generic Drug User Fee Amendments of 2012 (GDUFA), the Food and Drug Administration's (FDA) reliance on user fees has increased from $121 million (45 percent of total program obligations) in fiscal year 2013 to $373 million (76 percent of total program obligations) in fiscal year 2016. FDA carried over $174 million in unobligated user fees at the end of the fourth year of the GDUFA 5-year period. GAO found that although FDA uses an internal management report to track user fee cash flows, it lacks a plan for administering its carryover—one that includes a fully-documented analysis of program costs and risks to ensure that program operations can be sustained in case of unexpected changes in collections or costs. GAO previously found that it is important for entities with carryover to establish appropriate target amounts based on program needs, risks, and contingencies. FDA's approach is inconsistent with best practices for managing federal user fees. Without a carryover plan, FDA lacks reasonable assurance that the size of its carryover is appropriate to ensure the efficient and responsible use of resources. FDA took steps to improve the timeliness and predictability of generic drug application reviews. FDA restructured the generic drug program by building a more robust organizational infrastructure, upgrading information technology systems, and implementing communication reforms. As FDA implemented these changes, it made additional refinements in response to applicants' feedback. Generic drug application review times have improved under GDUFA. FDA's review time for a new generic drug application (known as an Abbreviated New Drug Application (ANDA)) decreased from 28 months for applications submitted in fiscal year 2012 to about 14 months for those submitted in fiscal year 2015. FDA also surpassed multiple GDUFA performance goals. For example, FDA committed to reviewing 60 percent of ANDAs submitted in fiscal year 2015 within 15 months of their receipt. GAO found that, as of December 31, 2016, FDA had taken action on 89 percent of the fiscal year 2015 ANDAs for which it committed to conducting a substantive review, thereby surpassing this goal.
GAO recommends that FDA develop a plan for administering user fee carryover that includes analyses of program costs and risks and reflects operational needs and contingencies. HHS agreed with GAO's recommendation.
Freight rail is an important component of our nation’s economy. Approximately 42 percent of all inter-city freight in the United States, measured in ton miles, moves on rail lines. Freight rail is particularly important to producers and users of certain commodities. For example, about 70 percent of automobiles manufactured domestically, about 70 percent of coal delivered to power plants, and about 32 percent of grain moves on freight rail. Beginning in 1887, the Interstate Commerce Commission (ICC) regulated almost all of the rates that railroads charged shippers. Congress passed the Railroad Revitalization and Regulatory Reform Act in 1976 and the Staggers Rail Act in 1980, and these acts greatly increased the reliance on competition in the railroad industry. Specifically, these acts allowed railroads and shippers to enter into confidential contracts which set rates and prohibited the ICC from regulating rates where railroads had effective competition or if the rates had been negotiated between the railroad and the shipper. The ICC Termination Act of 1995 abolished the ICC and transferred its regulatory functions to STB. Taken together, these acts anchor the federal government’s role in the freight rail industry and have established numerous goals for regulating the industry, including the following:

to allow, to the maximum extent possible, competition and demand for services to establish reasonable rates for transportation by rail;

to minimize the need for federal regulatory control over the rail transportation system and to require fair and expeditious regulatory decisions when regulation is required;

to promote a safe and efficient rail transportation system by allowing rail carriers to earn adequate revenues, as determined by STB;

to ensure effective competition among rail carriers and with other modes to meet the needs of the public;

to maintain reasonable rates where there is an absence of effective competition and where rail rates provide revenues which exceed the amount necessary to maintain the rail system and to attract capital;

to prohibit predatory pricing and practices and to avoid undue concentrations of market power; and

to provide for the expeditious handling and resolution of all proceedings.

Two important components of the current regulatory structure are the concepts of revenue adequacy and demand-based differential pricing. Congress established the concept of revenue adequacy as an indicator of the financial health of the industry. STB determines the revenue adequacy of a railroad by comparing the railroad’s return on investment with the industrywide cost of capital. If a railroad’s return on investment is greater than the industrywide cost of capital, STB determines that railroad to be revenue adequate. Historically, the ICC and STB have rarely found railroads to be revenue adequate, which many observers relate to characteristics of the industry’s cost structure. Railroads incur large fixed costs to build and operate networks that jointly serve many different shippers. While some fixed costs can be attributed to serving particular shippers, and some costs vary with particular movements, other costs are not attributable to particular shippers or movements. Nonetheless, a railroad must recover these costs if the railroad is to continue to provide service over the long run, and, to the extent that railroads have not been revenue adequate, this may indicate that they are not fully recovering these costs.
Consequently, the Staggers Rail Act recognized the need for railroads to use demand-based differential pricing in the deregulated environment. Demand-based differential pricing in theory permits a railroad to recover its joint and common costs across its entire traffic base by setting higher rates for traffic with fewer transportation alternatives than for traffic with more alternatives. This means that a railroad might incur similar incremental costs in providing service to two different shippers that ship similar tonnages in similar car types traveling over similar distances, but that the railroad may charge quite different rates. In this way, the railroad recovers a greater portion of its joint and common costs from the shipper that is more dependent on railroad transportation, but, to the extent that the railroad is able to offer lower rates to the shipper with more transportation alternatives, that shipper still makes some contribution toward those costs. The Staggers Rail Act further required that the railroads’ need to differentially price their services be balanced with the rights of shippers to be free from, and to seek redress from, unreasonable rates. Railroads incur variable costs—that is, the costs of moving particular shipments—in providing service. The Staggers Rail Act stated that any rate that was found to be above 180 percent of a railroad’s variable cost for a particular shipment was potentially an unreasonable rate and granted the ICC, and later the STB, the authority to establish a rate relief process. In response, the ICC established two criteria for allowing a rail rate case. First, as stated in law, the rate had to result in a revenue-to-variable-cost (R/VC) ratio above 180 percent. Second, the shipper had to demonstrate that it had no other reasonable transportation alternative. Such a shipper is referred to as a “captive shipper.” The changes that have occurred in the railroad industry since the enactment of the Staggers Rail Act are widely viewed as positive. The railroad industry’s financial health improved substantially as it cut costs, boosted productivity, and “right-sized” its networks. Rates generally declined between 1985 and 2000 but increased slightly from 2001 through 2004. Concerns about competition and captivity in the industry remain because traffic is concentrated in fewer railroads and, although rates have declined for most shippers, some shippers are paying significantly higher rates than others. While it is difficult to precisely determine the number of shippers who are “captive” to one railroad, our preliminary analysis indicates that although the extent of potential captivity may be dropping, the share of potentially captive shippers who are paying the highest rates—those substantially above the threshold for rate relief—has increased. There is widespread consensus that the freight rail industry has benefited from the Staggers Rail Act. Specifically, various measures indicate an increasingly strong freight railroad industry. Freight railroads’ improved financial health is illustrated by increases in productivity, volume of shipments, and stock prices. Freight railroads have also cut costs by streamlining their workforce and “right-sizing” their rail network, through which the railroads have reduced track, equipment, and facilities to more closely match demand. These measures are shown in figure 1.
Freight railroads have also expanded their business into new markets, such as the intermodal market, and implemented new technologies, including larger cars, and are currently developing new scheduling and train control systems. Some observers believe that the competition faced by railroads from other modes of transportation has created incentives for innovative practices, and that the ability to enter into confidential contracts with shippers has permitted railroads to make specific investments and to develop service arrangements tailored to the requirements of different shippers. Rail rates across the industry have generally declined since enactment of the Staggers Rail Act. Because changes in traffic patterns over time (for example, hauls over longer distances) can result in increases in lower priced traffic and a decrease in average revenue per ton mile, average revenue statistics can present misleading rate trends. Therefore, we developed a rail rate index to examine trends in rail rates over the 1985-2004 period. These indexes account for changes in traffic patterns over time, which could affect revenue statistics, but do not account for inflation. As a result, we have also included the price index for the gross domestic product. Although there has been a slight upturn in rates from 2001 through 2004, the industry continues to experience rates that are generally lower than they were in 1985. During this time some costs have also been passed on to shippers, such as by having shippers provide equipment. There was a steep decline in rates from 1985 to 1987, when rates dropped by 10 percent. Rates continued to decline, although not as steeply, through 1998. Rates increased in 1999, then dropped again in 2000. In 2001 and 2002 rates rose again. Rates were nearly flat in 2003 and 2004, finishing approximately 3 percent above rates in 2000, but 20 percent below 1985 rates. This is shown in figure 2. These data include rates through 2004. According to freight railroad officials, shippers, and financial analysts, since 2004 rates have continued to increase as the demand for freight rail service has increased, rail capacity has become more limited, and as a result, freight railroad companies have gained increased pricing power. A number of factors may have contributed to recent rate increases. Ongoing industry and economic changes have influenced how railroads have set their rates. Since the Staggers Rail Act was enacted, the railroad industry and the economic environment in which it operates have changed considerably. Not only has the rail industry continued to consolidate, potentially increasing the market power of the largest railroads, but after years of reducing the number of its employees and shedding track capacity, the industry is increasingly operating in a capacity-constrained environment where demand for its services exceeds its capacity. In addition, the industry has more recently increased employment and invested in increased capacity in key traffic corridors. Additionally, changes in broader domestic and world economic conditions have led to changes in the mix and profitability of traffic carried by railroads. Concerns about competition and captivity in the railroad industry remain because traffic is concentrated in fewer railroads and, even though rates have declined for most shippers since the enactment of the Staggers Rail Act, some shippers are paying significantly higher rates than other shippers—a reflection of differential pricing.
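The report does not detail how the rail rate index was constructed, so the sketch below shows only one standard way such an index can hold the traffic mix constant: a fixed-weight (Laspeyres-style) index that values each year's rates at base-year tonnage. The commodities, tonnages, and rates are made-up numbers used purely to illustrate the mechanics; like the index described above, it does not adjust for inflation.

```python
# Minimal sketch of a fixed-weight rate index: holding the traffic mix at
# base-year tonnage keeps shifts toward longer or lower-priced hauls from
# masking changes in the rates themselves. All figures are hypothetical.
base_tons = {"coal": 500, "grain": 120, "intermodal": 200}   # millions of tons

rate_per_ton = {   # hypothetical revenue per ton by year and commodity
    1985: {"coal": 12.0, "grain": 18.0, "intermodal": 25.0},
    2004: {"coal": 8.5,  "grain": 19.5, "intermodal": 22.0},
}

def rate_index(year, base_year=1985):
    cost      = sum(base_tons[c] * rate_per_ton[year][c]      for c in base_tons)
    base_cost = sum(base_tons[c] * rate_per_ton[base_year][c] for c in base_tons)
    return 100 * cost / base_cost

print(rate_index(1985), round(rate_index(2004), 1))   # 100.0 vs. the 2004 level
```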
There is significant disagreement on the state of competition in the rail industry. In 1976, there were 63 Class I railroads operating in the United States compared with 7 Class I railroads in 2004. As figure 3 shows, 4 of these Class I railroads accounted for over 89 percent of the industry's revenues in 2004. While some experts view this concentration as a sign that the industry has become less competitive over time, others believe that the railroad mergers and acquisitions actually increased competition in the rail industry because STB placed conditions on the mergers intended to maintain competition. These experts also point to the hundreds of short line railroads that have come into being since the enactment of the Staggers Rail Act, as well as increased competitive options for shippers from other modes, such as trucks and barges. According to our preliminary analysis, some commodities and shippers are paying significantly higher rates than other shippers. This can be seen in the rates charged for particular commodities and on specific routes. Figure 4 compares rates for coal and grain from 1985 through 2004 using our rail rate indexes. As figure 4 shows, all rate changes were below the rate of inflation and thus all rates declined in real terms. However, during that period, coal rates dropped even more sharply than industrywide rates, declining 35 percent. Grain rates initially declined from 1985 to 1987, but then diverged from industry trends and increased, resulting in a net 9 percent nominal increase by 2004. It is difficult to precisely determine the number of shippers who are "captive" to one railroad because proxy measures that provide the best indication can overstate or understate captivity. One way of determining potential captivity in our preliminary analysis was to identify which Bureau of Economic Analysis (BEA) economic areas were served by only one Class I railroad. In 2004, 27 of the 177 BEA economic areas were served by only one Class I railroad. As shown in figure 5, these areas include parts of Montana, North Dakota, New Mexico, Maine, and other states. We also examined specific origin and destination pairs and found that in 2004, origin and destination routes with access to only one Class I railroad carried 12 percent of industry revenue. This represents a decline from 1994, when 22 percent of industry revenue moved on routes served by one Class I railroad. This decline suggests that more railroad traffic is traveling on routes with access to more than one Class I railroad. While examining BEA areas provides a proxy measure for captivity, a number of factors may understate or overstate whether shippers are actually captive. The first two of these factors may work to understate the extent of captivity among shippers. First, routes originating within economic areas served by multiple Class I railroads may still be captive if only one Class I railroad serves their destination, meaning the shipper can use only that one railroad for that particular route. Second, some BEA areas are quite large, so a shipper within the area may have access to only one railroad even though there are two or more railroads within the broader area. Two additional limitations may work to overstate the number of locations captive to one railroad. First, this analysis accounts for Class I railroads only and does not account for competitive rail options that might be offered by Class II or III railroads such as the Guilford Rail System, which operates in northern New England.
Second, this analysis considers only competition among rail carriers and does not examine competition between rail and other transportation modes such as trucks and barges. To determine potential captivity during our preliminary analysis, we applied another proxy measure—the definition of potentially captive traffic used in the Staggers Rail Act. The act defines potentially captive traffic as any traffic with a revenue-to-variable-cost (R/VC) ratio above 180 percent. As a percentage of all rail traffic, the amount of potentially captive traffic traveling over 180 percent R/VC and the revenue generated from that traffic have both declined since 1985. However, our preliminary analysis indicates that the share of potentially captive shippers who are paying the highest rates—those substantially above the threshold for rate relief—has increased. While total tons have increased significantly (from about 1.37 billion in 1985 to about 2.14 billion in 2004), figure 6 shows that tons traveling between 180 and 300 percent R/VC have remained fairly constant—increasing from about 497 million tons in 1985 to about 527 million tons in 2004. However, tons traveling above 300 percent R/VC have more than doubled—from about 53 million tons in 1985 to over 130 million tons in 2004. This pattern can also be seen in the share of traffic traveling above and below 180 percent R/VC between 1985 and 2004. As figure 7 illustrates, the percent of all traffic traveling between 180 and 300 percent R/VC decreased from 36 percent in 1985 to 25 percent in 2004. In contrast, the percent of all traffic traveling above 300 percent R/VC increased from 4 percent in 1985 to 6 percent in 2004. Our preliminary analysis indicates that this overall change in traffic traveling over 300 percent R/VC can be seen in certain states and commodities. For example, 39 percent of grain originating in Montana and 20 percent of coal in West Virginia traveled over 300 percent R/VC in 2004. As shown in figure 8, this represents a significant increase from 1985, when 14 percent of grain in Montana and 4 percent of coal in West Virginia traveled over 300 percent R/VC. As with BEA areas, examining R/VC levels as a proxy measure for captivity can also understate or overstate captivity. For example, it is possible for the R/VC ratio to increase while the rate paid by a shipper is declining. Assume that in Year 1, a shipper is paying a rate of $20 and the railroad's variable cost is $12. The R/VC ratio—the rate divided by the variable cost—would be 167 percent. If in Year 2 the variable cost declines by $2, from $12 to $10, and the railroad passes this cost savings directly on to the shipper in the form of a reduced rate, the shipper would pay $18 instead of $20. However, as shown in table 1, because both revenue and variable cost decline, the R/VC ratio increases to 180 percent. Although proxy measures have inherent limitations, they can serve as useful indicators of trends in railroad pricing, how the railroads may be exercising their market power to set rates, and where competition and captivity concerns remain. Whether these trends reflect an exercise or possible abuse of market power or are simply a reflection of rational economic practices by the railroads in an environment of excess demand remains uncertain.
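The R/VC arithmetic in the example above can be restated in a few lines. The sketch below simply reproduces the figures from the example (a $20 rate against $12 of variable cost in Year 1 and an $18 rate against $10 of variable cost in Year 2); it is illustrative only.

# Revenue-to-variable-cost (R/VC) ratio for the two-year example in the text.
# A ratio above 180 percent marks traffic as potentially captive under the
# Staggers Rail Act threshold, even though the rate itself fell from Year 1 to Year 2.

def rvc_ratio(rate, variable_cost):
    """Return the R/VC ratio as a percentage."""
    return 100 * rate / variable_cost

year1 = rvc_ratio(rate=20, variable_cost=12)   # 166.7 percent -- below the threshold
year2 = rvc_ratio(rate=18, variable_cost=10)   # 180.0 percent -- at the threshold

print(f"Year 1: {year1:.1f}% R/VC at a $20 rate")
print(f"Year 2: {year2:.1f}% R/VC at an $18 rate")
# The shipper pays $2 less in Year 2, yet the R/VC measure of potential captivity
# rises, which is why the ratio can overstate (or understate) actual captivity.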
A number of alternative approaches have been suggested by shipper groups, economists, and other experts in the rail industry to address remaining concerns about competition and captivity; however, any alternative approach should be carefully considered. Two areas—an assessment of competition and addressing problems with the rate relief process—are particularly integral to further improvement. Any alternative approaches to address competition and captivity should be carefully considered to ensure that the approach achieves the important balance set out in the Staggers Rail Act of allowing the railroads to earn adequate revenues and invest in their infrastructure while assuring protection for captive shippers from unreasonable rates. Our preliminary work shows that there has been little assessment by the federal government of where areas of inadequate competition might exist or how changes in industry concentration might be resulting in the inappropriate exercise of market power. Although the STB has broad legislative authority to investigate industry practices, it has generally limited its reviews of competition to merger cases. STB is responsible for reviewing railroad merger proposals, approving those that it finds consistent with the public interest, and ensuring that any potential merger-related harm to competition is mitigated. STB's mitigation efforts have focused on preserving competition, such as granting the authority for one railroad to operate over the tracks of another railroad (called trackage rights). As we reported in 2001, STB found little competition-related harm during its oversight of recent mergers. However, rail mergers can have different effects on rail rates. For example, using an econometric approach that isolated the specific effects of the Union Pacific/Southern Pacific merger on rail rates for certain commodities in two geographic areas—Reno, Nevada, and Salt Lake City, Utah—we found that the merger reduced rates for four of six commodities, placed upward pressure on rates for one commodity, and left rates relatively unchanged for one commodity. In analyzing rail rates as part of merger oversight, STB examines the merger oversight record, which generally focuses on the overall direction and magnitude of rate changes, rather than specific commodities or geographic areas. According to STB officials, in general, the records have not permitted STB to reliably and precisely isolate the effects of mergers on rates from the effects of other factors (such as the price of diesel fuel). STB is not unaware of concerns about competition. In addition to reviewing competition in terms of mergers, STB has also instituted proceedings to review rail access and competition issues. For example, in April 1998, STB commenced a proceeding at the request of Congress to review access and competition issues in the rail industry. In an April 1998 decision on these issues, STB agreed to consider revising its competitive access rules. However, in its December 1998 report to Congress, STB declined to take further action on this issue because it had adopted new rules allowing shippers temporary access to alternative routing options during periods of poor service. In addition, STB observed that the competitive access issue raises basic policy questions that are more appropriately resolved by Congress.
Furthermore, in a December 1998 ruling on a Houston/Gulf Coast oversight proceeding, STB recognized the possibility that opening up access could fundamentally change the nation's rail system, possibly benefiting some shippers with high-volume traffic while reducing investment elsewhere in the system and ultimately reducing or eliminating service for small, lower-volume shippers in rural areas. Finally, STB adopted new regulations for rail mergers in 2001. These new regulations require the applicant to demonstrate that the merger would enhance, not just preserve, competition. Given the disagreement about the adequacy of competition in the industry and the fact that proxy measures can understate or overstate captivity, an assessment of competition and how changes in industry concentration might be resulting in the inappropriate exercise of market power would allow decisionmakers to identify areas where competition is lacking and to assess the need for and merits of targeted approaches to address it. The targeted approaches most frequently proposed by shipper groups and others include reciprocal switching arrangements, which allow one railroad to switch railcars of another railroad, and terminal access agreements, which permit one railroad to use another's terminals. We will discuss the potential costs and benefits of these approaches further in our final report. Use of these approaches should be carefully considered to ensure that the approach achieves the important goals set out in the Staggers Rail Act. For example, if these approaches expand competitive options and decrease the number of captive shippers, they could decrease the need for federal regulation and for a rate relief process. On the other hand, these approaches could also reduce rail rates, and thus railroad revenues, affecting the railroads' ability to earn adequate revenues and invest in their infrastructure. The principal vehicle through which shippers seek relief from unreasonable rates is the rate relief process. The Staggers Rail Act recognized that some shippers may not have access to competitive alternatives and may therefore be subject to unreasonably high rates. For these shippers, the act gave ICC, and later STB, the authority to establish a rate relief process so that shippers could obtain relief from unreasonably high rates, as well as more general powers to monitor the railroad industry. Under the standard rate relief process, the Board requires a shipper to demonstrate how much an optimally efficient railroad would need to charge that shipper. Therefore, the shipper must construct a hypothetical, perfectly efficient railroad that would replace its current carrier. There is widespread agreement that the rate relief process is inaccessible to most shippers and does not provide expeditious handling and resolution of complaints. The process is expensive, time-consuming, and complex, and, as a result, several shippers' organizations told us that it is unlikely they would ever file a rate case. Since 2001, only 10 cases have been filed, and these cases took between 2.6 and 3.6 years—an average of 3.3 years per case—to complete. In addition, while STB does not keep records of the cost of a rate case, shippers we interviewed agreed that the process can cost approximately $3 million per litigant. As a result, shippers told us that, for them to bring a case, the case would need to involve several million dollars so that it was worthwhile to spend $3 million on a case that they could possibly lose.
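The cost calculus shippers described can be illustrated with a brief, hypothetical sketch. The roughly $3 million litigation cost reflects the figure shippers cited to us; the probability of winning and the value of rate relief below are assumptions made solely for illustration.

# Hypothetical expected-value calculation for deciding whether to file a rate case.
# The $3 million litigation cost reflects the approximate figure cited by shippers;
# the win probability and relief amounts are assumptions for illustration only.

LITIGATION_COST = 3_000_000  # approximate cost per litigant cited by shippers

def expected_net_benefit(relief_if_win, probability_of_win):
    """Expected payoff of filing a case, net of litigation costs."""
    return probability_of_win * relief_if_win - LITIGATION_COST

# A dispute worth $4 million with an even chance of winning loses money in expectation.
print(expected_net_benefit(relief_if_win=4_000_000, probability_of_win=0.5))   # -1,000,000

# The stakes must reach several million dollars before filing a case breaks even.
print(expected_net_benefit(relief_if_win=10_000_000, probability_of_win=0.5))  #  2,000,000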
The process is complex because the legal procedures require that (1) the shipper construct a model of a hypothetical, perfectly efficient railroad and (2) the railroad and shipper have opportunities to present their facts and viewpoints as well as to present new evidence. Congress and STB have recognized the problems with the rate relief process and taken actions to address them. First, Congress required STB to develop simplified guidelines. STB developed guidelines to streamline the process when the value of traffic at stake did not make it feasible to incur the costs of conducting a full rate case. Under these simplified guidelines, shippers do not have to construct a hypothetical railroad and can instead rely on industry averages to try to prove that their rate is unreasonable. Although these simplified guidelines have been in place since 1997, the process set out by the guidelines has not been used. Second, STB worked to improve the standard rate relief process. Specifically, STB now holds oral arguments to begin cases and, according to STB officials, these oral arguments help to clarify disagreements without adding any time to the process. In addition, STB has added staff to process cases. According to shippers and railroad officials we spoke with, the simplified guidelines are confusing regarding who is eligible to use the process and how it would work. In addition, several shippers' organizations told us that shippers are concerned about using the simplified guidelines because, since the guidelines have never been used, shippers believe that their first use will be challenged in court and result in lengthy litigation. STB officials told us that they, not the shippers, would be responsible for defending the guidelines in court. STB officials also said that, if a shipper won a small rate case, STB could order reparations to the shipper before the case was appealed to the courts. During our preliminary work, we identified a number of different approaches that have been suggested by shipper organizations and others that could make the rate relief process less expensive and more expeditious, and therefore potentially more accessible. Each of the proposed approaches has both advantages and drawbacks. These approaches include the following:
Increased use of arbitration: Under arbitration, the two parties would present their case before an arbitrator, who would then determine the rate. This approach would replace the shipper's requirement to create a hypothetical railroad. Proponents of this system argue that it provides both the railroads and the shippers with an incentive to suggest a reasonable rate (because otherwise the arbitrator could select the other's offer) and that the threat of arbitration can induce the parties to resolve their own problems and limit the need for federal regulation. However, critics of this approach suggest that arbitration decisions may not be based on economic principles such as the revenue and cost structure of the railroad and that arbitrators may not be knowledgeable about the railroad industry.
Increased use of simplified guidelines: The simplified guidelines use standard industry average figures for revenue data instead of requiring the shipper to create a hypothetical railroad. This approach would reduce the time and complexity of the process; however, it may not provide as accurate and precise a measure as the current process.
However, as noted above, the use of STB's simplified guidelines has not been fully reviewed by the courts, and many railroad industry experts believe the first use of the guidelines will result in lengthy litigation.
Increased use of alternative cost approaches: For example, STB could use the long-run incremental cost approach to evaluate and decide rate cases. This process, which is used for regulating pipelines, bases rates on the actual incremental cost of moving a particular shipment, plus a reasonable rate of return. This approach allows for a quick, standard method for setting prices, but does not take into account the need for differential pricing or the railroad's need to charge higher rates in order to become revenue adequate. Structuring rate regulation around actual costs can also create potential disincentives for the regulated entity to control its costs.
Again, these alternative approaches should be carefully considered to ensure that the approach achieves the important balance set out in the Staggers Rail Act. A significant factor in evaluating each of these alternatives is the revenue adequacy of the railroads. The Staggers Rail Act established revenue adequacy as a goal for the industry and allowed the railroads to use differential pricing to increase their revenues. The act further gave the ICC (and later STB) the authority to determine the revenue adequacy of the railroads each year. While the specific method for determining revenue adequacy has been controversial, the overall trend in revenue adequacy may be more important. In its last report in 2004, STB determined that one railroad is revenue adequate and that others are approaching revenue adequacy. While it is too early to determine that the industry as a whole is achieving revenue adequacy, this is a significant shift in the rail industry because for decades after enactment of the Staggers Rail Act, the railroads were all considered revenue inadequate. Different approaches to addressing remaining competition and captivity concerns will likely recognize to some degree the railroads' continued need to more consistently recover their cost of capital and become revenue adequate. The railroads need additional revenue for infrastructure investment to keep pace with increased demand. On the other hand, different approaches also raise the question of the degree to which the railroads should continue to rely on obtaining significantly higher prices from those with greater reliance on rail transportation in a revenue adequate environment where total railroad revenues are increasingly sufficient to meet the railroads' investment needs. The demand for freight and freight rail is forecast to increase significantly in the future, although many factors can affect the accuracy of these forecasts. Freight markets are volatile and unpredictable, and thus freight demand forecasts may prove to be off the mark. For example, much freight demand is determined by trade that originates outside the United States. Many of the data used to develop these freight demand forecasts are proprietary, and as a result, we could not assess the validity or reasonableness of the assumptions used to develop the predictions. However, forecasts of freight and freight rail demand are useful as one possible scenario of the future. As the Congressional Budget Office (CBO) observed in a January 2006 report, forecasts of future demand can be viewed as more illustrative than quantitatively accurate.
Major freight railroads have reported that they expect to invest about $8 billion in infrastructure during 2006—a 21 percent increase over 2005—and have told us that they plan to continue making infrastructure investments. Although railroads are sufficiently profitable to be investing at record levels today, it is not certain whether future investments will keep pace with projected demand. Railroads secure private benefits by investing in their infrastructure and have many considerations in making new infrastructure investments, such as the need to obtain the highest return on their investment, optimize the performance of their network, and respond to other significant capital needs of rail operations. The railroads we interviewed were generally unwilling to discuss their future investment plans with us, as this is business proprietary information. We are therefore unable to comment on how companies are likely to choose among their competing investment priorities for the future. In addition to securing private benefits for railroad networks, investments in rail projects can produce benefits for the public—some of these public benefits are, as CBO's report pointed out, large in comparison to anticipated private railroad benefits. For example, shifting truck freight traffic to railroads can reduce highway congestion and reduce or avoid public expenditures that otherwise would be needed to build additional highway capacity or provide additional maintenance to accommodate growth in truck traffic. These and other public benefits can be realized at the national, state, and local levels. For example, rail investment may generate benefits to the national economy by lowering the costs of producing and distributing goods. Since rail uses less fuel than trucks, energy use and emissions may be reduced. In contrast, a rail project that eliminates or improves a highway-rail crossing could deliver primarily local public safety benefits by reducing accidents, time lost waiting for trains to pass, and pollution and noise from idling trains and lessening the risk of delays for emergency vehicles at crossings. In pursuit of these public gains, the federal and state governments have been increasingly participating in freight rail improvement projects. For example, the State of Delaware spent about $14 million to rehabilitate a bridge in exchange for receiving a fee for each railroad car that crosses the bridge. The federal government has also become more involved in freight rail partnerships. Specifically, in 1997 the U.S. Department of Transportation provided a $400 million loan to the Alameda Corridor Transportation Authority for the Alameda Corridor project, which included a number of rail and road improvements to consolidate freight traveling to and from the ports of Los Angeles and Long Beach. These ports are a significant gateway for freight that is imported from Asia and distributed throughout the U.S. In addition, in 2005, Congress provided $100 million to the Chicago CREATE project to improve the rail infrastructure and ease congestion in and around Chicago—the busiest freight rail center in the U.S. In the years ahead, Congress is likely to face additional decisions regarding potential federal policy responses and the federal role in the nation's freight railroad infrastructure. Based on our ongoing and past work, I would like to make three observations.
First, any potential federal policy response should recognize that subsidies can potentially distort the performance of markets and that the federal fiscal environment is highly constrained. Second, any such response should occur in the context of a comprehensive National Freight Policy that reflects system performance-based goals and a framework for intergovernmental and public-private cooperation. DOT initiated this effort by publishing a draft Framework for a National Freight Policy this year for comment. Third, federal involvement should occur only where demonstrable, wide-ranging public benefits exist along with a mechanism to appropriately allocate the cost of financing these benefits between the public and private sectors and, to the extent possible, should focus on benefits that are more national than local in scope. Although new freight rail investment tax credits have been suggested, our past work has pointed out that it is difficult to target this approach to desired activities and outcomes and ensure that it generates the desired new investments as opposed to subsidizing investment that would have been undertaken at some point anyway. This approach can also have problematic fiscal impacts because it either lowers tax revenues or leads to higher overall tax rates to offset revenue losses. We will be discussing these areas in greater detail when we issue our report. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For questions regarding this testimony, please contact JayEtta Z. Hecker on (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony include Ashley Alley, Steve Brown, Matthew T. Cail, Sheranda S. Campbell, Steve Cohen, Elizabeth Eisenstadt, Libby Halperin, Richard Jorgenson, Tom McCool, John Mingus, Josh H. Ormond, and John W. Shumann. Regulation: Changes in Freight Railroad Rates from 1997 through 2000. GAO-02-524. Washington, D.C.: June 7, 2002. Freight Railroad Regulation: Surface Transportation Board's Oversight Could Benefit From Evidence Better Identifying How Mergers Affect Rates. GAO-01-689. Washington, D.C.: July 5, 2001. Railroad Regulation: Current Issues Associated With the Rate Relief Process. GAO/RCED-99-46. Washington, D.C.: April 29, 1999. Railroad Regulation: Changes in Railroad Rates and Service Quality Since 1990. GAO/RCED-99-93. Washington, D.C.: April 6, 1999. Railroad Competitiveness: Federal Laws and Policies Affect Railroad Competitiveness. GAO/RCED-92-16. Washington, D.C.: November 5, 1991. Railroad Regulation: Economic and Financial Impacts of the Staggers Rail Act of 1980. GAO/RCED-90-80. Washington, D.C.: May 16, 1990. Railroad Regulation: Shipper Experiences and Current Issues in ICC Regulation of Rail Rates. GAO/RCED-87-119. Washington, D.C.: September 9, 1987. Railroad Regulation: Competitive Access and Its Effects on Selected Railroads and Shippers. GAO/RCED-87-109. Washington, D.C.: June 18, 1987. Railroad Revenues: Analysis of Alternative Methods To Measure Revenue Adequacy. GAO/RCED-87-15BR. Washington, D.C.: October 2, 1986. Shipper Rail Rates: Interstate Commerce Commission's Handling of Complaints. GAO/RCED-86-54FS. Washington, D.C.: January 30, 1986. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Staggers Rail Act of 1980 largely deregulated the freight railroad industry, giving the railroads freedom to price their services according to market conditions and encouraging greater reliance on competition to set rates. The act recognized the need for railroads to use demand-based differential pricing in the deregulated environment and to recover costs by setting higher rates for shippers with fewer transportation alternatives. The act also recognized that some shippers might not have access to competitive alternatives and might be subject to unreasonably high rates. It established a threshold for rate relief and granted the Interstate Commerce Commission and the Surface Transportation Board (STB) the authority to develop a rate relief process for those "captive" shippers. This testimony provides preliminary results on GAO's ongoing work and addresses (1) the changes that have occurred in the freight railroad industry since the enactment of the Staggers Rail Act, including changes in rail rates and competition in the industry, (2) the alternative approaches that have been proposed and could be considered to address remaining competition and captivity concerns, and (3) the projections for freight traffic demand over the next 15 to 25 years, the freight railroad industry's projected ability to meet that demand, and potential federal policy responses. To fulfill these objectives, GAO examined STB data, interviewed affected parties, and held an expert panel. The changes that have occurred in the railroad industry since the enactment of the Staggers Rail Act are widely viewed as positive. Railroad industry financial health improved substantially and rates generally declined between 1985 and 2000, but increased slightly from 2001 through 2004. Concerns about competition and captivity remain because traffic is concentrated in fewer railroads and some shippers are paying significantly higher rates than others. It is difficult to precisely determine the number of shippers that are "captive" because proxy measures can overstate or understate captivity. However, GAO's preliminary analysis indicates that while captivity may be dropping, the share of potentially captive shippers that are paying the highest rates--those substantially above the threshold for rate relief--has increased. A number of alternative approaches have been suggested by shipper groups and others to address remaining concerns about competition and captivity; however, any alternative approaches should be carefully considered. Two areas are particularly integral to further improvement. First, while STB has broad authority to investigate industry practices and has assessed competition--generally in railroad merger cases--there has been little assessment by any federal agency of the state of competition and of where specific areas of inadequate competition and the inappropriate exercise of market power might exist. Such an assessment would allow decisionmakers to identify areas where competition is lacking and to assess the need for and merits of targeted approaches to address this situation. These approaches include requiring reciprocal switching arrangements, which allow one railroad to switch railcars of another railroad, and/or terminal access agreements, which permit one railroad to use another's terminals. 
Second, a number of different approaches have been suggested that could make the rate relief process less expensive and more expeditious, and thus potentially more accessible, such as arbitration and increased use of simplified guidelines. Each of the proposed approaches has both advantages and drawbacks. Any alternative approach to address competition and captivity should be carefully considered to ensure that the approach will achieve the important balance set out in the Staggers Rail Act of allowing the railroads to earn adequate revenues while assuring protection for captive shippers from unreasonable rates. Significant increases in freight traffic over the next 15 to 25 years are forecasted, and the railroad industry's ability to meet future demand is largely uncertain. Investments in rail projects can produce benefits for the public--for example, shifting truck freight traffic to railroads can reduce highway congestion. As a result, the federal and state governments have been increasingly participating in freight rail improvement projects--for example, Congress provided $100 million to the CREATE project in 2005 to improve the rail network in Chicago. Congress is likely to face additional decisions in the years ahead regarding federal policy toward the nation's freight railroad system. GAO would note, based on past work, that federal involvement should occur only where demonstrable public benefits exist, and where a mechanism is in place to appropriately allocate the cost of financing these benefits between the private and public sectors, and between national, state, and local interests.
The Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Commission Act) directed DHS to create a plan for sharing transportation security-related information among public and private entities that have a stake in protecting the nation's transportation system, including passenger and freight rail. This plan—first issued in July 2008—is now called the Transportation Security Information Sharing Environment (TSISE). The TSISE describes, among other things, the information-sharing process. TSA disseminates security information through several information products, including reports, assessments, and briefings, among others. These products are distributed through mechanisms including the Homeland Security Information Network and mechanisms sponsored by industry, such as the Association of American Railroads' Railway Alert Network, among others. TSA is also specifically responsible for receiving, assessing, and distributing intelligence information related to potential threats and significant security concerns (rail security incidents) involving the nation's rail system. Specifically, in 2008, TSA issued a regulation requiring U.S. rail systems to report all rail security incidents to TSA's Transportation Security Operations Center (TSOC), among other things. The TSOC is an operations center open 24 hours a day, 7 days a week, that serves as TSA's main point of contact for monitoring security-related incidents or crises in all modes of transportation. The regulation also authorizes TSA officials to view, inspect, and copy rail agencies' records as necessary to enforce the rail security incident reporting requirements. This regulation is supported by TSA policies and guidance, including the Transportation Security Inspector Inspections Handbook, the National Investigations and Enforcement Manual, and the Compliance Work Plan for Transportation Security Inspectors. TSA's regulation is intended to provide the agency with essential information on rail security incidents so that TSA can conduct comprehensive intelligence analysis, threat assessment, and allocation of security resources, among other things. According to the regulation, potential threats and significant security concerns that must be reported to the TSOC include bomb threats, suspicious items, or indications of tampering with rail cars, among others. Within TSA, different offices are responsible for sharing transportation security-related information and for implementing and enforcing the rail security incident reporting requirement. For instance, TSA's Office of Security Policy and Industry Engagement (OSPIE) is the primary point of contact for sharing information with private sector stakeholders, and is responsible for using incident reports and analyses, among other things, to develop strategies, policies, and programs for rail security, including operational security activities, training exercises, public awareness, and technology. TSA's Office of Intelligence and Analysis (OIA) receives intelligence information regarding threats to transportation and designs intelligence products intended for officials in TSA, other parts of the federal government, state and local officials, and industry officials, including rail agency security coordinators and law enforcement officials. The TSOC, managed by TSA's Office of Law Enforcement/Federal Air Marshal Service, is the TSA entity primarily responsible for collecting and disseminating information about rail security incidents.
Once notified of a rail security incident, TSOC officials are responsible for inputting the incident information into their incident management database known as WebEOC, and for disseminating incident reports that they deem high priority or significant to selected TSA officials; other federal, state, and local government officials; and selected rail agencies' law enforcement officials. Figure 1 shows the intended steps and responsibilities of TSA components involved in the rail security incident reporting process. TSA's Office of Security Operations (OSO) is responsible for overseeing and enforcing the incident reporting requirement. OSO, which manages TSA's inspection program for the aviation and surface modes of transportation, deploys approximately 270 transportation security inspectors-surface (TSI-S) nationwide through its Surface Compliance Branch. The TSI-Ss are responsible for, among other things, providing clarification to rail agencies regarding the incident reporting process and for overseeing rail agencies' compliance with the reporting requirement by conducting inspections to ensure that incidents were properly reported to the TSOC. Six regional security inspectors-surface (RSI-S) within the Compliance Programs Division are responsible for providing national oversight of local surface inspection, assessment, and operational activities. In June 2014, we found that TSA had some mechanisms in place to collect stakeholder feedback on the security-related information products it disseminates and had initiated efforts to improve how it obtains customer feedback, but had not developed a systematic process for collecting and integrating such feedback. Specifically, in February 2014, TSA reconvened its Information Sharing Integrated Project Team (IPT), whose charter included, among other things, milestones and time frames for developing a centralized management framework to capture stakeholder satisfaction survey data on all of TSA's security-related products and the systems used to distribute these products. However, at the time of our June 2014 report, the IPT Charter did not specify how TSA planned to systematically collect, document, and incorporate informal feedback—a key mechanism used by the majority of the stakeholders we surveyed, and a mechanism TSA officials told us they utilize to improve information sharing. For instance, the rail industry provided TSA with a list of areas for emphasis in intelligence analysis in December 2012, and TSA subsequently initiated a product line focusing on indications and warnings associated with disrupted or successful terrorist attacks. TSA officials stated that they further refined one of the products as a result of a stakeholder requesting information on tactics used in foreign rail attacks. In 2013, one TSA component built a system to track informal information sharing with stakeholders at meetings and conferences, and through e-mail, but TSA officials stated that the data were not used for operational purposes, and TSA had no plans to incorporate this system into its centralized management framework because the IPT had decided to focus its initial efforts on developing a survey mechanism. According to our June 2014 survey results, surface transportation stakeholders were generally satisfied with TSA's security-related products and the mechanisms used to disseminate them.
In particular, 63 percent of rail stakeholders (70 of 111) reported that they were satisfied with the products they received in 2013, and 54 percent (59 of 110) reported that they were satisfied with security-related information sharing mechanisms. However, because TSA lacked specific plans and documentation related to improving its efforts to incorporate all of its stakeholder feedback, it was unclear how, or if, TSA planned to use stakeholder feedback to improve information sharing. As a result of these findings, we recommended that TSA include in its planned customer feedback framework a systematic process to document informal feedback, and how it incorporates all of the feedback TSA receives, both formal and informal. TSA concurred, and in response, by April 2015, had taken actions to develop these processes. Specifically, TSA developed a standard operating procedure to organize how its offices solicit, receive, respond to, and document both formal and informal customer feedback on its information-sharing efforts, which delineates a systematic process for doing so. TSA also developed a TSA-wide standard survey for its offices to use to obtain formal and informal feedback on specific products, and created an information-sharing e-mail inbox to which all survey responses will be sent, evaluated, and distributed to the appropriate office for action. We have not evaluated these actions, but if implemented effectively, we believe that TSA will now be better positioned to meet stakeholder needs for security-related information. In December 2012, we found TSA had made limited use of the rail security incident information it had collected from rail agencies, in part because it did not have a systematic process for conducting trend analysis. TSA’s stated purpose for collecting rail security incident information was to allow TSA to “connect the dots” by conducting trend analysis that could help TSA and rail agencies develop targeted security measures. However, the incident information provided to rail agencies by TSA was generally limited to descriptions of specific incidents with minimal accompanying analysis. As a result, officials from passenger rail agencies we spoke with generally found little value in TSA’s incident reporting process, because it was unclear to them how, if at all, the information was being used by TSA to identify trends or threats that could help TSA and rail agencies develop appropriate security measures. However, as we reported in December 2012, opportunities for more sophisticated trend analysis existed. For example, the freight industry, through the Railway Alert Network—which is managed by the Association of American Railroads, a rail industry group—identified a trend where individuals were reportedly impersonating federal officials. In coordination with TSA, the Railway Alert Network subsequently issued guidance to its member organizations designed to increase awareness of this trend among freight rail employees and provide descriptive information on steps to take in response. The Railway Alert Network identified this trend through analysis of incident reporting from multiple freight railroads. In each case, the incident had been reported by a railroad employee and was contained in TSA’s incident management system, WebEOC. 
On the basis of these findings, in December 2012, we recommended that TSA establish a systematic process for regularly conducting trend analysis of the rail security incident data, in an effort to identify potential security trends that could help the agency anticipate or prevent an attack against passenger rail and develop recommended security measures. TSA concurred with this recommendation and by August 2013 had developed a new capability for identifying trends in the rail security incident data, known as the Surface Compliance Trend Analysis Network (SCAN). SCAN is designed to identify linkages between incidents captured in various sources of data, assemble detailed information about these incidents, and accurately analyze the data to enhance the agency’s ability to detect impending threats. According to TSA officials, SCAN consists of three elements: two OSO surface detailees located at TSOC, enhanced IT capabilities, and a new rail security incident analysis product for stakeholders. According to TSA, one of the key functions of the surface detailees is to continuously look for trends and patterns in the rail security incident data that are reported to TSOC, and to coordinate with OSPIE and OIA to conduct further investigations into potential trends. As I will discuss later in this statement, TSA has also made improvements to WebEOC, including steps to improve the completeness and accuracy of the data and the ability to produce basic summary reports, which we believe should facilitate this type of continuous trend analysis. TSA generates a Trend Analysis Report for any potential security trends the surface detailees identify from the rail security incident data. The Trend Analysis Report integrates incident information from WebEOC with information from multiple other sources, including TSA’s compliance database and media reports, and provides rail agencies and other stakeholders with analysis of possible security issues that could affect operations as a result of these trends. According to TSA officials, since SCAN was established, approximately 13 Trend Analysis Reports have been produced and disseminated to local TSA inspection officials and rail agencies. Although we have not assessed the effectiveness of these efforts to better utilize rail security information, we believe these actions address the intent of our recommendation. Further, if implemented effectively, they should better position TSA to provide valuable analysis on rail security incidents and to develop recommended security measures for rail agencies, as appropriate. In December 2012, we found that TSA had not provided consistent oversight of the implementation of the rail security reporting requirement, which led to considerable variation in the types and number of passenger rail security incidents reported. Specifically, we found that TSA headquarters had not provided guidance to local TSA inspection officials, the primary TSA points of contact for rail agencies, about the types of rail security incidents that must be reported, a fact that contributed to inconsistent interpretation of the regulation by local TSA inspection officials. While some variation was expected in the number of rail security incidents that rail agencies reported because of differences in agency size, geographic location, and ridership, passenger rail agencies we spoke with at the time reported receiving inconsistent feedback from their local TSA officials regarding certain types of incidents, such as those involving weapons. 
As a result, we found that, for 7 of the 19 passenger rail agencies included in our review, the number of incidents reported per million riders ranged from 0.25 to 23.15. The variation we identified was compounded by inconsistencies in compliance inspections and enforcement actions, in part because of limited utilization of oversight mechanisms at the headquarters level. For example, in December 2012, we found that TSA established the RSI-S position as a primary oversight mechanism at the headquarters level for monitoring rail security compliance inspections and enforcement actions to help ensure consistency across field offices. However, at the time of our report, the RSI-S was not part of the formal inspection process and had no authority to ensure that inspections were conducted consistently. We also found that the RSI-S had limited visibility over when and where inspections were completed or enforcement actions were taken because TSA lacked a process to systematically provide the RSI-S with this information during the course of normal operations. As a result, our analysis of inspection data from January 1, 2011, through June 30, 2012, showed that average monthly inspections for the 19 rail agencies in our review ranged from about eight inspections to no inspections, and there was variation in the regularity with which inspections occurred. We also found that TSA inconsistently applied enforcement actions against passenger rail agencies for not complying with the reporting requirement. For example, TSA took enforcement action against an agency for not reporting an incident involving a knife, but did not take action against another agency for not reporting similar incidents, despite having been inspected. On the basis of these findings, in December 2012, we recommended that TSA: (1) develop and disseminate written guidance for local TSA inspection officials and rail agencies that clarifies the types of incidents that should be reported to the TSOC and (2) enhance and utilize existing oversight mechanisms at the headquarters level, as intended, to provide management oversight of local compliance inspections and enforcement actions. TSA concurred with both of these recommendations and has taken actions to implement them. Specifically, in September 2013, TSA disseminated written guidance to local TSA inspection officials and passenger and freight rail agencies that provides clarification about the requirements of the rail security incident reporting process. This guidance includes examples and descriptions of the types of incidents that should be reported under the regulatory criteria, as well as details about the type of information that should be included in the incident report provided to the TSOC. Further, as of August 2013, TSA had established an RSI-S dashboard report that provides weekly, monthly, and quarterly information about the number of inspection reports that have been reviewed, accepted, and rejected. According to TSA officials, this helps ensure that rail agencies are inspected regularly by providing the RSI-Ss with greater insight into inspection activities. TSA has also enhanced the utilization of the RSI-Ss by providing them with the ability to review both passenger and freight rail inspections before the inspection reports are finalized and enforcement action is taken. According to TSA officials, this allows the RSI-Ss to ensure that enforcement actions are applied consistently by local TSA inspection officials.
TSA also developed a mechanism for tracking the recommendations RSI-Ss make to local TSA inspection officials regarding changes to local compliance inspections, as well as any actions that are taken in response. Collectively, we believe that these changes should allow the RSI-Ss to provide better management oversight of passenger and freight rail regulatory inspections and enforcement actions, though we have not assessed whether they have done so. We also believe these actions, if implemented effectively, will help ensure that the rail security incident reporting process is consistently implemented and enforced, and will address the intent of our recommendations. In December 2012, we also found that TSA's incident management data system, known as WebEOC, had incomplete information, was prone to data entry errors, and had other limitations that inhibited TSA's ability to search and extract basic information. These weaknesses in WebEOC hindered TSA's ability to use rail security incident data to identify security trends or potential threats. Specifically, at the time of our 2012 report, TSA did not have an established process for ensuring that WebEOC was updated to include information about rail security incidents that had not been properly reported to the TSOC. As a result, of the 18 findings of noncompliance we reviewed that were a result of failure to report an incident, 13 were never entered into WebEOC, and consequently could not be used by TSA to identify potential security trends. In addition, in December 2012, we found that TSA's guidance for officials responsible for entering incident data was insufficient, a fact that may have contributed to data entry errors in key fields, including the incident type and the mode of transportation (such as mass transit or freight rail). At the time of our report, because of data errors and technical limitations in WebEOC, TSA also could not provide us with basic summary information about the rail security incident data contained in WebEOC, such as the number of incidents reported by incident type (e.g., suspicious item or bomb threat), by a particular rail agency, or the total number of rail security incidents that have been reported to the TSOC. Without the ability to identify this information on the number of incidents by type or the total number of incidents, we concluded that TSA faced challenges determining whether patterns or trends existed in the data, as the reporting system was intended to do. On the basis of these findings, in December 2012 we recommended that TSA (1) establish a process for updating WebEOC when incidents that had not previously been reported are discovered through compliance activities, and (2) develop guidance for TSOC officials that includes definitions of data entry options to reduce errors resulting from data entry problems. TSA concurred with both of these recommendations and has taken actions to implement them. Specifically, in March 2013, TSA established a process for the surface detailee position, discussed earlier in this statement, to update WebEOC when previously unreported incidents are discovered through compliance activities. Additionally, in October 2014, TSA officials reported that they had updated the guidance used by TSOC officials responsible for entering incident data into WebEOC to include definitions of incident types. TSA has also made changes to WebEOC that will allow officials to search for basic information, such as the total number of certain types of incidents, required to facilitate analysis.
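The kind of basic summary described above can be illustrated with a short sketch. The record layout, field names, and values below are hypothetical and do not reflect WebEOC's actual schema or data; the sketch only shows counts of incidents by type, by rail agency, and in total.

# Hypothetical illustration of the basic summaries described above: counts of
# reported incidents by type, by rail agency, and in total. The records and
# field names are invented and do not reflect WebEOC's actual schema or data.

from collections import Counter

incidents = [
    {"agency": "Rail Agency A", "type": "suspicious item"},
    {"agency": "Rail Agency A", "type": "bomb threat"},
    {"agency": "Rail Agency B", "type": "suspicious item"},
    {"agency": "Rail Agency C", "type": "tampering with rail cars"},
    {"agency": "Rail Agency B", "type": "suspicious item"},
]

by_type = Counter(rec["type"] for rec in incidents)
by_agency = Counter(rec["agency"] for rec in incidents)

print("Total incidents reported:", len(incidents))
print("By incident type:", dict(by_type))    # e.g., {'suspicious item': 3, ...}
print("By rail agency:", dict(by_agency))    # e.g., {'Rail Agency B': 2, ...}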
We have not reevaluated the data contained in WebEOC, but we believe that the changes TSA has made should allow the agency to conduct continuous analysis of the rail security incident data to identify potential trends. We believe these actions address the intent of our recommendations and, if implemented effectively, should improve the accuracy and completeness of the incident data in WebEOC. This should provide TSA with a more comprehensive picture of security incidents as well as allow it to better identify any trends or patterns. Chairmen Katko and King, Ranking Members Rice and Higgins, and members of the subcommittees this concludes my prepared statement. I would be happy to respond to any questions you may have at this time. For questions about this statement, please contact Jennifer Grover at (202) 512-7141 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Chris Ferencik (Assistant Director), Michele Fejfar, Paul Hobart, Adam Hoffman, Tracey King, Elizabeth Kowalewski, Brendan Kretzschmar, Kelly Rubin, and Christopher Yun. Key contributors to the previous work that this testimony is based on are listed in those reports. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. surface transportation system's size and importance to the country's safety, security, and economic well-being make it an attractive target for terrorists. Within the federal government, TSA–a component of the Department of Homeland Security–is the primary federal agency responsible for overseeing and enhancing the security of the surface transportation system. A key component of this responsibility is ensuring that security-related information is collected, analyzed, and shared effectively across all modes, including rail. In 2008, TSA issued a regulation requiring U.S. passenger rail agencies to report all potential threats and significant security concerns to TSA, among other things. This testimony addresses the extent to which TSA has (1) developed systematic processes for integrating stakeholder feedback about security-related information it provides and analyzing trends in reported rail security incidents and (2) ensured consistent implementation of rail security incident reporting requirements. This statement is based on related GAO reports issued in June 2014 and December 2012, including selected updates on TSA's efforts to implement GAO's prior recommendations related to rail security and information sharing. For the selected updates, GAO reviewed related documentation, including tools TSA developed to provide oversight. GAO also interviewed TSA officials. In June 2014, GAO found that the Transportation Security Administration (TSA) did not have a systematic process for incorporating stakeholder feedback to improve security-related information sharing and recommended that TSA systematically document and incorporate stakeholder feedback. TSA concurred with this recommendation and, in April 2015, TSA developed a standard operating procedure to help ensure proper evaluation and consideration of all feedback TSA receives. In December 2012, GAO found TSA had made limited use of the rail security incident information it had collected from rail agencies, in part because it did not have a systematic process for conducting trend analysis. TSA's purpose for collecting this information was to allow TSA to "connect the dots" through trend analysis. However, the incident information provided to rail agencies by TSA was generally limited to descriptions of specific incidents. As a result, officials from passenger rail agencies GAO spoke with reported that they generally found little value in TSA's incident reporting requirement. On the basis of these findings, GAO recommended that TSA establish a systematic process for regularly conducting trend analysis of the rail security incident data. Although GAO has not assessed the effectiveness of TSA's efforts, by August 2013, TSA had developed a new analysis capability that, among other things, produces Trend Analysis Reports from the incident data. In December 2012, GAO found that TSA had not provided consistent oversight of its rail security reporting requirement, which led to variation in the types and number of passenger rail security incidents reported. Specifically, GAO found that TSA headquarters had not provided guidance to local TSA inspection officials, the primary TSA points of contact for rail agencies, about the types of rail security incidents that must be reported, which contributed to inconsistent interpretation of the regulation. 
The variation in reporting was compounded by inconsistencies in compliance inspections and enforcement actions, in part because of limited use of oversight mechanisms at the headquarters level. GAO also found that TSA's incident management data system, WebEOC, had incomplete information, was prone to data entry errors, and had other limitations that inhibited TSA's ability to search and extract basic information. On the basis of these findings, GAO recommended that TSA (1) develop and disseminate written guidance on the types of incidents that should be reported, (2) enhance existing oversight mechanisms for compliance inspections and enforcement actions, (3) establish a process for updating WebEOC with previously unreported incidents, and (4) develop guidance to reduce data entry errors. TSA concurred with these recommendations and has taken actions to implement them. Specifically, in September 2013, TSA disseminated written guidance to local TSA inspection officials and passenger and freight rail agencies that provides clarification about the rail security incident reporting requirement. In August 2013, TSA enhanced existing oversight mechanisms by creating an inspection review mechanism, among other things. TSA also established a process for updating WebEOC in March 2013, and in October 2014, officials reported that they had updated the guidance used by officials responsible for entering incident data to reduce data entry errors associated with incident types. Although GAO has not assessed the effectiveness of these efforts, they address the intent of the recommendations. GAO is making no new recommendations in this statement.
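To make the notion of trend analysis over reported incidents more concrete, the sketch below shows one simple way incident records could be rolled up into monthly counts and given a coarse trend label. The record layout, incident categories, and data are hypothetical illustrations only; they do not represent TSA's actual WebEOC schema, Trend Analysis Reports, or analytic methodology.

```python
from collections import Counter
from datetime import date

# Hypothetical incident records; a real analysis would draw on an incident
# management system such as WebEOC (the field names here are invented).
incidents = [
    {"date": date(2013, 1, 14), "category": "suspicious activity"},
    {"date": date(2013, 1, 29), "category": "trespassing"},
    {"date": date(2013, 2, 3),  "category": "suspicious activity"},
    {"date": date(2013, 2, 21), "category": "suspicious activity"},
    {"date": date(2013, 3, 9),  "category": "trespassing"},
]

def monthly_counts(records, category):
    """Count reported incidents of one category per (year, month)."""
    return Counter(
        (r["date"].year, r["date"].month)
        for r in records
        if r["category"] == category
    )

def simple_trend(counts):
    """Compare the earliest and latest months observed and label the change."""
    if len(counts) < 2:
        return "insufficient data"
    ordered = [counts[month] for month in sorted(counts)]
    if ordered[-1] > ordered[0]:
        return "increasing"
    if ordered[-1] < ordered[0]:
        return "decreasing"
    return "flat"

by_month = monthly_counts(incidents, "suspicious activity")
print(sorted(by_month.items()))   # [((2013, 1), 1), ((2013, 2), 2)]
print(simple_trend(by_month))     # increasing
```

A production analysis would, of course, work from the full incident database, normalize categories against the written reporting guidance, and use more robust statistics than a first-to-last comparison; the sketch only illustrates the kind of roll-up that turns individual reports into trend information.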
The Chesapeake Bay is the largest of the nation’s estuaries, measuring nearly 200 miles long and 35 miles wide at its widest point; with its tributaries, the bay covers more than 4,500 square miles. However, the bay is relatively shallow, averaging only 21 feet deep. Roughly half of the bay’s water comes from the Atlantic Ocean, and the other half is freshwater that drains from the land and enters the bay through its many rivers and streams in the watershed basin. The Susquehanna River, which flows through Maryland, New York, and Pennsylvania, provides about 50 percent of the freshwater that enters the bay. As shown in figure 1, the bay’s watershed covers 64,000 square miles and spans parts of six states—Delaware, Maryland, New York, Pennsylvania, Virginia, and West Virginia—and the District of Columbia. The Chesapeake Bay is also biologically diverse, providing habitat for a wide variety of fish, shellfish, other animals, and plants. Blue crab, ducks, herring, oysters, shad, and striped bass are just some of the resources that live in or on the bay.

Over time, the bay’s ecosystem has deteriorated. The bay’s “dead zones”—where too little oxygen is available to support fish and shellfish—have increased, and many species of fish and shellfish have experienced major declines in population. The deterioration has occurred primarily because of excess amounts of nutrients entering the bay, which damage species and plant populations; the single largest source of these pollutants is agricultural runoff. Overharvesting key species, such as oysters and crabs, has also contributed to the deterioration of the ecosystem. In addition, population growth and development have further stressed the ecosystem. For example, in the past decade, the amount of land in the watershed covered by impervious surfaces—surfaces through which water cannot flow—increased by about 41 percent, increasing the amount of polluted runoff that enters into streams and rivers and eventually runs into the bay. With a very high land-to-water ratio, the bay is particularly sensitive to activities on land. Figure 2 shows some of the land activities that contribute to pollution in the bay’s ecosystem.

The decline in the bay’s living resources has drawn a great deal of public and political attention. Efforts to manage the bay’s ecosystem and protect its living resources began as early as the 1930s and have continued through the present. In 1980, Maryland and Virginia, later joined by Pennsylvania, established the Chesapeake Bay Commission to serve as an advisory body on the Chesapeake Bay to their state legislatures and as a liaison to Congress. On December 9, 1983, the Governors of Maryland and Virginia; the Lieutenant Governor of Pennsylvania; the Mayor of the District of Columbia; the Administrator of EPA; and the Chair of the Chesapeake Bay Commission signed the first Chesapeake Bay agreement. Their agreement resulted in the Chesapeake Bay Program, a partnership that directs and conducts the restoration of the bay. The signatories to the agreement reaffirmed their commitment to restore the bay in 1987 and again in 1992. They signed the most current agreement, Chesapeake 2000, on June 28, 2000. Chesapeake 2000 envisions an ecosystem with abundant, diverse populations of living resources fed by healthy streams and rivers that sustain strong local and regional economies and a unique quality of life.
The agreement has served as the Bay Program’s strategic plan, and it outlines five broad goals and 102 commitments for the restoration effort. Appendix II lists the goals and commitments outlined in Chesapeake 2000. The Bay Program, led by the Chesapeake Executive Council, has many partners, including federal agencies, states, academic institutions, and others (see app. III for a list of partners). While the Chesapeake Bay Program is a voluntary partnership among the states and the federal government, some activities of the Chesapeake Bay Program are implemented to meet the requirements of federal or state law. For example, the responsibility to establish water quality standards is both a commitment under the Chesapeake 2000 agreement and a requirement under the federal Clean Water Act. The Bay Program has seven committees and eight subcommittees, which form the organizational and planning structure for the restoration effort. In addition, the subcommittees have many work groups that plan and implement various aspects of the restoration effort. The organizational structure of the Bay Program is shown in figure 3. As the only federal signatory to the Chesapeake Bay agreements, EPA is responsible for spearheading the federal effort within the Bay Program through its Chesapeake Bay Program Office. Amendments to the Clean Water Act direct the Chesapeake Bay Program Office to provide support to the Chesapeake Executive Council. Specifically, the Chesapeake Bay Program Office is to, among other things, develop and make available information about the environmental quality and living resources of the Chesapeake Bay ecosystem; in cooperation with appropriate federal, state, and local authorities, help the signatories to the Chesapeake Bay agreement develop and implement specific plans to carry out their responsibilities; and coordinate EPA’s actions with those of other appropriate entities to develop strategies to improve the water quality and living resources in the Chesapeake Bay ecosystem. In addition, the Administrator of EPA, in coordination with other members of the Chesapeake Executive Council, must ensure that management plans are developed and that the signatories implement the plans to achieve and maintain, among other things, (1) the nutrient goals for the quantity of nitrogen and phosphorus entering the Chesapeake Bay and its watershed and (2) the water quality requirements necessary to restore living resources in the Chesapeake Bay ecosystem. The amendments to the Clean Water Act also directed the Administrator of EPA to submit a report to Congress every 5 years on the condition of the bay’s ecosystem. Although the Bay Program has established 101 measures, it has not yet developed an integrated approach that would allow it to translate these individual measures into an assessment of overall progress toward achieving the five broad restoration goals outlined in Chesapeake 2000. Instead, the Bay Program’s measures either assess progress toward achieving the restoration commitments that are quantifiable or provide information for making management decisions. The Bay Program has recognized that it may need an integrated approach to assess the overall progress of the restoration effort and established a task team to undertake this effort. The Bay Program has established 101 measures, of which 46 are appropriate for assessing progress made in achieving 18 of the 21 quantifiable commitments contained in Chesapeake 2000. 
The number of measures associated with each of these commitments varies; the more complex the assessment, the more measures the Bay Program has developed and uses to assess progress. For example, assessing progress toward the commitment of correcting the nutrient- and sediment-related problems in the Chesapeake Bay and its tidal tributaries by 2010 under the Water Quality Protection and Restoration goal is complex, requiring the measurement of several pollutants and various aspects of water quality. The Bay Program uses 17 measures to assess progress for this commitment. In contrast, it is less complex to assess the commitment under the Sound Land Use goal to, by 2010, expand by 30 percent the system of public access points to the bay, its tributaries, and related resource sites in an environmentally sensitive manner. For this commitment, the Bay Program uses only one measure to track the number of new and enhanced public access sites within the Chesapeake Bay watershed. According to the Chesapeake Bay Program Office, because no other restoration effort had developed measures that it could use, the program had to develop nearly all of the underlying science and methodologies for its measures. In addition, to ensure the appropriateness of these measures, the Chesapeake Bay Program Office requires a rigorous review of all of the measures before they are adopted. For the most part, our expert panel agreed that the Bay Program has established appropriate measures to assess specific aspects of the restoration effort. Several members of the Bay Program’s Scientific and Technical Advisory Committee echoed this view. The remaining three quantifiable commitments, for which the Bay Program has not yet established any measures, include the following:

• By 2010, establish a goal of implementing plans to preserve key wetlands while addressing surrounding land use in 25 percent of the land area of each state’s bay watershed.

• By 2010, the District of Columbia, working with its watershed partners, will reduce pollution loads to the Anacostia River in order to eliminate public health concerns and achieve the living resource, water quality, and habitat goals of Chesapeake 2000 and past agreements.

• By 2003, develop partnerships with at least 30 interpretive sites to enhance their presentation of bay-related themes.

The Bay Program has also developed 55 other measures to provide information it needs to make management decisions. For example, under the Water Quality Protection and Restoration goal, the Bay Program has made a commitment to assess the effects of airborne nitrogen compounds and chemical contaminants in the bay ecosystem and to help establish reduction goals for these contaminants. To help inform decision making for this commitment, the Bay Program has a measure for estimated vehicle emissions compared with vehicle miles traveled. In addition, for the commitment under the Living Resource Protection and Restoration goal to restore fish passage to more than 1,357 miles of river, the Bay Program has two measures that provide information about fish population levels. The Bay Program also uses three measures—the number of residents in the Chesapeake Bay watershed, the relationship between this population and the amount of municipal wastewater flow, and the volume of river water flowing into the Chesapeake Bay—to track general information about the Chesapeake Bay watershed.
While the Bay Program has established measures to assess progress made in meeting some of the individual commitments of Chesapeake 2000, it has not developed an approach that can be used to assess progress toward achieving the five broad restoration goals. For example, the Bay Program has measures for determining trends in individual fish and shellfish populations, such as crabs, oysters, and rockfish, but it has not yet devised a way to integrate those measures to assess the overall progress made in achieving its Living Resource Protection and Restoration goal; the acres of bay grasses in the bay, the acres of wetlands restored, and the miles of forest buffers restored, but it has not developed an approach for integrating those measures to assess the overall progress made in achieving its Vital Habitat Protection and Restoration goal; and attributes of water quality—such as levels of dissolved oxygen, water clarity, and chlorophyll a—but has not developed an approach for combining these measures to determine progress toward achieving its goal of Water Quality Protection and Restoration. According to our expert panel, in a complex ecosystem restoration project like the Chesapeake Bay, overall progress should be assessed by using an integrated approach. This approach should combine measures that provide information on individual species or pollutants into a few broader scale measures that can be used to assess key ecosystem attributes, such as biological conditions. One such framework was developed in 2002 by EPA’s Science Advisory Board and can serve as a tool to assist Bay Program officials in deciding what ecological attributes to measure and how to aggregate measurements into an understandable picture of ecological integrity. In developing such an approach, the Bay Program also faces the challenge of finding a way to incorporate the results achieved in implementing the 81 nonquantifiable commitments contained in Chesapeake 2000 with the results achieved in implementing the 21 quantifiable commitments. For example, under the Water Quality Protection and Restoration goal, the Bay Program has a nonquantifiable commitment to reduce the potential risk of pesticides flowing into the bay by educating watershed residents on best management practices for pesticide use. Not only does the Bay Program currently have no method for measuring the progress made on this commitment, but it also has no approach for integrating these results with the results of the other 19 commitments listed under the water quality goal. Consequently, the program cannot currently assess the progress made in meeting the water quality goal. According to an official from the Chesapeake Bay Program Office, it is difficult to assess progress made in restoring an ecosystem that is as scientifically complex as the bay. The official also noted that the partners have discussed the need for an integrated approach over the past several years but have disagreed on whether the Bay Program could develop an approach that is scientifically defensible, given their limited resources. Recently, however, the partners are more optimistic that an integrated approach can be developed that will provide a clearer sense of the overall health of the bay, as well as restoration progress. In November 2004, a Bay Program task force began an effort to develop, among other things, a framework for organizing the Bay Program’s measures and proposed a structure for how the redesign work would be accomplished by the Bay Program’s subcommittees. 
The Bay Program’s Implementation Committee adopted this framework in April 2005. In July 2005, the Bay Program’s Monitoring and Analysis Subcommittee created a work group to head this effort. The Bay Program plans to have an initial integrated approach developed by January 2006. Mirroring the shortcomings in the program’s measures, the Bay Program’s primary mechanism for reporting on the health status of the bay—the State of the Chesapeake Bay report—does not provide an effective or credible assessment of the bay’s current health status. This is because these reports (1) focus on individual species and pollutants instead of providing an overall assessment of the bay’s health, (2) commingle data on the bay’s health attributes with program actions, and (3) lack an independent review process. As a result, when these reports are issued, they do not provide information in a manner that would allow the public and stakeholders to easily determine how effective program activities have been in improving the health of the bay. The Bay Program has recognized that improvements in its current reporting approach are needed and is developing new reporting formats that it hopes will more clearly describe the bay’s current health and the status of the restoration effort. The State of the Chesapeake Bay report has been issued approximately every 2 to 4 years since 1984 and is intended to provide the citizens of the bay region with a snapshot of the bay’s health. The Bay Program included the 2002 report as part of its required report to Congress on the status of the bay in 2003. However, the State of the Chesapeake Bay report does not effectively communicate the current health status of the bay because instead of providing information on a core set of ecosystem characteristics, it focuses on the status of individual species or pollutants. For example: The 2002 and 2004 State of the Chesapeake Bay reports provided data on oysters, crab, rockfish, and bay grasses, but the reports did not provide an overall assessment of the current status of living resources in the bay or the health of the bay. Instead, these data were reported for each species individually, with graphics showing current levels as well as trends over time. The 2004 State of the Chesapeake Bay report shows a graphic that depicts oyster harvest levels at historic lows, with a mostly decreasing trend over time, and a rockfish graphic that shows a generally increasing population trend over time. However, the report does not provide contextual information that states how these measures are interrelated or explain what the diverging trends mean about the overall health of the bay. The 2004 State of the Chesapeake Bay report shows water clarity and algae trends in the bay’s major tributaries. These data include some varying trends, but the report provides no context for how these trends relate to one another or what the data show, collectively, about the overall health of the bay. According to our expert panel, effective reports on the health of an ecosystem should contain information on key ecological attributes—derived from a broader set of indicators that portray ecosystem conditions. The State of the Chesapeake Bay report, however, does not provide such an overall assessment of the bay’s health. Instead, our expert panel noted that the Bay Program has many fine scale indicators that measure individual aspects within the ecosystem, such as the oyster population or nutrient concentrations.
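To make the idea of such an integrated roll-up concrete, the sketch below scores a handful of indicators against restoration targets, averages them within goal areas, and combines the goal scores into one overall number. It is purely illustrative: a few targets echo figures cited in this report (the 114,000-acre bay grasses goal, the 25,000-acre wetlands goal, the tenfold oyster increase), but the observed values, the equal weights, and the scoring rule are invented assumptions, not the Bay Program’s or EPA’s actual methodology.

```python
# Hypothetical roll-up of fine-scale indicators into goal-level and overall
# scores. Indicator values and weighting choices are illustrative only.

# Each indicator: (observed value, restoration target it is scored against).
indicators = {
    "Water Quality": {
        "dissolved_oxygen_attainment_pct": (40.0, 100.0),
        "water_clarity_attainment_pct":    (25.0, 100.0),
    },
    "Vital Habitat": {
        "bay_grasses_acres":       (65_000.0, 114_000.0),
        "wetlands_restored_acres": (12_000.0, 25_000.0),
    },
    "Living Resources": {
        "oyster_index_vs_1994_baseline": (1.2, 10.0),
        "blue_crab_abundance_index":     (0.8, 1.0),
    },
}

def score(value, target):
    """Scale an indicator to 0-100 relative to its target, capped at 100."""
    return min(value / target, 1.0) * 100.0

def goal_scores(data):
    """Average the indicator scores within each goal (equal weights assumed)."""
    return {
        goal: sum(score(v, t) for v, t in inds.values()) / len(inds)
        for goal, inds in data.items()
    }

def overall_index(data):
    """Average the goal scores into one overall number (again, equal weights)."""
    per_goal = goal_scores(data)
    return sum(per_goal.values()) / len(per_goal), per_goal

overall, per_goal = overall_index(indicators)
for goal, s in per_goal.items():
    print(f"{goal}: {s:.0f}")
print(f"Overall index (hypothetical): {overall:.0f}")
```

A defensible version of this kind of index would rest on the underlying monitoring science, reference conditions, and weighting decisions that the Bay Program and its Scientific and Technical Advisory Committee would have to establish; the point of the sketch is only that fine-scale indicators can, in principle, be aggregated into a small number of goal-level scores and an overall picture of condition.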
While the expert panel agreed that the 2004 report was visually pleasing, they thought that it lacked a clear, overall picture of the bay’s health. They noted that without an overall assessment of the bay’s health, the public would probably not be able to easily and accurately assess the current condition of the bay from the information reported. The credibility of the State of the Chesapeake Bay reports has been undermined by two key factors. First, the Bay Program has commingled data from three sources when reporting on the health of the bay. Specifically, the reports mix information on the bay’s health status with results from a predictive model and the results of specific management actions. The latter two results do little to inform readers about the current health status of the bay and tend to downplay the bay’s actual condition. Second, the Bay Program has not established an independent review process to ensure the objectivity and accuracy of its reports. According to our expert panel, establishing such a process would significantly improve the credibility of the Bay Program’s reports. The Bay Program uses the following three kinds of data when preparing the State of the Chesapeake Bay reports:

Monitoring data describe the actual status of individual species or pollutants in the bay, such as the number of acres of bay grasses or the concentration of nutrients in the tributaries. Generally, these data tend to show a more negative picture of bay health. For example, monitoring data on the blue crab population show that this population is at risk, with below-average levels in all but 2 years since 1991. Similarly, water clarity, which is critical to the health of underwater grasses that provide important habitat for many bay animals, is degrading in 17 areas in the bay and its tributaries, improving in only 1 area, and unchanged in 22 areas. In addition, while trends in the number of acres of bay grasses and dissolved oxygen levels have held relatively constant, the rockfish population has generally increased.

Data on management actions include information on the extent to which the Bay Program has met its management commitments, such as the number of wetland acres that have been restored and the miles of forest buffers that have been established. Generally, these data tend to be more positive. For example, the 2004 State of the Chesapeake Bay report stated that the program was over halfway toward meeting its commitment to restore 25,000 acres of wetlands by 2010. In addition, the miles of forest buffers restored have increased every year since 1996. These actions are important because they contribute to the bay’s health in the long term. However, they do not immediately affect the bay’s health and do not describe its current health condition.

Results from the Bay Program’s predictive model provide estimates of the long-term effect that certain management actions may have in reducing nutrient and sediment loads in the bay. The results from the predictive model are estimates and also tend to depict a positive picture. For example, because the model results indicate that loadings of phosphorus, nitrogen, and sediment have all been reduced since 1985, the 2004 State of the Chesapeake Bay report stated that phosphorus loading decreased from approximately 27 million pounds per year to less than 20 million pounds per year by 2002. These statements, however, are based on estimates from the model and are not based on actual monitoring data of phosphorus concentrations in the bay.
While the modeling results provide important forecast data on future impacts of various management actions, these results, like the results of management actions, do not describe the actual health conditions of the bay. Even though only one of these three types of data describes actual health conditions in the bay, all three types of data are commingled in the Bay Program’s State of the Chesapeake Bay reports. For example, in the 2002 report, the Bay Program reported an increase in the number of river miles opened for migratory fish, which is the result of a management action; in the same section, it also reported a decrease in the oyster population, which is an important factor in determining the bay’s health. Similarly, on a two-page spread in the 2004 report, the Bay Program presented monitoring data on five health indicators and information on three management indicators; the report also includes model results indicating improvements in nitrogen loadings. We believe that by commingling the data in this manner, the Bay Program not only downplays the deteriorated condition of the bay but also confuses the reader by mixing information that is relevant with information that is irrelevant to understanding the current condition of the bay. Our expert panel agreed that a key attribute that influences the credibility of reports on ecosystem health is whether they contain relevant information. Our expert panel also noted that the Bay Program reports are overly oriented to reporting on the progress of the program’s management actions at the expense of communicating information on the health status of the bay. Similarly, while they agreed that models can provide useful information about the impact of management actions on the future state of an ecosystem, these results should not be used in a report on actual health conditions. Several Bay Program partners that we spoke with also noted that the reports tend to be unduly positive and have not effectively communicated the status of the bay’s health. They believe that the reports failed to clearly distinguish between information on health and progress made in implementing management initiatives. In addition, several partners told us that the use of the predictive model to report on the actual health of the bay is inappropriate because the model forecasts potential outcomes of management actions and does not represent the actual health conditions of the bay. The Bay Program recognizes that improvements in its current reporting approach are needed. The program is also developing new reporting formats that it believes will more clearly describe the bay’s current health and the status of the restoration effort. As part of this effort, the Bay Program plans to issue separate reports in January and March 2006, one that would focus on the results of management actions and the other on the bay’s health status. The Bay Program also believes that its current efforts to develop an integrated approach for assessing progress will contribute to its efforts to more effectively report on the bay’s health. The credibility of the State of the Chesapeake Bay reports is further impaired because the Bay Program does not have an independent review process to ensure that its reports are accurate and credible. The officials who manage and are responsible for the restoration effort also analyze, interpret, and report the data to the public. No process currently exists to involve any other organization or group in this process.
For example, according to a member of the Bay Program’s Scientific and Technical Advisory Committee, this committee, which has responsibility for providing scientific and technical advice to the Chesapeake Bay Program, is not involved in developing the reports and is not part of the review process. Instead, the reports are developed by the Communications and Education Subcommittee using data provided by the Monitoring and Analysis Subcommittee. The reports are then reviewed by representatives from each of the signatory jurisdictions prior to publication. We believe this lack of independence in reporting has led to the Bay Program projecting a rosier view of the health of the bay than may have been warranted. According to representatives of two of the signatories to the agreement, the signatories find it advantageous to positively report on the bay’s health, because positive trends help sustain both political and public interest as well as support for the effort. Therefore, the Bay Program has an incentive to present the most positive picture to the public of the progress that has been made in restoring the bay’s health. Chesapeake Bay Program officials acknowledged that concerns have been expressed that past reports projected a rosier view than was warranted. The officials noted that they believe that the 2004 State of the Chesapeake Bay report is less positive and pointed out that the report states that the bay and its watershed are in peril. Our expert panelists believe that an independent review panel—to either review the bay’s health reports before issuance or to analyze and report on the health status independently of the Bay Program—would significantly improve the credibility of the program’s reports. Some program partners we interviewed also echoed the need for an independent review panel and stated that it would help improve the Bay Program’s reports. For example, according to one partner, an independent group with no vested interest in the outcome of the reports could improve credibility.

An estimated $3.7 billion in direct funding was provided to restore the Chesapeake Bay from fiscal years 1995 through 2004. This funding was provided for such purposes as water quality protection and restoration, sound land use, vital habitat protection and restoration, living resource protection and restoration, and stewardship and community engagement. An additional $1.9 billion in indirect funding was also provided for activities that affect the restoration effort. These activities are conducted as part of broader agency efforts and/or would continue without the restoration effort. Eleven key federal agencies; the states of Maryland, Pennsylvania, and Virginia; and the District of Columbia provided almost $3.7 billion in direct funding from fiscal years 1995 through 2004 to restore the bay. As shown in figure 4, the states typically provided about 75 percent of the direct funding for restoration, and the funding has generally increased over the 10-year period. Federal agencies provided a total of approximately $972 million in direct funding, while the states and the District of Columbia provided approximately $2.7 billion in direct funding for the restoration effort over the 10-year period. Of the federal agencies, the Department of Defense’s U.S. Army Corps of Engineers provided the greatest amount of direct funding. Of the states, Maryland provided the greatest amount of direct funding—more than $1.8 billion—which is over $1.1 billion more than any other state.
Table 1 shows the amount of direct funding these entities provided. The percentage of direct funding provided for each of the five goals in Chesapeake 2000 varies. The largest percentage of direct funding—approximately 47 percent—went to water quality protection and restoration. The smallest percentage of direct funding—about 4 percent—was provided for stewardship and community engagement. Figure 5 shows the percentage of direct funding provided for each of the goals. Ten of the key federal agencies, Pennsylvania, and the District of Columbia provided about $1.9 billion in additional funding from fiscal years 1995 through 2004 for activities that have an indirect impact on bay restoration. These activities are conducted as part of broader agency efforts and/or would continue without the restoration effort. For example, the Department of Agriculture’s Natural Resources Conservation Service provides funding for programs that assist farmers in implementing agricultural best management practices. This assistance is part of the agency’s nationwide efforts and would continue even if the bay restoration effort did not exist. Similarly, the majority of Pennsylvania’s funding is included in the total for indirect funding because, while the state’s restoration efforts are important for restoring the bay, such as reducing agricultural runoff, bay restoration is not the primary purpose of the funding. As with direct funding, indirect funding for the restoration effort has also generally increased over fiscal years 1995 through 2004. As shown in figure 6, federal agencies typically provided about half of the indirect funding for the restoration effort. Federal agencies provided approximately $935 million in indirect funding, while Pennsylvania and the District of Columbia provided approximately $991 million in indirect funding for the restoration effort over the 10-year period. Of the federal agencies, the Department of Agriculture provided the greatest amount of indirect funding, primarily through the Natural Resources Conservation Service. Of the states, Pennsylvania provided the greatest amount of indirect funding. Table 2 shows the amount of indirect funding these entities provided. The percentage of indirect funding provided for each of the five goals in Chesapeake 2000 varies. The largest percentage of indirect funding—approximately 44 percent—went to water quality protection and restoration. The smallest percentage of indirect funding—approximately 4 percent—went to living resource protection and restoration. Figure 7 shows the percentage of indirect funding that was provided for each of the five goals. Appendix V contains additional details on funds obligated for the restoration of the Chesapeake Bay from fiscal years 1995 through 2004. Although almost $3.7 billion in direct funding and more than $1.9 billion in indirect funding has been provided for activities to restore the Chesapeake Bay, estimates for the amount of funding needed to restore the bay far surpass these figures. A January 2003 Chesapeake Bay Commission report estimated that the restoration effort faced a funding gap of nearly $13 billion to achieve the goals outlined in Chesapeake 2000 by 2010. In addition, the report found that the Water Quality Protection and Restoration goal faced the largest funding gap.
Subsequently, in an October 2004 report to the Chesapeake Executive Council, the Chesapeake Bay Watershed Blue Ribbon Finance Panel estimated that the restoration effort is grossly underfunded. The finance panel found that the lack of adequate funding and implementation has left the bay effort far short of its goals and recommended that a regional financing authority be created with an initial capitalization of $15 billion, of which $12 billion would come from the federal government. In addition to the funding provided for the restoration of the bay, EPA provided more than $1 billion to Maryland, Virginia, and Pennsylvania through its Clean Water State Revolving Fund program during fiscal years 1995 through 2004. The states use this funding, along with a required 20 percent match, to capitalize their state revolving funds. The funds provide low-cost loans or other financial assistance for a wide range of water quality infrastructure projects and other activities, such as implementing agricultural best management practices and urban storm water management. The District of Columbia, which is exempted from establishing a loan program, received more than $58 million from the program as grants for water quality projects during the same time period. Some of the projects funded may contribute to the bay’s restoration. For example, a $100 million loan was made to Arlington County, Virginia, in 2004 for upgrading a wastewater treatment facility to enhance nutrient removal.

Although Chesapeake 2000 provides the overall vision and strategic goals for the restoration effort along with short- and long-term commitments, the Bay Program lacks a comprehensive, coordinated implementation strategy that will enable it to achieve the goals laid out in the agreement. Although the Bay Program has adopted 10 keystone commitments to focus the partners’ efforts and developed several planning documents, these plans are sometimes inconsistent with each other. Furthermore, the Bay Program is limited in its ability to strategically target resources because it has no assurance about the level of funds that may be available beyond the short term. According to Bay Program officials, they recognize that inconsistent strategies have been developed and are currently determining how to reconcile these various strategies. Chesapeake 2000 and prior agreements have provided the overall direction for the restoration effort over the past two decades. However, the Bay Program generally lacks a comprehensive, coordinated implementation strategy that could provide a road map for accomplishing the goals outlined in the agreement. Several Bay Program partners we interviewed expressed frustration because the Bay Program has not developed a clear, realistic plan for how it will meet the restoration goals. For example, a signatory to the Chesapeake Bay agreements noted that while Chesapeake 2000 contains the correct goals and appropriately identifies actions needed to restore the bay, the Bay Program does not have a plan in place that will allow the program to meet these goals. Similarly, a federal partner in the effort expressed frustration with the Chesapeake Executive Council for not convening a meeting of partners after the agreement was signed to decide how to proceed with the restoration effort and for not having a clear, overall plan for achieving program goals.
According to one state partner, there is no clear strategy for how the restoration goals should be achieved, and such a strategy is needed to help ensure better progress toward achieving the Chesapeake 2000 commitments. Recognizing that it could not effectively manage all 102 commitments outlined in Chesapeake 2000, in 2003, the Bay Program adopted 10 keystone commitments as a management strategy to focus the partners’ efforts. The program believes that these commitments, if accomplished, will provide the greatest benefit to the bay. These commitments include the following:

• By 2010, achieve, at a minimum, a tenfold increase in native oysters in the Chesapeake Bay, based upon a 1994 baseline.

• By 2007, revise and implement existing fisheries management plans to incorporate ecological, social, and economic considerations; multispecies fisheries management; and ecosystem approaches.

• By 2002, implement a strategy to accelerate protection and restoration of submerged aquatic vegetation beds in areas of critical importance to the bay's living resources.

• By 2010, work with local governments, community groups, and watershed organizations to develop and implement locally supported watershed management plans in two-thirds of the bay watershed covered by the agreement. These plans would address the protection, conservation, and restoration of stream corridors, riparian forest buffers, and wetlands for the purposes of improving habitat and water quality, with collateral benefits for optimizing stream flow and water supply.

• By 2010, achieve a net resource gain by restoring 25,000 acres of tidal and nontidal wetlands.

• Conserve existing forests along all streams and shorelines.

• By 2010, correct the nutrient- and sediment-related problems in the Chesapeake Bay and its tidal tributaries sufficiently to remove the bay and the tidal portions of its tributaries from the list of impaired waters under the Clean Water Act.

• Strengthen programs for land acquisition and preservation within each state that are supported by funding and target the most valued lands for protection. Permanently preserve from development 20 percent of the land area in the watershed by 2010.

• By 2012, reduce the rate of harmful sprawl development of forest and agricultural land in the Chesapeake Bay watershed by 30 percent measured as an average over 5 years from the baseline of 1992-97, with measures and progress reported regularly to the Chesapeake Executive Council.

• Beginning with the class of 2005, provide a meaningful bay or stream outdoor experience for every school student in the watershed before graduation from high school.

To achieve the 10 keystone commitments, the Bay Program has developed numerous planning documents, such as subcommittee and work group plans, state tributary strategies, and species-specific management plans. These planning documents, however, are not always consistent with each other. For example, a work group of the Bay Program’s Living Resources Subcommittee developed a strategy for restoring 25,000 acres of wetlands by 2010—a commitment under the Vital Habitat Protection and Restoration goal. This plan, developed in 2000, describes a strategy of restoring 2,500 acres per year through 2010.
Subsequently, each state within the bay watershed and the District of Columbia developed a tributary strategy that describes the actions needed to achieve and maintain nitrogen and phosphorus load reductions necessary to remove the bay and its tributaries from the impaired waters list by 2010—a commitment under the Water Quality Protection and Restoration goal. In these strategies, the states describe actions for restoring over 200,000 acres of wetlands—far exceeding the 25,000 acres that the Bay Program has developed strategies for restoring. Similarly, a work group of the Nutrient Subcommittee developed a plan in 2004 to restore at least 10,000 miles of forest buffers by 2010—a commitment under the Vital Habitat Protection and Restoration goal. However, the tributary strategies developed by Pennsylvania and Virginia describe actions to restore a total of about 45,000 miles of forest buffers by 2010—more than four times the amount called for in the Bay Program’s plan. While we recognize the partners have the freedom to develop higher targets than those established by the Bay Program, having such varying targets causes confusion, not only for the partners but also for other stakeholders, regarding what actions are actually needed to restore the bay. Moreover, such an approach appears to contradict the underlying principles of the partnership that was formed because the partners recognized that a cooperative approach was needed. According to the Chesapeake Bay Program Office, the program recognizes that inconsistent strategies have been developed and is now determining how to reconcile these various strategies. The officials also noted that some strategies, like the tributary strategies, have only recently been developed and the partners did not realize, until these strategies were developed, the extent of the additional work that would be required to meet the water quality commitments in Chesapeake 2000. Since 2000, Bay Program partners have devoted a significant amount of their limited resources to developing strategies for achieving the commitments outlined in Chesapeake 2000. However, as various partners have acknowledged, several of these strategies are either not being used by the Bay Program or are believed to be unachievable within the 2010 time frame. According to a Bay Program official, some work groups have invested significant resources in developing detailed plans for accomplishing specific commitments, but after the plans were developed, the program realized it had no resources available to implement the plans. For example, the Toxics Subcommittee invested significant resources to develop a detailed toxics work plan for achieving the toxics commitments in Chesapeake 2000. Even though the Bay Program has not been able to implement this work plan as planned because personnel and funding have not been available, program officials told us that the plan is currently being revised. It is unclear to us why the program is investing additional resources to revise this plan when the necessary resources are not available to implement it, and it is not one of the keystone commitments. According to the Chair of the Toxics Subcommittee, the work groups are generally responsible for developing strategies for achieving the commitments in Chesapeake 2000 without knowing what level of resources will be available to implement the strategies.
Strategies are often developed in this way because, according to a Bay Program official, while they know how much each partner has agreed to provide for the upcoming year, they do not know how much funding partners will provide in the future. This funding challenge was recognized by the Chesapeake Bay Watershed Blue Ribbon Finance Panel, which reported that no summary cost of all needed restoration activities is available. The panel also noted that the lack of adequate funding and implementation has left the Bay Program far short of its goals. Without knowing what funding will be available to accomplish restoration activities, the Bay Program is limited in its ability to target and direct funding toward those restoration activities that will be the most cost effective and beneficial. The Bay Program has also spent a significant amount of resources developing strategies that some partners believe are unachievable. For example, the Bay Program has developed an oyster management plan for its commitment to achieve, by 2010, a tenfold increase in oysters, based upon a 1994 baseline. Maryland and Virginia have also developed state-specific plans for implementing the strategies laid out in the oyster management plans. Although the Bay Program has developed these detailed strategies and implementation plans, it also states in the oyster management plan that it will be unlikely to achieve the commitment because of low abundance, degraded habitat, and disease. Several partners also told us that they believe that the oyster commitment will be impossible to achieve. Similarly, states have spent years developing tributary strategies, but several Bay Program partners have told us that these strategies are not feasible, particularly given current funding levels and time frames. A member of the implementation committee told us that, even if the necessary funding was provided, the Bay Program does not have the personnel or equipment needed to implement all of the strategies that have been developed. Furthermore, it is not possible to meet the commitment of removing the bay and its tributaries from the impaired waters list by 2010. According to several partners we spoke with, while point source reductions called for in these strategies are achievable, nonpoint source reductions are not. In addition, several partners told us that other goals are also unachievable. For example, several local government representatives told us that, overall, the Bay Program’s goals are unachievable. They believe that the lack of a realistic plan that is based on available resources has discouraged partners and stalled the restoration effort. The Chesapeake Bay Program Office recognizes that some of the plans that have been developed are unachievable but stated that the plans were developed to identify what actions will be needed to achieve the commitments of Chesapeake 2000. The office also recognizes that there is a fundamental gap between what needs to be done to achieve some of the commitments and what can be achieved with the current resources available. Chesapeake Bay Program Office officials noted that the development of an overall implementation plan that takes into account available resources had been discussed, but that no agreement could be reached among the partners. Restoring the Chesapeake Bay is a massive, complex, and difficult undertaking. The ultimate success of the restoration hinges on several factors, of which a well-coordinated and managed implementation approach is key.
To its credit, the Bay Program has made significant strides in developing over 100 different measures of progress, publishing dozens of reports on the state of the bay, and creating several documents that lay out strategies for fulfilling commitments outlined in Chesapeake 2000 that are intended to move the Bay Program closer to meeting the overall restoration goals. However, despite the extensive efforts that have gone into managing the restoration program, the lack of (1) integrated approaches to measure overall progress, (2) independent and credible reporting mechanisms, and (3) coordinated implementation strategies is undermining the success of the restoration effort and potentially eroding public confidence and continued support. We believe that the combined impact of these deficiencies has already resulted in a situation in which the Bay Program cannot effectively present a clear and credible picture of what the restoration effort has achieved, what strategies will best further Chesapeake 2000’s restoration goals, and how limited resources should be channeled to develop and implement the most effective strategies. With over two decades of restoration experience to rely on, we believe that the Bay Program is well positioned to seriously reevaluate how it measures and reports on both restoration progress and the actual health status of the bay. Given the billions of dollars that have already been invested in this project and the billions more that are almost certainly needed, stakeholders and the public should have ready access to reliable information that presents an accurate assessment of restoration progress and the actual health status of the bay. Moreover, the long-term partnership is uniquely positioned to undertake a hard look at what strategies have been the most cost effective and beneficial to the restoration effort and use this information not only to inform their future actions but also to ensure that they are not developing strategies that will be at cross-purposes or develop unrealistic implementation plans that do not reflect available resources. To improve the methods used by the Bay Program to assess progress made on the restoration effort, we recommend that the Administrator of EPA instruct the Chesapeake Bay Program Office to complete its plans to develop and implement an integrated approach to assess overall restoration progress. In doing so, the Chesapeake Bay Program Office should ensure that this integrated approach clearly ties to the five broad restoration goals identified in Chesapeake 2000. To improve the effectiveness and credibility of the Bay Program’s reports on the health of the bay, we recommend that the Administrator of EPA instruct the Chesapeake Bay Program Office to take the following three actions to revise its reporting approach: include an assessment of the key ecological attributes that reflect the bay’s current health conditions, report separately on the health of the bay and on the progress made in implementing management actions, and establish an independent and objective reporting process. 
To ensure that the Bay Program is managed and coordinated effectively, we also recommend that the Administrator of EPA instruct the Chesapeake Bay Program Office to work with Bay Program partners to take the following two actions: develop an overall, coordinated implementation strategy that unifies the program’s various planning documents, and establish a means to better target its limited resources to ensure that the most effective and realistic work plans are developed and implemented. We provided a draft of this report to the signatories of the Chesapeake 2000 agreement—the Administrator of EPA; the Governors of Maryland, Pennsylvania, and Virginia; the Mayor of the District of Columbia; and the Executive Director of the Chesapeake Bay Commission—for their review and comment. EPA, Maryland, Virginia, the District of Columbia, and the Chesapeake Bay Commission generally concurred with the report’s findings and recommendations. Although Pennsylvania did not specifically comment on the report’s findings and recommendations, it noted—as did other commenters—that the Bay Program is undertaking actions to address the issues discussed in our report. We are encouraged that the signatories generally agree with our recommendations. Without such actions, we believe that the program will be unable to change the status quo and move forward in a more strategic and well-coordinated manner. In their written comments, all of the signatories also emphasized the importance of the tributary strategies developed by the states to the restoration effort. Virginia stated that these strategies will serve as the basis of the comprehensive implementation plan that we recommended, but noted that any regional implementation plan developed must provide states with the flexibility to operate within their own cultural, legal, and political environments. Maryland echoed this concern, stating that while a comprehensive, coordinated strategy is important, each jurisdiction must maintain the ability to implement strategies that it believes will be most successful in achieving the collective goal of reducing nutrient and sediment inputs into the Chesapeake Bay. We recognize the importance of the tributary strategies and agree that states need flexibility in implementing these strategies. However, we continue to believe that it is important to develop an overall, coordinated implementation strategy for the Bay Program that unifies the various planning documents developed. In its comments, EPA stated that the tributary strategies have been developed to guide the restoration effort to eventual success and indicated that the Bay Program is now aligning its management plans to take better advantage of available resources for the restoration effort. EPA also provided technical comments and clarifications that we incorporated, as appropriate. The signatories’ written comments are presented in appendixes VI through XI. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Administrator of EPA; the Governors of Maryland, Pennsylvania, and Virginia; the Mayor of the District of Columbia; the Executive Director of the Chesapeake Bay Commission; and the Director of the Office of Management and Budget. We also will make copies available to others on request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XII. We were asked to address several issues concerning the Chesapeake Bay Program’s (Bay Program) restoration effort. Specifically, we were asked to determine (1) the extent to which the Bay Program has established appropriate measures for assessing restoration progress, (2) the extent to which the reporting mechanisms the Bay Program uses clearly and accurately describe the bay’s overall health, (3) how much funding was provided for restoring the Chesapeake Bay for fiscal years 1995 through 2004 and for what purposes, and (4) how effectively the restoration effort is being coordinated and managed. To determine the extent to which the Bay Program has established appropriate measures for assessing restoration progress, we obtained documentation on the measures being used by the Bay Program to assess progress and their linkages to commitments in Chesapeake 2000. We analyzed these measures to determine which measures provide information about progress in achieving quantifiable commitments and which provide information needed to make management decisions. We also analyzed the measures to determine their appropriateness for measuring progress toward the quantifiable commitments. To determine the extent to which the reporting mechanisms the Bay Program uses clearly and accurately describe the bay’s overall health, we obtained a variety of reports issued by the Bay Program, including all of the State of the Chesapeake Bay reports. We analyzed these reports to identify the types of information included in the reports, the consistency of the information provided over time, and the format and presentation of the reports. We did not assess the reliability of the data provided in the reports. To identify the critical elements of effective assessment and reporting processes, pros and cons of different assessment and reporting processes, and alternative methods of measuring and reporting progress that may be applicable to the Chesapeake Bay restoration effort, we assembled a panel of recognized experts on the following environmental restoration topics: indicator development, modeling, methods for reporting restoration progress, watershed restoration, and ecosystem restoration. To identify experts on these topics, we used the “snowball” technique. We identified experts through a literature search and Internet search. As we contacted experts, we verified their independence from the Chesapeake Bay Program and asked for additional contacts of experts. We selected 60 environmental restoration experts as potential panelists. From these 60 experts, we chose the final eight panelists on the basis of the following criteria: (1) recommendations we received from others knowledgeable in the field of environmental restoration; (2) the individual’s area of expertise and experience; (3) the type of organization represented, including academic institutions, government, and private industry; and (4) geographic representation. (The names and affiliations of the panel members are listed in app. IV). On May 17, 2005, we held an all-day meeting with the eight panelists at our office in Washington, D.C. 
Before the meeting, we provided each panel member with a set of eight general discussion questions. At the end of each discussion, we asked the panelists to respond, using an anonymous ballot, to a set of questions that were based on the general discussion topics. We recorded and transcribed the meeting to ensure that we accurately captured the panel members’ statements. To obtain information on the funding provided for the restoration effort, we developed a data collection instrument that we distributed to key federal agencies; the states of Maryland, Pennsylvania, and Virginia; and the District of Columbia. Key federal agencies were identified as those that participated in high-level Chesapeake Bay Program committees or that provided more than $250,000 annually, on average, in direct funding. For the purposes of this report, we defined direct funds as those that were provided exclusively for bay restoration (e.g., increasing the oyster population) or those that would no longer be made available in the absence of the restoration effort. To make the comparison more meaningful, we present funding data in constant 2004 dollars. Unless otherwise noted, all figures are obligation amounts and include administrative costs. We reviewed the data from the federal agencies and states for consistency and reliability and, when possible, compared the data with data from other sources, such as data collected by the Environmental Protection Agency (EPA) and the Chesapeake Bay Commission. After reviewing the data and comparing it with other sources, we sent the data back to the federal agencies and states for verification and updates as needed. In addition, we asked for explanations of any inconsistencies that we identified. After receiving the verified/updated data, we once again reviewed the data for consistency and reliability. Finally, we contacted the agencies and states with any outstanding questions concerning the data and conducted additional data reliability checks. To determine how effectively the restoration effort is being coordinated and managed, we obtained documentation on the organizational structure of the program, the roles and responsibilities of the committees and subcommittees, and planning documents developed to address the commitments. We analyzed the planning documents for consistency and thoroughness. In addition, we obtained information on the status of keystone and other commitments. To obtain EPA’s insights on all four objectives, we met with officials from the Chesapeake Bay Program Office to discuss its monitoring and assessment, reporting, funding, and coordination and management responsibilities. Through these discussions, we obtained an array of documents and perspectives related to all four objectives. To obtain insights from the other signatories of the Chesapeake Bay agreements, we met with officials from the Chesapeake Bay Commission, the District of Columbia, and the states of Maryland, Pennsylvania, and Virginia. Through these efforts, we obtained documents and information related to all four objectives. To obtain insights from other federal partners to the Bay Program, we met with officials from the Departments of Agriculture, Commerce, Defense, and the Interior. To obtain insights from academic partners to the Bay Program, we met with officials from the Chesapeake Research Consortium, College of William and Mary’s Virginia Institute of Marine Science, Smithsonian Environmental Research Center, and University of Maryland’s Center for Environmental Science. 
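As a purely illustrative sketch of the constant-dollar adjustment described above, the example below restates nominal obligations in constant 2004 dollars using a price deflator. The deflator values and obligation amounts are hypothetical; the actual index and figures used for the funding analysis in this report are not reproduced here.

```python
# Illustrative conversion of nominal obligations to constant 2004 dollars.
# Both the deflator series and the obligation amounts below are invented
# for illustration only.

deflators = {          # hypothetical price index values, 2004 = 1.000
    1995: 0.820,
    2000: 0.905,
    2004: 1.000,
}

obligations_nominal = {  # hypothetical obligations, in millions of dollars
    1995: 250.0,
    2000: 310.0,
    2004: 400.0,
}

def to_constant_2004(amount, year, index, base_year=2004):
    """Restate a nominal amount in base-year dollars: amount * (base index / year index)."""
    return amount * (index[base_year] / index[year])

for year, amount in sorted(obligations_nominal.items()):
    constant = to_constant_2004(amount, year, deflators)
    print(f"FY{year}: ${amount:,.1f}M nominal -> ${constant:,.1f}M in constant 2004 dollars")
```

The design choice here is simply to express every year's obligations in a common base year so that amounts can be summed and compared across the fiscal year 1995 through 2004 period without inflation distorting the comparison.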
To obtain insights from other Bay Program partners, we met with the Alliance for the Chesapeake Bay, Chesapeake Bay Foundation, and the Metropolitan Washington Council of Governments. We also met with officials from nonpartner organizations, such as the Maryland Watermen’s Association and the Northeast-Midwest Institute. We conducted our review from October 2004 through October 2005 in accordance with generally accepted government auditing standards. Chesapeake 2000 contains five broad goals and 102 commitments that the partners have agreed to accomplish. These goals and commitments are listed below. Restore, enhance and protect the finfish, shellfish and other living resources, their habitats and ecological relationships to sustain all fisheries and provide for a balanced ecosystem. By 2010, achieve, at a minimum, a tenfold increase in native oysters in the Chesapeake Bay, based upon a 1994 baseline. By 2002, develop and implement a strategy to achieve this increase by using sanctuaries sufficient in size and distribution, aquaculture, continued disease research and disease-resistant management strategies, and other management approaches. In 2000, establish a Chesapeake Bay Program Task Force to work cooperatively with the U.S. Coast Guard, the ports, the shipping industry, environmental interests, and others at the national level to help establish and implement a national program designed to substantially reduce and, where possible, eliminate the introduction of non-native species carried in ballast water; and by 2002, develop and implement an interim voluntary ballast water management program for the waters of the bay and its tributaries. By 2001, identify and rank non-native, invasive aquatic and terrestrial species, which are causing or have the potential to cause significant negative impacts to the bay’s aquatic ecosystem. By 2003, develop and implement management plans for those species deemed problematic to the restoration and integrity of the bay’s ecosystem. By June 2002, identify the final initiatives necessary to achieve our existing goal of restoring fish passage for migratory fish to more than 1,357 miles of currently blocked river habitat by 2003 and establish a monitoring program to assess outcomes. By 2002, set a new goal with implementation schedules for additional migratory and resident fish passages that addresses the removal of physical blockages. In addition, the goal will address the removal of chemical blockages caused by acid mine drainage. Projects should be selected for maximum habitat and stock benefit. By 2002, assess trends in populations for priority migratory fish species. Determine tributary-specific target population sizes based upon projected fish passage, and current and projected habitat available, and provide recommendations to achieve those targets. By 2003, revise fish management plans to include strategies to achieve target population sizes of tributary-specific migratory fish. By 2004, assess the effects of different population levels of filter feeders such as menhaden, oysters, and clams on bay water quality and habitat. By 2005, develop ecosystem-based multispecies management plans for targeted species. By 2007, revise and implement existing fisheries management plans to incorporate ecological, social, and economic considerations, multispecies fisheries management and ecosystem approaches. By 2001, establish harvest targets for the blue crab fishery and begin implementing complementary state fisheries management strategies baywide. 
Manage the blue crab fishery to restore a healthy spawning biomass, size, and age structure. Preserve, protect, and restore those habitats and natural areas that are vital to the survival and diversity of the living resources of the bay and its rivers. Recommit to the existing goal of protecting and restoring 114,000 acres of submerged aquatic vegetation (SAV). By 2002, revise SAV restoration goals and strategies to reflect historic abundance, measured as acreage and density from the 1930s to the present. The revised goals will include specific levels of water clarity that are to be met in 2010. Strategies to achieve these goals will address water clarity, water quality, and bottom disturbance. By 2002, implement a strategy to accelerate protection and restoration of SAV beds in areas of critical importance to the bay's living resources. By 2010, work with local governments, community groups, and watershed organizations to develop and implement locally supported watershed management plans in two-thirds of the bay watershed covered by the agreement. These plans would address the protection, conservation, and restoration of stream corridors, riparian forest buffers, and wetlands for the purposes of improving habitat and water quality, with collateral benefits for optimizing stream flow and water supply. By 2001, each jurisdiction will develop guidelines to ensure the aquatic health of stream corridors. Guidelines should consider optimal surface and groundwater flows. By 2002, each jurisdiction will work with local governments and communities that have watershed management plans to select pilot projects that promote stream corridor protection and restoration. By 2003, include in the State of the Bay report, and make available to the public, local governments, and others, information concerning the aquatic health of stream corridors based on adopted regional guidelines. By 2004, each jurisdiction, working with local governments, community groups, and watershed organizations, will develop stream corridor restoration goals based on local watershed management planning. Achieve a no-net loss of existing wetlands acreage and function in the signatories' regulatory programs. By 2010, achieve a net resource gain by restoring 25,000 acres of tidal and nontidal wetlands. To do this, the signatories to the agreement commit to achieve and maintain an average restoration rate of 2,500 acres per year basin wide by 2005 and beyond. They will evaluate their success in 2005. Provide information and assistance to local governments and community groups for the development and implementation of wetlands preservation plans as a component of a locally based integrated watershed management plan. Establish a goal of implementing the wetlands plan component in 25 percent of the land area of each state's bay watershed by 2010. The plans would preserve key wetlands while addressing surrounding land use so as to preserve wetland functions. Evaluate the potential impact of climate change on the Chesapeake Bay watershed, particularly with respect to its wetlands, and consider potential management options. By 2002, ensure that measures are in place to meet the riparian forest buffer restoration goal of 2,010 miles by 2010. By 2003, establish a new goal to expand buffer mileage. Conserve existing forests along all streams and shorelines. Promote the expansion and connection of contiguous forests through conservation easements, greenways, purchase, and other land conservation mechanisms. 
Achieve and maintain the water quality necessary to support the aquatic living resources of the bay and its tributaries and to protect human health. Continue efforts to achieve and maintain the 40 percent nutrient reduction goal agreed to in 1987, as well as the goals being adopted for the tributaries south of the Potomac River. By 2010, correct the nutrient- and sediment-related problems in the Chesapeake Bay and its tidal tributaries sufficiently to remove the bay and the tidal portions of its tributaries from the list of impaired waters under the Clean Water Act. In order to achieve this: By 2001, define the water quality conditions necessary to protect aquatic living resources and then assign load reductions for nitrogen and phosphorus to each major tributary; Using a process parallel to that established for nutrients, determine the sediment load reductions necessary to achieve the water quality conditions that protect aquatic living resources, and assign load reductions for sediment to each major tributary by 2001; By 2002, complete a public process to develop and begin implementation of revised Tributary Strategies to achieve and maintain the assigned loading goals; By 2003, the jurisdictions with tidal waters will use their best efforts to adopt new or revised water quality standards consistent with the defined water quality conditions. Once adopted by the jurisdictions, EPA will work expeditiously to review the new or revised standards, which will then be used as the basis for removing the bay and its tidal rivers from the list of impaired waters; and By 2003, work with the Susquehanna River Basin Commission and others to adopt and begin implementing strategies that prevent the loss of the sediment retention capabilities of the lower Susquehanna River dams. The signatories commit to fulfilling the 1994 goal of a Chesapeake Bay free of toxics by reducing or eliminating the input of chemical contaminants from all controllable sources to levels that result in no toxic or bioaccumulative impact on the living resources that inhabit the bay or on human health. By fall of 2000, reevaluate and revise, as necessary, the “Chesapeake Bay Basinwide Toxics Reduction and Prevention Strategy” focusing on: Complementing state and federal regulatory programs to go beyond traditional point source controls, including nonpoint sources such as groundwater discharge and atmospheric deposition, by using a watershed-based approach; and Understanding the effects and impacts of chemical contaminants to increase the effectiveness of management actions. Through continual improvement of pollution prevention measures and other voluntary means, strive for zero release of chemical contaminants from point sources, including air sources. Particular emphasis shall be placed on achieving, by 2010, elimination of mixing zones for persistent or bioaccumulative toxics. Reduce the potential risk of pesticides to the bay by targeting education, outreach, and implementation of integrated pest management and specific best management practices on those lands that have higher potential for contributing pesticide loads to the bay. Support the restoration of the Anacostia River, Baltimore Harbor, and Elizabeth River and their watersheds as models for urban river restoration in the bay basin. 
By 2010, the District of Columbia, working with its watershed partners, will reduce pollution loads to the Anacostia River in order to eliminate public health concerns and achieve the living resource, water quality, and habitat goals of the current and past agreements. By 2003, assess the effects of airborne nitrogen compounds and chemical contaminants on the bay ecosystem and help establish reduction goals for these contaminants. By 2003, establish appropriate areas within the Chesapeake Bay and its tributaries as “no discharge zones” for human waste from boats. By 2010, expand by 50 percent the number and availability of waste pump-out facilities. By 2006, reassess progress in reducing the impact of boat waste on the bay and its tributaries. This assessment will include evaluating the benefits of further expanding no discharge zones, as well as increasing the number of pump-out facilities. Develop, promote, and achieve sound land use practices which protect and restore watershed resources and water quality, maintain reduced pollutant loadings for the bay and its tributaries, and restore and preserve aquatic living resources. By 2001, complete an assessment of the bay’s resource lands, including forests and farms, emphasizing their role in the protection of water quality and critical habitats, as well as cultural and economic viability. Provide financial assistance or new revenue sources to expand the use of voluntary and market-based mechanisms such as easements, purchase, or transfer of development rights and other approaches to protect and preserve natural resource lands. Strengthen programs for land acquisition and preservation within each state that are supported by funding and target the most valued lands for protection. Permanently preserve from development 20 percent of the land area in the watershed by 2010. Provide technical and financial assistance to local governments to plan for or revise plans, ordinances, and subdivision regulations to provide for the conservation and sustainable use of the forest and agricultural lands. In cooperation with local governments, develop and maintain in each jurisdiction a strong geographic information system to track the preservation of resource lands and support the implementation of sound land use practices. By 2012, reduce the rate of harmful sprawl development of forest and agricultural land in the Chesapeake Bay watershed by 30 percent measured as an average over 5 years from the baseline of 1992-97, with measures and progress reported regularly to the Chesapeake Executive Council. By 2005, in cooperation with local government, identify and remove state and local impediments to low impact development designs to encourage the use of such approaches and minimize water quality impacts. Work with communities and local governments to encourage sound land use planning and practices that address the impacts of growth, development, and transportation on the watershed. By 2002, review tax policies to identify elements that discourage sustainable development practices or encourage undesirable growth patterns. Promote the modification of such policies and the creation of tax incentives that promote the conservation of resource lands and encourage investments consistent with sound growth management principles. The jurisdictions will promote redevelopment and remove barriers to investment in underutilized urban, suburban, and rural communities by working with localities and development interests. 
By 2002, develop analytical tools that will allow local governments and communities to conduct watershed-based assessments of the impacts of growth, development, and transportation decisions. By 2002, compile information and guidelines to assist local governments and communities to promote ecologically-based designs in order to limit impervious cover in undeveloped and moderately developed watersheds and reduce the impact of impervious cover in highly developed watersheds. Provide information to the development community and others so they may champion the application of sound land use practices. By 2003, work with local governments and communities to develop land-use management and water resource protection approaches that encourage the concentration of new residential development in areas supported by adequate water resources and infrastructure to minimize impacts on water quality. By 2004, the jurisdictions will evaluate local implementation of stormwater, erosion control, and other locally-implemented water quality protection programs that affect the bay system and ensure that these programs are being coordinated and applied effectively in order to minimize the impacts of development. Working with local governments and others, develop and promote wastewater treatment options, such as nutrient reducing septic systems, which protect public health and minimize impacts to the bay’s resources. Strengthen brownfield redevelopment. By 2010, rehabilitate and restore 1,050 brownfield sites to productive use. Working with local governments, encourage the development and implementation of emerging urban stormwater retrofit practices to improve their water quantity and quality function. By 2002, the signatory jurisdictions will promote coordination of transportation and land use planning to encourage compact, mixed use development patterns, revitalization in existing communities and transportation strategies that minimize adverse effects on the bay and its tributaries. By 2002, each state will coordinate its transportation policies and programs to reduce the dependence on automobiles by incorporating travel alternatives such as telework, pedestrian, bicycle, and transit options, as appropriate, in the design of projects so as to increase the availability of alternative modes of travel as measured by increased use of those alternatives. Consider the provisions of the federal transportation statutes for opportunities to purchase easements to preserve resource lands adjacent to rights of way and special efforts for stormwater management on both new and rehabilitation projects. Establish policies and incentives that encourage the use of clean vehicle and other transportation technologies that reduce emissions. By 2010, expand by 30 percent the system of public access points to the bay, its tributaries, and related resource sites in an environmentally sensitive manner by working with state and federal agencies, local governments, and stakeholder organizations. By 2005, increase the number of designated water trails in the Chesapeake Bay region by 500 miles. Enhance interpretation materials that promote stewardship at natural, recreational, historical, and cultural public access points within the Chesapeake Bay watershed. By 2003, develop partnerships with at least 30 sites to enhance place-based interpretation of bay-related resources and themes and stimulate volunteer involvement in resource restoration and conservation. 
Promote individual stewardship and assist individuals, community-based organizations, businesses, local governments, and schools to undertake initiatives to achieve the goals and commitments of the agreement. Make education and outreach a priority in order to achieve public awareness and personal involvement on behalf of the bay and local watersheds. Provide information to enhance the ability of citizen and community groups to participate in bay restoration activities on their property and in their local watershed. Expand the use of new communications technologies to provide a comprehensive and interactive source of information on the Chesapeake Bay and its watershed for use by public and technical audiences. By 2001, develop and maintain a Web-based clearinghouse of this information specifically for use by educators. Beginning with the class of 2005, provide a meaningful bay or stream outdoor experience for every school student in the watershed before graduation from high school. Continue to forge partnerships with the Department of Education and institutions of higher learning in each jurisdiction to integrate information about the Chesapeake Bay and its watershed into school curricula and university programs. Provide students and teachers alike with opportunities to directly participate in local restoration and protection projects, and to support stewardship efforts in schools and on school property. By 2002, expand citizen outreach efforts to more specifically include minority populations by, for example, highlighting cultural and historical ties to the bay, and providing multicultural and multilingual educational materials on stewardship activities and bay information. Jurisdictions will work with local governments to identify small watersheds where community-based actions are essential to meeting bay restoration goals—in particular wetlands, forested buffers, stream corridors, and public access and work with local governments and community organizations to bring an appropriate range of Bay Program resources to these communities. Enhance funding for locally based programs that pursue restoration and protection projects that will assist in the achievement of the goals of this and past agreements. By 2001, develop and maintain a clearinghouse for information on local watershed restoration efforts, including financial and technical assistance. By 2002, each signatory jurisdiction will offer easily-accessible information suitable for analyzing environmental conditions at a small watershed scale. Strengthen the Chesapeake Bay Program’s ability to incorporate local governments into the policy decision making process. By 2001, complete a reevaluation of the Local Government Participation Action Plan and make necessary changes in Bay Program and jurisdictional functions based upon the reevaluation. Improve methods of communication with and among local governments on bay issues and provide adequate opportunities for discussion of key issues. By 2001, identify community watershed organizations and partnerships. Assist in establishing new organizations and partnerships where interest exists. These partners will be important to successful watershed management efforts in distributing information to the public, and engaging the public in the bay restoration and preservation effort. By 2005, identify specific actions to address the challenges of communities where historically poor water quality and environmental conditions have contributed to disproportional health, economic, or social impacts. 
By 2002, each signatory will put in place processes to: Ensure that all properties owned, managed, or leased by the signatories are developed, redeveloped, and used in a manner consistent with all relevant goals, commitments, and guidance of the agreement. Ensure that the design and construction of signatory-funded development and redevelopment projects are consistent with all relevant goals, commitments, and guidance of the agreement. Expand the use of clean vehicle technologies and fuels on the basis of emission reductions, so that a significantly greater percentage of each signatory government’s fleet of vehicles use some form of clean technology. By 2001, develop an Executive Council Directive to address stormwater management to control nutrient, sediment, and chemical contaminant runoff from state, federal, and District of Columbia-owned land. Strengthen partnerships with Delaware, New York, and West Virginia by promoting communication and by seeking agreements on issues of mutual concern. Work with nonsignatory bay states to establish links with community-based organizations throughout the bay watershed. The Chesapeake Bay Program (Bay Program) is a regional partnership that includes many partners, including federal agencies, states, a tristate legislative commission, academic institutions, and others. As noted below, six of the partners are signatories to the Chesapeake Bay agreements. The six signatories make up the Chesapeake Executive Council, which meets annually to establish policy direction for the Bay Program. The partners include the following organizations:

Cooperative State Research, Education and Extension Service
National Oceanic and Atmospheric Administration
U.S. Department of the Air Force
U.S. Department of the Army
U.S. Department of the Navy
U.S. Environmental Protection Agency (Signatory)
District of Columbia (Signatory)
Maryland (Signatory)
Pennsylvania (Signatory)
Virginia (Signatory)
Chesapeake Bay Commission (Signatory)
College of William and Mary Virginia Institute of Marine Science
Cornell Cooperative Extension (New York)

This appendix provides the names and affiliations of our expert panel members and summarizes the discussions held at an all-day meeting. The information presented in this appendix may not represent the views of every panel member and should not be considered to be the views of GAO. The following individuals were members of the GAO expert panel on the Chesapeake Bay restoration effort: Allan, J. David, Professor, School of Natural Resources & Environment; Harwell, Mark, Professor, Florida A&M University; Gunderson, Lance, Associate Professor, Department of Environmental Studies, Emory University; Hill, Brian, Chief of the Watershed Research Branch, Mid-Continent Ecology Division, U.S. Environmental Protection Agency; Kusler, Jon, Executive Director, Association of State Wetland Managers; Nuttle, William, Consultant, Eco-Hydrology; Reed, Denise, Associate Professor, Department of Geology and Geophysics, University of New Orleans; and Stevenson, R. Jan, Professor, Department of Zoology, Michigan State University. On May 17, 2005, we held an all-day meeting with the eight panelists at our office in Washington, D.C. Before the meeting, we provided each panel member with background information on the Chesapeake Bay Program (Bay Program) and a set of eight general discussion questions. At the end of each discussion, we asked the panelists to respond, using an anonymous ballot, to a series of questions that were based on the general discussion topics. 
The eight discussion topics covered three overarching themes: (1) assessing the health status of an ecosystem, (2) reporting the health status of an ecosystem, and (3) assessing progress of a restoration effort. For the first theme of the day, the panelists spoke on three general discussion topics to identify the critical elements of an effective assessment process. Panelists agreed that identifying a core set of broad ecosystem characteristics is very important when assessing the health of an ecosystem and needs to be determined for each individual ecosystem. Our panel of experts did not identify these characteristics, saying instead that only experts on the Chesapeake Bay should do so. In assessing the health of an ecosystem, our panel said, bay experts should first gain an understanding of the desired end points—the particular characteristics of the system that end users deem important. However, the panel cautioned that the bay’s experts should identify a limited number of essential characteristics—about four to six. Experience in developing conceptual models for other ecosystems has shown that it is not possible to manage for 100 different characteristics. The Bay Program has over 100 specific indicators of various ecosystem characteristics. The panelists generally agreed that the Bay Program has the essential indicators that must be used at a minimum to assess the health of an ecosystem. The Bay Program has many indicators that measure individual aspects within the ecosystem, such as the oyster population. However, the Bay Program needs more indicators that provide information about the biological condition of the ecosystem as a whole and that reflect stress and response relationships. Then patterns and status can be determined and trends can be assessed. Criteria for selecting good environmental indicators are available in literature. The panel also noted that models are useful, but it is important to understand the intended use of the model and its limitations. The Bay Program’s predictive model is intended to help weigh alternative actions and determine how effective different management actions may be in restoring the ecosystem. The model can be used to make predictions about what the condition of the ecosystem may be in particular future years, and the Bay Program can then confirm those predictions with subsequent monitoring. The Bay Program should not use a predictive model to report on current conditions, which should be based on actual measurements. Panelists agreed that a limited number of integrated measures can be used to assess an ecosystem. A few integrated measures that describe the overall health of the system are valuable in making an overall assessment of the system and are well suited for reporting on the overall health. The overall health of a system can be described in a qualitative sense, with a grade for example. Overarching indicators can be used to assign grades to between four and six different ecological characteristics. For the second theme of the day, the panelists spoke on three general discussion topics to identify the critical elements of effective reporting. Panelists generally agreed that, based on information provided in the Bay Program reports, the public would probably not be able to clearly and accurately understand the health of the Chesapeake Bay. While panelists found the 2004 State of the Chesapeake Bay report visually appealing, they believed it lacked a clear, overall picture of system health. 
In addition, Bay Program reports emphasize health and management of the program in one document and are overly oriented to reporting on the progress of the program at the expense of communicating information on the health status of the bay. The panelists believed that an independent assessment of the bay’s health is probably necessary to provide a clear and accurate report on the status of the bay’s health. Panelists agreed that effective reports on the health of an ecosystem contain information that is relevant, accurate, timely, consistent, thorough, precise, objective, transparent, and peer reviewed or verified. Panelists noted that the strength of the Bay Program’s reports depends on the public’s perception of the Bay Program’s integrity and that, if the reports underwent an independent science review before publication, the public would have sufficient trust in the product so that other reports on the bay’s health, such as the Chesapeake Bay Foundation report, would not be perceived as needed. Panelists generally agreed that the report card method is effective for clearly and accurately reporting ecosystem health. Panelists also noted that it is important to distinguish between management initiatives to reduce stressors within the ecosystem and the biological effects of these initiatives and report on them separately. Instead, the Bay Program often mixes indicators, which causes confusion. A report on the health of the bay should give a measure for the current condition of each ecosystem attribute, such as a grade; an indication of the trend, such as an arrow; and summary text that explains what it all means. For the third theme of the day, the panelists spoke on two general discussion topics to identify how progress in restoring an ecosystem should be assessed. Chesapeake 2000 includes many commitments that are not quantifiable; instead, the commitments are focused on actions to strengthen, develop, or plan for various aspects of the restoration effort. Many of the commitments need to be refined so that they are quantifiable. Panelists noted, for example, that Chesapeake 2000 has a commitment to conserve existing forests along all streams and shorelines. The commitment raises questions about whether that means conserving every single forest, conserving a particular number of miles, or preventing or managing the decline so that it is not more than a certain percentage per year. Panelists also pointed out that it is possible to have a program that is progressing very well from a management perspective but is not showing any evidence of cleanup toward the restoration goals. They cited three signs of progress: programmatic progress, progress in reducing stressors to the ecosystem, and progress in achieving desired ecological outcomes. The Bay Program has mixed these measures of progress and has used programmatic progress to imply that the program is achieving ecological outcomes. The panelists agreed that external factors that affect the health of an ecosystem, such as weather and population growth, should be incorporated into an assessment of restoration progress. Similarly, actions taken to restore the ecosystem that may not have an impact on the ecosystem for several years, such as the implementation of agricultural best management practices, should be incorporated into an assessment of progress made in restoring an ecosystem. Panelists also agreed that reports on the health of an ecosystem should be distinctly separate from reports on restoration progress. 
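The grade-and-trend format the panelists described can be illustrated with a short sketch. The example below is purely hypothetical: the characteristic names, scores, grading thresholds, and the two-point tolerance are assumptions made for illustration, not the Bay Program's actual indicators, methods, or data. It simply shows how a current-condition grade and a trend label for a small set of broad ecosystem characteristics might be presented together, separately from programmatic measures.

```python
# Purely illustrative sketch of a grade-plus-trend "report card" format.
# The characteristics, scores, and thresholds below are hypothetical; they
# are not Bay Program methods or monitoring data.

def letter_grade(score):
    """Map a 0-100 condition score to a qualitative grade."""
    for cutoff, grade in ((90, "A"), (80, "B"), (70, "C"), (60, "D")):
        if score >= cutoff:
            return grade
    return "F"

def trend(current, previous, tolerance=2):
    """Label a characteristic as improving, declining, or steady."""
    if current > previous + tolerance:
        return "improving"
    if current < previous - tolerance:
        return "declining"
    return "steady"

# Hypothetical roll-up scores for a few broad ecosystem characteristics,
# each assumed to summarize its underlying indicators.
characteristics = {
    "Water quality":    {"current": 42, "previous": 40},
    "Habitat":          {"current": 55, "previous": 58},
    "Living resources": {"current": 48, "previous": 45},
    "Watershed health": {"current": 61, "previous": 61},
}

for name, scores in characteristics.items():
    grade = letter_grade(scores["current"])
    direction = trend(scores["current"], scores["previous"])
    print(f"{name:18s} grade: {grade}  trend: {direction}")
```

In practice, the roll-up from individual indicators to each characteristic score, and the number of characteristics reported, would be defined by Chesapeake Bay experts, as the panel recommended.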
W. Tayloe Murphy, Jr.

Director, Natural Resources and Environment
United States Government Accountability Office (GAO)

On behalf of Governor Warner, I want to thank you for the opportunity to comment on the Draft Report (“the Report”) on the Chesapeake Bay Program prepared by the United States Government Accountability Office (“GAO”). The Report contains three primary recommendations. First, it calls on the Administrator of the United States Environmental Protection Agency (“EPA”) to direct the Chesapeake Bay Program (“the Program”) to “complete its efforts to develop and implement an integrated assessment approach.” Although I agree that the Program should complete this task, I believe that any such assessment must be developed with the understanding that the Chesapeake Bay watershed is a complex ecosystem. As you know, the Program is the collective effort of the signatories to the Chesapeake Bay Agreement of 2000, and in some cases the headwater states of Delaware, New York and West Virginia. These partners strive to present to the public through a single voice the condition of the Bay in the most understandable terms possible; however, it is oftentimes difficult to express complex ecological interactions in overly simple terms. Virginia, together with its Bay partners, will continue to support scientifically defensible measures of ecosystem health that can be accurately communicated, and we will offer the expert advice and guidance of our agencies in this effort. The second recommendation calls upon the Program to revise its reporting approach. Since 2004, the Program has been moving in the direction suggested by GAO, and it continues to refine its reporting to better serve the public and policy makers. I cannot agree with the representation made by two of the signatories, as stated on page 23 of the Report, that all of the Program partners “find it advantageous” to give a rosier view of the Bay’s health than conditions warrant. In Virginia, it has been our policy and practice to be honest with the public and policy makers regarding the degraded condition of the Chesapeake Bay. When there is good news to report, we report it, but we have not been shy when reporting bad news as well. I am also concerned about the frequent allegation that the information presented by the Program is not “credible.” The Report does not suggest that information presented by the Program is not accurate, but rather that it is sometimes presented in an improper context, or in a manner that confuses different types of data. I hope that GAO will review its comments on credibility with this observation in mind and that it will not leave the reader with the impression that the public has been intentionally misled or that the data presented by the Program meets anything but the highest scientific standards. Finally, GAO recommends that the program develop a “comprehensive implementation plan that takes into account available resources.” I would argue that the tributary strategies developed independently by each of the Bay partners (signatories and headwater states), and the implementation activities associated with them, will serve as the basis for the plan that GAO proposes. I hope that readers of the report will understand that Virginia has begun implementation of our tributary strategies. For point sources, we have instituted a comprehensive regulatory management program that will reduce and cap nutrient discharges from sewage treatment plants and industrial facilities. 
We have reinvigorated our grant program to assist municipal facilities with the cost of upgrades. With respect to non-point sources, we are making significant strides in addressing urban storm water management, and we are working closely with our farmers to reduce the adverse impacts to water quality that result from a variety of agricultural practices. We are also seeking consistent funding for our agricultural grant programs. Moreover, we fully recognize that our tributary strategies are not static documents, and we are committed to making changes and revisions to them in order to adapt to new circumstances and resources as we continue to implement these strategies. The Commonwealth of Virginia certainly supports thoughtful and achievable implementation plans developed through the Program partnership; however, we believe that the states must be given the flexibility to operate within their own cultural, legal and political environments. The implementation path Virginia chooses must be accomplished in the context of our state law and budgets, and any regional implementation plan must reflect this reality. I would also suggest that this recommendation highlights the significant role that the federal government must continue to play in the Bay partnership. In the current fiscal year the Governor and the Virginia General Assembly, working together, made the largest appropriation to the Water Quality Improvement Fund in our history. Maryland has begun collecting the Chesapeake Bay Restoration Fee, and Pennsylvania has passed Growing Greener II. These actions are resulting in multi-million dollar investments in water quality by the states at this time, and we will work to insure that it continues in the coming years. We hope that our federal partner will also step up its commitment to match this unprecedented level of state support. The restoration of the Chesapeake Bay will not be easy or cheap. The partners are engaged in a long-term enterprise that will only be successful through the full participation of federal, state and local governments, as well as the private sector. I appreciate the time and thought that went into the development of the helpful recommendations by GAO, and I look forward to the implementation of those recommendations. I also look forward to the positive results that can occur only with the continuation of the partnership embodied in the Program. Thank you again for giving me the opportunity to comment on the Report. If I can be of further assistance, please do not hesitate to contact me. W. Tayloe Murphy, Jr. In addition to the contact named above, Sherry McDonald, Assistant Director; Bart Fischer; James Krustapentus; and Barbara Patterson made key contributions to this report. Also contributing to this report were Mark Braza, Liz Curda, Anne Inserra, Lynn Musser, Mehrzad Nadji, Carol Herrnstadt Shulman, and Amy Webbink.
The Chesapeake Bay Program (Bay Program) was created in 1983 when Maryland, Pennsylvania, Virginia, the District of Columbia, the Chesapeake Bay Commission, and EPA agreed to establish a partnership to restore the Chesapeake Bay. Their most recent agreement, Chesapeake 2000, sets out an agenda and five broad goals to guide these efforts through 2010 and contains 102 commitments that the partners agreed to accomplish. GAO was asked to examine (1) the extent to which appropriate measures for assessing restoration progress have been established, (2) the extent to which current reporting mechanisms clearly and accurately describe the bay's overall health, (3) how much funding was provided for the effort for fiscal years 1995 through 2004, and (4) how effectively the effort is being coordinated and managed. The Bay Program has over 100 measures to assess progress toward meeting certain restoration commitments and providing information to guide management decisions. However, the program has not yet developed an integrated approach that would allow it to translate these individual measures into an assessment of overall progress toward achieving the five broad restoration goals outlined in Chesapeake 2000. For example, while the Bay Program has appropriate measures to track crab, oyster, and rockfish populations, it does not have an approach for integrating the results of these measures to assess progress toward the agreement's goal of protecting and restoring the bay's living resources. The Bay Program has recognized that it may need an integrated approach for assessing overall progress in restoring the bay and, in November 2004, a task force began working on this effort. The State of the Chesapeake Bay reports are the Bay Program's primary mechanism for reporting the current health status of the bay. However, these reports do not effectively communicate the bay's current conditions because they focus on the status of individual species or pollutants instead of providing information on a core set of ecosystem characteristics. Moreover, the credibility of these reports has been negatively impacted because the program has commingled various kinds of data such as monitoring data, results of program actions, and the results of its predictive model without clearly distinguishing among them. As a result, the public cannot easily determine whether the health of the bay is improving or not. Moreover, the lack of independence in the Bay Program's reporting process has led to negative trends being downplayed and a rosier picture of the bay's health being reported than may have been warranted. The program has recognized that improvements are needed and is developing new reporting formats. From fiscal years 1995 through 2004, the restoration effort received about $3.7 billion in direct funding from 11 key federal agencies; the states of Maryland, Pennsylvania, and Virginia; and the District of Columbia. These funds were used for activities that supported water quality protection and restoration, sound land use, vital habitat protection and restoration, living resource protection and restoration, and stewardship and community engagement. During this time period, the restoration effort also received an additional $1.9 billion in indirect funding. The Bay Program does not have a comprehensive, coordinated implementation strategy to better enable it to achieve the goals outlined in Chesapeake 2000. 
Although the program has adopted 10 key commitments to focus partners' efforts and developed plans to achieve them, some of these plans are inconsistent with each other or are perceived as unachievable by program partners. The limited assurances about the availability of resources beyond the short term further complicate the Bay Program's ability to effectively coordinate restoration efforts and strategically manage its resources.
Before turning to enforcement in particular, I will discuss some broad principles of budget process since it is the framework within which enforcement mechanisms exist. No process can force choices Congress and the President are unwilling to make. Having an agreed-upon goal justifies and frames the choices that must be made. A budget process can facilitate or hamper substantive decisions, but it cannot replace them. While no process can substitute for making the difficult choices, it can help structure the debate. The budget structure can make clear the information necessary for important decisions, or it can make some information harder to find. The process can highlight trade-offs and set rules for action. In our past work, we have identified four broad principles or criteria for a budget process that can help Congress consider the design and structure of future budget enforcement mechanisms. A process should
1. provide information about the long-term effect of decisions, both macro—linking fiscal policy to the long-term economic outlook—and micro—providing recognition of the long-term spending implications of government commitments;
2. provide information and be structured to focus on important trade-offs, such as the trade-off between investment and consumption spending;
3. provide information necessary to make informed trade-offs between the different policy tools of government (such as tax provisions, grants, and credit programs); and
4. be enforceable, provide for control and accountability, and be transparent, using clear, consistent definitions.
Since my comments about enforcement will be related in part to these four principles, let me touch briefly on each of them. First, selecting the appropriate time horizon in which the budgetary impact of policy decisions should be measured is not just an abstract question for analysts. If the time horizon is too short, Congress may have insufficient information about the potential cost of a program. In addition, too short a time horizon may create incentives to artificially shift costs into the future rather than find a sustainable solution. The move from a focus on a single year to 5- and then 10-year horizons represented a major step forward. At the same time, we also need to understand the longer-term effects of policy decisions. As the first agency to do long-term simulations for the federal budget as a whole, we are well aware of the fact that the further out estimates go, the less certain are the numbers. But policymakers should be given information on the direction and order of magnitude of looming challenges. This is especially important where the short-term snapshot may be misleading. This concern has led us to propose improved recognition of the government’s long-term “fiscal exposures”—which may not be explicit liabilities. Second, the structure and rules can determine the nature of the trade-offs surfaced during the budget process. Consumption may be favored over investment because the initial cost of an infrastructure project looks high in comparison to support for consumption. Distinguishing between support for current consumption and investing in economic growth in the budget would help eliminate a perceived bias against investments requiring large up-front spending. 
We have previously proposed establishing an investment component within the unified budget to permit a focus on federal spending on infrastructure, research and development, and human capital—spending intended to promote the nation’s long-term economic growth. This proposal focuses on the allocation of spending within an agreed-upon amount. For example, we identified several options such as establishing investment targets within a framework similar to that contained in the Budget Enforcement Act of 1990. Under such an approach, Congress and the Administration would agree on the appropriate level of investment spending within an overall target and create targets or “fire walls” to limit infringement from other activities. The third principle focuses on the method through which the federal government provides support for any federal goal or objective. The renewed interest in overlap and duplication has highlighted the different ways in which such support is provided: direct federal provision, grants, loans or loan guarantees, and tax preferences or tax incentives. These vary in design and in how effective they might be for a given mission. In addition, they vary in the timing of cash flows. The budget and budget process should provide the information necessary to permit looking across federal agencies and policy tools—which means across committee jurisdictions—to make an informed choice. Such comparisons also require that their budgetary costs be measured on a comparable basis. The Federal Credit Reform Act of 1990 addressed this issue for loans and loan guarantees; the budget now reflects the estimated size of the government’s commitment, regardless of the timing of the cash flows. For federal insurance programs, however, the budget offers a misleading picture about the nature and size of the government’s exposure. The cash-based treatment of these programs distorts choice on several dimensions. First, at the time the insurance program is created or insurance is offered, there is no discussion of the subsidy being provided to those obtaining insurance, and second, there need not be an estimate of the likely budgetary impact over the insurance period. This means decisions about insurance programs are not made based on their likely cost to the federal government—nor is the amount of the subsidy ever recognized in the budget. Given our concerns that long-term costs of programs be understood and that programs or policies be considered on a comparable-cost basis, we recommended that the budget record the “missing premium” for insurance programs. Lastly, and perhaps of most interest given the focus of this hearing, the budget process should be enforceable, provide for control and accountability, and be transparent. These three elements are closely related and achieving one has implications for the others. Further, the way these are interpreted has implications for the design of any enforcement mechanism. By enforcement I mean not a mechanism to force a decision but rather a mechanism to enforce decisions once they are made. Accountability has at least two dimensions: accountability for the full costs of commitments that are to be made, and targeting enforcement to actions taken. It can also encompass the broader issue of taking responsibility for responding to unexpected events. For example, Congress and the President may want to consider periodically looking back and assessing the progress toward reducing the deficit. 
Such a process would be valuable because economic and technical factors driving direct spending program costs above anticipated levels have remained outside policymakers’ control. Finally, the process should be transparent, that is, understandable to those outside the process. I will turn now to the issue of enforcement. In considering any new enforcement mechanisms going forward, it is helpful to draw on the lessons learned from the past. Therefore, I will start with a brief history of budget enforcement mechanisms and a summary of the key lessons learned before turning to the design and implementation of budget enforcement mechanisms for today’s challenges. The process created in the Congressional Budget and Impoundment Control Act of 1974 was not designed to produce a specific result in terms of the deficit. Rather, it sought to assert the Congress’s role in setting overall federal fiscal policy and establishing spending priorities and to impose a structure and a timetable on the budget debate. Underlying the 1974 Act was the belief that Congress could become an equal player only if it—like the executive branch—could offer a single “budget statement” with an overall fiscal policy and an allocation across priorities. This was an important step. It was not until the Balanced Budget and Emergency Deficit Control Act of 1985—commonly known as Gramm-Rudman-Hollings or GRH—that the focus of the process changed from increasing congressional control over the budget to reducing the deficit. Both the original GRH and the 1987 amendments to it sought to achieve a balanced budget by establishing annual deficit targets to be enforced by automatic across-the-board “sequesters” if legislation failed to achieve the targets. GRH sought to hold Congress responsible for the deficit regardless of what drove the deficit. If the deficit grew because of the economy or demographics—factors not directly controllable by Congress—the sequester response dictated by GRH was the same as if the deficit grew because of congressional action or inaction. If a sequester was necessary, GRH did not differentiate between those programs where Congress had made cuts and those where there had been no cuts or even some increases. Finally, the timing of the annual “snapshot” determining the deficit and the size of the sequester and the fact that progress was measured 1 year at a time created a great incentive for achieving annual targets through short-term actions such as shifting the timing of outlays. GRH demonstrated that no process change can force agreement where one does not exist. However, the experiences gained led to the Budget Enforcement Act (BEA) of 1990. This act was designed to enforce substantive agreement on the discretionary caps and pay-as-you-go (PAYGO) neutrality reached by the President and Congress. BEA sought to influence the result by limiting congressional action. Unlike GRH, BEA held Congress accountable for what it could directly control through its actions, and not for the impact of the economy or demographics, which are beyond its direct control. BEA did this by dividing spending into two parts: PAYGO and discretionary. It imposed caps on the discretionary part that succeeded in holding down discretionary spending, and through PAYGO it constrained congressional actions to create new entitlements (whether through direct spending or tax preferences) or tax cuts. What then do I believe we have learned from GRH and BEA? Enforcing an agreement is more successful than forcing an agreement. 
Covering the full range of federal programs and activities—rather than exempting large portions of the budget—can strengthen the effectiveness of the controls and enforcement. Targeting sequestration to those areas that exceed their agreed-upon level creates better incentives than punishing all areas of the budget if only one fails to achieve its deficit reduction goal. Focusing on a longer time horizon can help Congress find a sustainable fiscal path rather than artificially shifting costs into the future. Incorporating a provision under which Congress would periodically look back at progress toward reducing the deficit can prompt action to bring the deficit path closer to the original goal. Budget process helped once to achieve a goal that had consensus; it could work again. While BEA’s focus on actions offered advantages for enforcement, it did not go far enough to meet today’s needs. BEA specified that Congress must appropriate only so much money each year for discretionary programs and that any legislated changes in entitlements and/or taxes during a session of Congress were to be deficit-neutral. The effect of this control on discretionary programs and on entitlements was quite different. Spending for discretionary programs is controlled by the appropriations process. Congress provides budget authority and specifies a period of availability. Controlling legislative action is the same as controlling spending. The amount appropriated can be specified and measured against a cap. For mandatory programs and revenues, controlling legislative actions is not the same as controlling spending or revenues. For an entitlement program, spending in any given year is the result of the interaction between the formula that governs that program and demographics or services provided. Similarly, for a tax provision, the revenue impact is not directly determined by Congress. Under BEA legislated changes in entitlements and taxes were to be deficit-neutral over multiyear periods. However, BEA did not seek to control changes in direct spending or in revenues (including tax expenditures) that resulted from other sources— whether from changes in the economy, changes in population, or changes in costs. Moving forward this is a major gap: it is the underlying structure of the budget that is driving the long-term fiscal imbalance. BEA succeeded as far as its reach. It controlled discretionary spending and prevented legislative expansion of entitlement programs and new tax cuts unless they were offset. However, it did nothing to deal with expansions built into the design of mandatory programs and the allocation of resources within the discretionary budget. Congress enacted a return to a statutory PAYGO process in 2010. As with the previous iteration, this can help prevent further deterioration of the fiscal position, but it does not deal with the existing imbalance. The problem confronting us today requires going beyond the “do no harm” or “stop digging” framework of BEA. Going forward, the budget process will need to encourage savings in all areas of the budget and contain mechanisms for automatic actions (whether spending cuts, reductions in tax expenditures, or surcharges) if agreed-upon targets are not met. Caps on discretionary spending—and Congress’s compliance with the caps—are relatively easy to measure because discretionary spending totals flow directly from legislative actions (i.e., appropriations laws). However, there are other issues in the design of any new caps. 
For example, what categories should be established within or in lieu of an overall cap? Categories define the range of what is permissible. By design they limit trade-offs and so constrain both Congress and the President. As I previously discussed, a category could be established for investment spending. Such a category could help Congress focus on spending that promotes economic growth within a framework that still constrains overall spending. Should these caps be ceilings, or should they—as was the case for highways and violent crime reduction—provide for “guaranteed” levels of funding? Because caps are defined in specific dollar amounts, it is important to address the question of when and for what reasons the caps should be adjusted. Without some provision for emergencies, no cap regime can be successful. The design of any provision for emergencies can be important. How easy will it be to label something an “emergency?” If the emergency is something like a natural disaster, at what point should the related spending be incorporated into the regular budget process rather than remain an emergency exception? The regular budget and appropriations process provides for greater legislative deliberation, procedural hurdles, and funding trade-offs which may be bypassed through the use of emergency supplementals. If appropriations committee oversight and procedural controls over the enactment of supplementals—whether all spending is designated emergency or not—are less than that applied to the regular process, there may be an incentive to expand the use of supplementals. In the past we have recommended a number of steps to improve budgeting for emergencies—both in terms of how much is provided in the budget for yet unknown emergencies and in terms of procedures and mechanisms to ensure that emergency supplementals do not become the vehicle for other items. It is worth noting that discretionary spending caps leave the decision about how to comply with the caps to the committees of jurisdiction. Budget control legislation has set the level of the caps, but it has not specified how much should be spent on each department or activity under the cap. Unlike discretionary spending, mandatory spending programs and tax expenditures are not amenable to simple “caps.” Further, even if a cap on mandatory programs were to be designed and imposed, it would not deal with the underlying structure of these programs and hence would not address the longer-term growth trends. An alternative that would be more consistent with the design of these programs would be to set savings targets or specify a downward trend. Under the current budget process, if Congress wishes reductions in mandatory programs or increases in revenues, it may use reconciliation instructions to assign targets to the committees of jurisdiction; it does not generally direct those committees as to the specific nature of the change to meet such targets. While changing our long-term fiscal path requires looking down the road, we should start now. If Congress were to agree on a fiscal goal and set targets along a multiyear path, then enforcement would be tied to those targets and that path. The lessons of GRH and BEA could be applied: tie enforcement to actions. A look-back provision would create a mechanism to reconcile results with intent. The growth of some mandatory programs might be slowed by creating program-specific triggers which, when tripped, prompt a response. 
A trigger could result in a "hard" or automatic response, unless Congress and the President acted to override or alter it. By identifying significant increases in the spending path of a mandatory program relatively early and acting to constrain it, Congress and the President could avert larger financial challenges in the future. A similar approach might be applied to tax expenditures, which operate like mandatory programs but do not compete in the annual appropriations process.

Since the growing deficit and debt are a function of the structural and growing imbalance between spending and revenues, we have said that both sides of the equation should be covered by whatever enforcement mechanism is selected. At the same time, the design of the mechanism must recognize the differences in design and hence in control of discretionary spending, mandatory spending, spending through the tax code in the form of tax expenditures, and revenues. As a general rule, incentives or penalties—which are what enforcement mechanisms often serve as—are most successful if they are plausible and tied to a failure to act rather than imposed too broadly. As I noted, we have said that enforcement is an important part of any budget process; in designing enforcement mechanisms it is important to pay attention not only to their interaction with the design of different parts of the budget but also to any perverse incentives or unintended consequences that are likely to result.

Finally, I would like to comment about one measure that does not serve as an enforcement mechanism but is often misunderstood as one: the debt limit. The debt limit does not control or limit the ability of the federal government to run deficits or incur obligations. Debt reflects previously enacted tax and spending decisions. The debt limit, therefore, is a limit on the ability to pay obligations already legally incurred. If the level of debt—or debt as a share of GDP—is to serve as a fiscal policy goal or limit, then it must constrain the decisions that lead to debt increases when those decisions are made. Our recent work highlights some options for better linking spending and revenue decisions to the decisions about the debt limit at the time that those decisions are made. For example, many have suggested that since the Congress's annual budget resolution reflects aggregate fiscal policy decisions, including levels of federal debt, this would be the appropriate point in the budget process to make the necessary adjustments to the debt limit. If that were done, then Congress might also adopt a process whereby any legislation that would increase federal debt beyond that envisioned in the resolution would also contain a separate title raising the debt limit by the appropriate amount. Congress took this approach with three pieces of legislation enacted in 2008 and 2009: the Housing and Economic Recovery Act of 2008, the Emergency Economic Stabilization Act of 2008, and the American Recovery and Reinvestment Act of 2009 each included a separate provision increasing the debt limit.

The budget process is the source of a great deal of frustration. The public finds it hard to understand. Members of Congress complain that it is time-consuming and duplicative, requiring frequent votes on the same thing. And, too often, the results are not what was expected or desired. It is inevitable that, given the nature of today's budget challenge, there will be frustration.
It is important, however, to try to separate frustration with process from frustration over policy. To change the fiscal path requires hard decisions about what government will and will not do and how it will be funded. A process may facilitate the debate, but it cannot make the decision. Enforcement mechanisms are not terribly successful in forcing actions when there is little agreement on those actions. Carefully designed mechanisms, however, can enforce agreements that have already been made and ensure compliance.

Chairman Baucus, Senator Hatch, Members of the Committee, this concludes my statement. I am happy to answer any questions and provide any assistance as you move forward in this important endeavor.

We conducted our work from April to May 2011 in accordance with all sections of GAO's Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this statement.

For further information regarding this testimony, please contact Susan J. Irving, Director for Federal Budget Issues, Strategic Issues, at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Carol Henn, James McTigue, and Thomas McCabe.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
As Congress considers the role and design of appropriate budget enforcement mechanisms in changing the government's fiscal path, this testimony outlines some elements that could facilitate debate and contribute to efforts to place the government on a more sustainable long-term fiscal path. Budgeting is the process by which we as a nation resolve the large number of often conflicting objectives that citizens seek to achieve through government action. The budget determines the fiscal policy stance of the government--that is, the relationship between spending and revenues. And it is through the budget process that Congress and the President reach agreement about the areas in which the federal government will be involved and in what way. Because these decisions are so important, we expect a great deal from our budget and budget process. We want the budget to be clear and understandable. We want the process to be simple--or at least not too complex. But at the same time we want a process that presents Congress and the American people with a framework to understand the significant choices and the information necessary to make the best-informed decisions about federal tax and spending policy. This is not easy. Since our first simulations in 1992, we have continued to report on the nature and drivers of the long-term imbalance and on mechanisms to help address the challenge. Focusing on the long term does not mean ignoring the near term. While concerns about the strength of the economy may argue for phasing in policy changes over time, the longer action to change the government's long-term fiscal path is delayed, the greater the risk that the eventual changes will be more disruptive and more destabilizing. Starting on the path to sustainability now offers many advantages. Our increased awareness of the dangers presented by the long-term fiscal outlook leads to a focus on enforcement provisions within the budget process that can facilitate the debate and contribute to efforts to put the government on a more sustainable long-term fiscal path. The budget process is the framework within which enforcement mechanisms exist. No process can force choices Congress and the President are unwilling to make. Having an agreed-upon goal justifies and frames the choices that must be made. A budget process can facilitate or hamper substantive decisions, but it cannot replace them. While no process can substitute for making the difficult choices, it can help structure the debate. The budget structure can make clear information necessary for important decisions or the structure can make some information harder to find. The process can highlight trade-offs and set rules for action. In our past work, we have identified four broad principles or criteria for a budget process that can help Congress consider the design and structure of future budget enforcement mechanisms. A process should 1. provide information about the long-term effect of decisions, both macro--linking fiscal policy to the long-term economic outlook--and micro--providing recognition of the long-term spending implications of government commitments, 2. provide information and be structured to focus on important trade-offs such as the trade-off between investment and consumption spending, 3. provide information necessary to make informed trade-offs between the different policy tools of government (such as tax provisions, grants, and credit programs), and 4. 
be enforceable, provide for control and accountability, and be transparent, using clear, consistent definitions. First, selecting the appropriate time horizon in which the budgetary impact of policy decisions should be measured is not just an abstract question for analysts. If the time horizon is too short, Congress may have insufficient information about the potential cost of a program. In addition, too short a time horizon may create incentives to artificially shift costs into the future rather than find a sustainable solution. Second, the structure and rules can determine the nature of the trade-offs surfaced during the budget process. Consumption may be favored over investment because the initial cost of an infrastructure project looks high in comparison to support for consumption. Distinguishing between support for current consumption and investing in economic growth in the budget would help eliminate a perceived bias against investments requiring large up-front spending. The third principle focuses on the method through which the federal government provides support for any federal goal or objective. The budget and budget process should provide the information necessary to permit looking across federal agencies and policy tools--which means across committee jurisdictions--to make an informed choice. Lastly, the budget process should be enforceable, provide for control and accountability, and be transparent.
Many of the health conditions that people age 65 and older experience are preventable and linked to specific health risks. Some health risks are difficult to change, and some, such as a hereditary predisposition for a given disease, cannot be changed. For these, preventive services such as cancer screens can help identify disease in its early stages so that people can be referred to other services that can help manage or treat the disease. Other health risks, such as complications from influenza, can be successfully reduced by targeted preventive services. For example, studies show that immunizations against influenza can prevent thousands of hospitalizations and deaths each year among those age 65 and older. Health risks such as high blood pressure and high cholesterol are also considered health conditions because, if left alone, they can develop into potentially more significant conditions, such as cardiovascular disease, or lead to stroke. The term preventive care covers a wide spectrum of actions aimed at reducing risks for deteriorating health and improving the detection and management of disease. Generally, preventive care is intended for three purposes: To prevent a health condition from occurring at all. Vaccinations and physical activity to reduce the risk of heart disease, for example, qualify as this first type of preventive care (termed primary prevention). To prevent or slow a condition’s progression to more significant health conditions by detecting a disease in its early stages. Mammograms to detect breast cancer and other screens to detect disease early are examples of this second type of preventive care (termed secondary prevention). To prevent or slow a condition’s progression to more significant health conditions by minimizing the consequences of a disease. Care coordination and self-management of an existing disease, such as diabetes or asthma, are examples of this third type of preventive care (termed tertiary prevention). Many people associate the idea of preventive care with annual physical examinations, or “routine checkups,” by a family doctor, a practice first proposed by the American Medical Association (AMA) in the early twentieth century. In the early 1980s, however, the AMA determined that appropriate preventive care depends on an individual’s age and particular health risks, not simply on the results of a standard battery of tests. To evaluate preventive care for different age and risk groups, HHS in 1984 established a panel of experts called the U.S. Preventive Services Task Force. At present, the task force recommends certain screening, immunization, and counseling services for people age 65 and older (see app. II). Medicare covers some, but not all, of the task force-recommended preventive services (see comparison in app. II). Medicare’s fee-for-service program—which comprises approximately 84 percent of Medicare beneficiaries—does not cover periodic checkups, where clinicians might assess an individual’s health risk and provide needed preventive services. These Medicare beneficiaries may, however, receive some of these services during office visits for other health problems. Under Medicare + Choice, which covers about 14 percent of Medicare beneficiaries, a benefit for periodic checkups generally does exist. Medicare beneficiaries typically visit a physician several times during a year and most receive some preventive services, but most do not receive the full range of recommended services. Based on 2000 survey data and U.S. 
Bureau of the Census estimates of people age 65 and older, we estimate that beneficiaries visit a physician at least six times a year, on average, mainly for illnesses or medical conditions. About 1 in 10 visits occurred when beneficiaries were well, and most Medicare beneficiaries reported having what they considered to be a “routine checkup” in the previous year. The purposes of these routine checkups and the specific services that are delivered during these visits, however, remain unknown. Many Medicare beneficiaries did not receive recommended preventive services, such as influenza and pneumonia immunizations. Moreover, another national survey indicated that a substantial share of Medicare beneficiaries who were at risk for a condition that preventive services are meant to identify said that they had not been told by a health professional that they might have that condition. In 2000, 88 percent of Medicare beneficiaries reported that they visited a physician at least once that year. On the basis of data from CDC’s National Ambulatory Medical Care Survey, we estimate that, on average, beneficiaries visit physicians at least six times a year. Almost 9 in 10 visits made by beneficiaries in the fee-for-service program were to treat illnesses or health conditions: more than half the visits targeted preexisting (chronic) problems, more than one-fourth targeted illnesses of sudden or recent onset (acute), and about 10 percent of visits took place pre- or postsurgery or to follow up after injuries. Only about 10 percent of visits dealt with nonillness care when the patient was considered healthy (see fig. 1). Even though the majority of visits to physicians are for treating illness or health conditions, most Medicare beneficiaries reported receiving routine checkups. In CDC’s 2000 Behavioral Risk Factor Surveillance System Survey, for example, 93 percent of respondents age 65 and older reported that they had received a “routine checkup” within the previous 2 years. This survey did not, however, provide information on which specific services were delivered during those checkups. Indeed, as the following section shows, few beneficiaries receive all recommended services, although they receive some preventive services during visits when they are healthy as well as during visits to treat illnesses or health conditions. Despite how often Medicare beneficiaries visit physicians, many of them do not receive a full complement of recommended preventive services, including some recommended by the U.S. Preventive Services Task Force and currently covered by Medicare. As we reported earlier, use of specific preventive services varies widely by service. Although each preventive service we reviewed was delivered to a majority of Medicare beneficiaries, relatively few beneficiaries received the full range of preventive services. For example, 91 percent of female Medicare beneficiaries received at least one preventive service, but only 10 percent were screened for cervical, breast, and colon cancer and also immunized against influenza and pneumonia. Our analysis of additional data since our previous report shows that many Medicare beneficiaries still do not receive certain recommended preventive services. The task force recommends, for example, that all people age 65 and older receive an annual influenza vaccination and at least one pneumonia vaccination. 
In CMS's Medicare Current Beneficiary Survey of 2000, however, about 30 percent of Medicare beneficiaries did not receive an influenza vaccination, and 37 percent had never had a pneumonia vaccination.

Survey data showing the services provided during office visits indicate that Medicare beneficiaries do receive some preventive services during visits when they are ill or being treated for a health condition, and services are delivered at comparable rates during all types of visits, whether for nonillness care or for treating acute or chronic conditions. Beneficiaries in the fee-for-service program receive preventive services, such as cholesterol and blood tests, during visits when they are healthy and during visits to treat acute or chronic health conditions. Some tests are typically provided or ordered slightly more often during visits for nonillness care. In 2000, for example, blood tests for anemia were provided in about 16 percent of visits for nonillness care, compared with 7 percent of visits for chronic problems and 5 percent of visits for acute conditions. Other preventive services were provided at similar rates during the different types of visits. For example, we estimate that blood pressure measurement, a clinical screen for conditions such as hypertension, was done during 56 to 62 percent of visits, depending on the type of visit. Diet counseling services were provided during 13 to 20 percent of visits, depending on the type of visit.

Many Medicare beneficiaries may not know that they are at risk for health conditions that preventive care could detect—strong evidence that they may not be receiving the full range of recommended preventive services. For example, data from CDC's NHANES for 1999–2000 show that, of beneficiaries participating in this nationally representative survey who had a physical examination and were found to have elevated blood pressure readings at the time of the examination, 32 percent reported that no physician or other health professional had ever told them about the condition. On the basis of this survey, we estimate that, during the period when the survey was conducted, 21 million Medicare beneficiaries may have been at risk for high blood pressure, and an estimated 6.6 million of them may have been unaware of this risk. Similarly, 32 percent of those found in the 1999–2000 survey to have a high cholesterol level reported that no one had told them that they had high cholesterol. Projected nationally, this percentage translates into 2.1 million Medicare beneficiaries (see fig. 2).

The Medicare + Choice plans we reviewed vary in their specific strategies for delivering preventive services, but several common themes emerge from their efforts. First, nearly all identify members' health risks and inform them or their providers about specific services that might be needed. For example, some plans mail questionnaires to members, seeking information, such as when certain screening tests were last performed; other plans review claims and prescription data to identify at-risk members who might need a screening test or other preventive service. Second, all plans have follow-up strategies to help beneficiaries obtain needed preventive services, although their strategies and priorities vary. Third, while limited data provided by some plans suggest promising results, most plans have not evaluated the degree to which their strategies improve health outcomes or affect health care costs for Medicare beneficiaries.
Although all the Medicare + Choice plans we reviewed use questionnaires to meet the requirement that they conduct health assessments for newly enrolled Medicare beneficiaries, they use a combination of approaches to identify health risks. The particular risks that plans seek to identify vary from plan to plan. Risks include those associated with depression or lack of physical activity; risks from not obtaining recommended immunizations or screenings, such as mammography; and more general risk of short-term hospitalization or illness. For example, Group Health Cooperative, Highmark Blue Cross and Blue Shield, and Kaiser Permanente use questionnaire information to calculate a risk score meant to represent each enrollee’s probability of using health services heavily in the future. From its questionnaire, Kaiser Permanente also calculates the probability of 3-year survival for enrollees who have an existing advanced illness, as well as the probability that they will become dependent on others for daily care or need nursing home services during the next year (a condition Kaiser Permanente officials refer to as frailty). Oxford Health Plan, on the other hand, analyzes questionnaire data to assign enrollees a risk classification of high, moderate, or low and assigns patients to health management teams or programs appropriate for each risk level. For existing members, plans use slightly different approaches to identify health risks, including information from claims and pharmacy data, annual risk assessment questionnaires, physician visits, and computer systems (called registries) that indicate when patients require specific preventive services. The specific approaches vary from plan to plan. For instance, Group Health Cooperative officials reported that they review the health risks, such as the immunization status, of their existing members through health maintenance visits, which they encourage Medicare beneficiaries to have every 2 years. During this visit, the provider reviews responses to a completed questionnaire that each patient is asked to bring to the visit and updates computer registry data, compiled from previous risk assessment questionnaires and physician visits. AvMed conducts a health risk assessment for each of its Medicare members and also uses claims and pharmacy data to identify members with specific diseases, so as to target preventive services. For example, using pharmacy and claims data to identify people with diabetes, AvMed invites these members to a health fair featuring services to prevent further progression of the disease. Paying a single copayment to attend the health fair, members can receive a number of services, such as a blood draw for laboratory work and vision and glaucoma screening. Finally, some plans report that they have increased the use of specific preventive services through their participation in CMS-required national performance improvement projects. For example, Highmark reported that in 2002 the plan used medical claims data to identify female Medicare beneficiaries who had not received a mammogram within the past 2 years and notified the beneficiaries and their physicians. As a result, the officials reported that 60 percent of contacted beneficiaries went on to receive mammograms. 
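The risk-stratification approaches described above amount to scoring questionnaire responses and mapping the scores to follow-up actions. The sketch below is purely illustrative: the questions, scoring weights, and thresholds are hypothetical and are not drawn from any plan's actual algorithm. It shows, in simplified form, how questionnaire data might be turned into a high/moderate/low classification of the kind Oxford Health Plan is described as using.

```python
# Hypothetical illustration only: the questions, weights, and thresholds below are
# invented for this sketch and do not represent any plan's actual risk model.

from dataclasses import dataclass

@dataclass
class QuestionnaireResponse:
    age: int
    hospitalized_last_year: bool      # any inpatient stay in the past 12 months
    chronic_conditions: int           # self-reported count (e.g., diabetes, CHF)
    flu_shot_last_year: bool
    days_exercised_per_week: int

def risk_score(r: QuestionnaireResponse) -> int:
    """Add weighted points for each self-reported risk factor (weights are hypothetical)."""
    score = 0
    score += 2 if r.age >= 75 else 0
    score += 3 if r.hospitalized_last_year else 0
    score += 2 * r.chronic_conditions
    score += 1 if not r.flu_shot_last_year else 0
    score += 1 if r.days_exercised_per_week < 2 else 0
    return score

def classify(score: int) -> str:
    """Map the score to a high/moderate/low classification (cutoffs are hypothetical)."""
    if score >= 6:
        return "high"      # e.g., refer to a nurse care-management team
    if score >= 3:
        return "moderate"  # e.g., targeted disease management outreach
    return "low"           # e.g., routine preventive reminders only

if __name__ == "__main__":
    member = QuestionnaireResponse(age=78, hospitalized_last_year=True,
                                   chronic_conditions=2, flu_shot_last_year=False,
                                   days_exercised_per_week=0)
    s = risk_score(member)
    print(f"score={s}, classification={classify(s)}")
```

In practice, the plans reported combining questionnaire-derived scores with pharmacy and claims data before assigning members to management teams or programs.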
After identifying the health risks of Medicare beneficiaries—whether new enrollees or existing members—plans we contacted reported that they also make efforts to follow up on that information by providing feedback to enrollees about risks and referring them to specific, risk-related preventive services. For example, all plans have approaches to prevent disease progression for individuals identified as having chronic health conditions. The plans sometimes differ in their types of follow-up and in their emphasis on different types of preventive services. Some plans we reviewed, for example, stress primary prevention activities, such as exercise programs for all members, to a greater degree than others. To provide feedback, many plans contact members directly through letters or phone calls, encourage contact with primary care physicians, or combine written or oral feedback with follow-up physician examinations (see table 1). The plan-specific feedback approaches described in table 1 include the following:

- Using data available on a computer registry, health professionals can review specific health risks with members. Health professionals also monitor the computer registry to track services members use.
- For new enrollees, physicians review a summary report and provide feedback during an initial office visit. In San Diego, existing members who visit health assessment centers receive a letter, based on a completed questionnaire and tests estimating "health age," that discusses ways of decreasing specific health risks, and they receive a second visit for a complete exam.
- Various departments receive health risk reports based on risk assessment questionnaires. Reports for high-risk members go to teams of registered nurses, who contact the members and their primary care physicians to coordinate care.
- The plan sends results of the health risk assessment to physicians to facilitate discussion with patients. Members with risks related to smoking, heart disease, or osteoporosis receive letters. New members identified as at risk for being frail are referred to case managers, and members identified with chronic disease are referred to a condition management program for targeted interventions.
- Physicians receive health risk information from risk assessment questionnaires and pharmacy and claims data. Members identified as having specific risks are contacted directly by the plan if health promotion or disease management programs are available for them.

In addition to educating members about their health risks, some plans also link members to specific preventive services to reduce or mitigate these risks. For example, plans may send targeted health promotion materials; offer 24-hour telephone access to a nurse to discuss health concerns; or offer access to fitness programs, nutrition courses, immunizations, exams, and disease management or care coordination programs. These care coordination programs resolve health care issues through various means, such as in-depth telephone evaluations, communication with primary care physicians, in-home visits, or connections with community resources like Meals on Wheels. To refer Medicare members to preventive services, one plan we contacted emphasized directing them to primary prevention services, such as physical activity programs, while another plan emphasized connecting members to tertiary prevention services, such as disease management programs.
For example, identifying physical activity and social isolation as two important predictors of overall health outcomes for seniors, Group Health Cooperative refers Medicare members to physical activity benefits and other primary prevention services. In contrast, acknowledging that most individuals age 65 or older have more than one chronic health condition, AvMed focuses more on identifying members with existing conditions and referring them to preventive services that can mitigate the condition. AvMed has created eight disease management programs covering conditions such as congestive heart failure, asthma, and diabetes. The goal is to provide members having these conditions with a series of condition-specific care interventions. For example, interventions for AvMed enrollees in the congestive heart failure program include prescribing specific drugs (such as ACE inhibitors, diuretics, and beta-blockers), providing self-directed care plans, and monitoring weight.

Some plans described how they track the success of their efforts to provide people with specific preventive care interventions. Highmark, for example, offers financial incentives to physicians who follow specific clinical guidelines for a given condition. The plan also gives physicians quarterly report cards, generated by a computer registry, that indicate whether their patients have received all the care recommended by the management programs in which the patients are enrolled. AvMed, on the other hand, tracks the number of members identified as eligible for specific disease management programs, whether the program was offered to all eligible members, and the number who enrolled. AvMed also reported setting, monitoring, and reporting on performance goals for the percentage of members receiving specific care interventions. For example, for enrollees in the congestive heart failure management program, AvMed tracks the percentage receiving an ACE inhibitor drug.

Few of the health plans we contacted had specifically evaluated whether their approaches to risk identification and reduction lead either to improved health outcomes for Medicare beneficiaries or to cost savings for the plan. From those plans that have such information, the available data suggest that offering disease management programs to people who have existing health conditions may hold promise, but most plans lacked evidence from controlled studies of a specific benefit to their Medicare members. AvMed and Oxford are among the plans that have evaluated whether their approach improves health outcomes and saves money. For example, AvMed plan officials observed that, in all AvMed plans, including its Medicare + Choice plan, AvMed members with existing chronic conditions spent fewer days in the hospital during the same period when more of their members with existing conditions were enrolled in disease management programs. According to AvMed officials, between 2001 and 2002, shorter hospital stays of Medicare congestive heart failure patients led to total savings of $1 million, and shorter hospital stays of asthma patients from all plans (not limited to Medicare beneficiaries) led to savings of $400,000. Similarly, Oxford has estimated savings attributed to various interventions, such as a mean savings of $219 per member per month from Medicare beneficiaries who voluntarily participated in a self-management workshop for diabetes, as compared with a random group of diabetic members who did not attend the workshop.
Although these findings show potential to improve health and decrease costs, it is unclear from this information whether the decreased length of hospitalization and cost savings resulted from disease management or from other factors. It is also not clear what the long-term effects may be on Medicare beneficiaries and whether these observations would also apply to beneficiaries in a fee-for-service environment.

Some plans are evaluating specific aspects of their approaches as a first step in determining which approaches are effective. For example, Kaiser Permanente officials provided data demonstrating their ability to identify a certain type of health risk among Medicare beneficiaries, but they did not provide data demonstrating that their overall approaches to risk identification or risk reduction resulted in improved health outcomes or cost savings. Specifically, they found that three questions on the risk assessment questionnaire, along with the patient's age, predicted with a high degree of accuracy whether a person would need daily assistance from another person during the following year. Kaiser identified these people as at risk for frailty and through additional study found that, over the next decade, frail people spent more days in nursing homes than individuals who were not frail. Kaiser Permanente officials told us that they have not identified interventions that decrease or prevent frailty from developing but were instead focusing on identifying interventions to improve outcomes for those people once they were identified as frail.

In addition to reviewing the efforts of contacted Medicare + Choice plans, we reviewed several studies that evaluated the effectiveness of employer-sponsored approaches to providing preventive services, such as health risk assessment and feedback, to both employees and retirees. Although these studies conclude that employer-sponsored approaches hold promise in terms of increasing preventive services, improving health outcomes, and lowering cost, we found the results limited in how they might be generalized to all Medicare beneficiaries. For example, General Motors evaluated its companywide prevention program, which offered health risk assessments, individualized health profiles, a quarterly newsletter, a self-care book, and a toll-free health information line. The company reported that providing risk assessment and feedback helped participants lower their health risk status and that nearly half of this benefit was realized within the first of 5 years. Although General Motors provides a similar risk appraisal program to retirees, this study did not include them, so the study's finding cannot be generalized to the Medicare population.

Several options have been suggested for improving the provision of preventive services within Medicare's fee-for-service program. They include adding a new benefit for a nonillness-related examination, either a one-time "welcome-to-Medicare" examination for new beneficiaries or an examination available to all beneficiaries on a periodic basis. Although covering a one-time or periodic nonillness examination could be easily administered and could increase the receipt of some preventive services, doing so could also increase Medicare costs without necessarily ensuring that beneficiaries receive the full range of preventive services. CMS has tested similar options in the past and found that they produced mixed results.
It is now examining an alternative that would essentially create a different structure using nonphysician providers to assess health risks and connect individuals with preventive services. The design work will be completed at the end of 2003, and if the decision is made to conduct a demonstration, results would not be available for several years after that. Additional demonstrations also under way—such as one exploring effective smoking cessation approaches and one giving physicians incentives to coordinate and manage the overall health care needs of beneficiaries—may provide additional insights into coordinating and delivering appropriate preventive services within the Medicare fee-for-service program.

A one-time "welcome-to-Medicare" examination for new beneficiaries has been proposed as a means to better ensure that health care providers have enough time to identify individual Medicare beneficiaries' health risks and provide preventive services appropriate for their risks. Proponents assert that a one-time benefit could combine a health evaluation with screenings and immunizations, along with counseling about health promotion and disease prevention. It could also orient new beneficiaries to Medicare and encourage them to make informed choices about providers and plans. Health risk assessment and behavior counseling could be provided by a range of nonphysician professionals, including nurses, counselors, and dietitians. A similar option would have Medicare cover an annual or periodic preventive visit available to all fee-for-service beneficiaries. In theory, many of the advantages of a one-time preventive visit would also apply to periodic examinations. For instance, dedicated preventive visits might provide greater opportunities for health care providers to assess and address health risks. Some evidence also suggests that a periodic health examination may increase use of preventive cancer screening and counseling services. For example, a National Cancer Institute-supported study surveyed general internists and family physician practices and their patients in 1992 and found that patients who had received a periodic health examination within the previous year were substantially more likely to have received appropriate cancer screening and counseling.

While these options have benefits, they also have potential drawbacks. Adding a benefit for a one-time or periodic examination to the Medicare fee-for-service package could increase the program's costs without necessarily ensuring that beneficiaries receive the full range of preventive services. The Congressional Budget Office in June 2002 estimated that a one-time physical examination benefit for new enrollees could cost as much as $1.6 billion over the 2003–2012 period. According to a Congressional Budget Office official, the agency has not recently estimated the potential costs of a Medicare benefit for examinations provided on a periodic basis. This cost, however, would likely be substantially higher than that of a one-time visit for new beneficiaries. At the same time, establishing such a benefit would not necessarily ensure delivery of the full range of preventive services. In addition, primary care physicians typically cannot provide services such as mammography screenings for breast cancer and colonoscopies for colon cancer, because these services usually require specialists. It also remains uncertain whether covering a one-time or periodic examination would be an effective means of improving beneficiary health outcomes.
A previous CMS initiative that included preventive health care visits ended with mixed results. In the late 1980s and early 1990s, the agency conducted a congressionally mandated demonstration to test varied health promotion and disease prevention services, such as free preventive visits, health risk assessment, and behavior counseling, to see if they would increase use of preventive services, improve health outcomes, and lower health care expenditures for Medicare beneficiaries. The agency's final report, published in 1998, concluded that the demonstration services were marginally effective in raising the use of some simple disease prevention measures, such as immunizations and cancer screenings, but did not consistently improve beneficiary health outcomes or reduce the use of hospital and skilled nursing services.

CMS is exploring one alternative for Medicare preventive care that would provide systematic health risk assessments to fee-for-service beneficiaries through a means other than physician visits. In the late 1990s, the agency commissioned the RAND Corporation to evaluate the potential effectiveness of health risk assessment programs. Similar to the approaches taken by the Medicare + Choice plans we reviewed, such programs collect information from individuals; identify their risk factors; and refer the individuals to at least one intervention to promote health, sustain function, or prevent disease. The study concluded that health risk assessment programs have increased beneficial behavior (particularly exercise) and improved physiological variables (particularly diastolic blood pressure and weight) and general health status. It also concluded that more research would help clarify the programs' effects on preventive services such as clinical screening. In addition, the study stated that to be effective, risk assessment questionnaires must be coupled with follow-up interventions such as referrals to appropriate services. The study found limited but encouraging evidence on the effectiveness of health risk assessment programs but concluded that the evidence was insufficient to accurately estimate the programs' cost-effectiveness. The study recommended that CMS conduct a demonstration to test cost-effectiveness and other aspects of the health risk assessment approach for Medicare beneficiaries.

Following up on the study's findings, CMS has begun designing a fee-for-service-focused demonstration project, called the Medicare Senior Risk Reduction Program, to identify health risks and follow up with preventive services provided by means other than physician visits. The program will use a beneficiary-focused health risk assessment questionnaire to assess health risks, such as lifestyle behaviors, and use of clinical preventive and screening services. Because the demonstration is still in its design phase, the particular set of risk factors to be included is not yet final. Risk factors that might be addressed include preventable accidents such as falls, lack of exercise, high blood pressure, obesity, and use of preventive services. The Medicare Senior Risk Reduction Program will test different approaches to administering health risk assessments, creating feedback reports, and providing follow-up services, such as referring beneficiaries to health-promoting community services including physical activity and social support groups.
According to project researchers, the program will tailor preventive interventions to individual risks; track patient risks and health over time; and provide beneficiaries with self-management tools and information, health behavior advice, and end-of-life counseling where appropriate. The design phase is scheduled for completion in late 2003, when CMS will decide whether to conduct a full demonstration. According to CMS officials, the potential demonstration's final cost was uncertain at the time our report was completed. CMS is spending approximately $1 million on the developmental work. Unlike some health risk assessment programs, CMS's program will be limited to questionnaires and follow-up contacts; it will not directly provide clinical screening such as blood pressure or cholesterol measurements. Instead, the program will concentrate on identifying, through information provided by the beneficiary, any modifiable lifestyle and behavioral risk factors and on referring beneficiaries to services for reducing those risks. CMS officials and researchers did indicate, however, that the program's risk assessment tools will collect information on needed immunizations and cancer screenings and alert beneficiaries and their physicians to any needed services.

CMS has other initiatives under way that may help improve the delivery of preventive services within the fee-for-service program. The first is the Medicare Stop Smoking Program, a smoking cessation demonstration project for fee-for-service beneficiaries. Recognizing that smoking is the single most preventable cause of disease and death in the United States, posing a significant health risk to the aged, CMS launched the demonstration to identify the most effective service to help beneficiaries stop smoking. The demonstration will evaluate the effectiveness of different smoking cessation services. The four services being tested are: (1) reimbursement for provider counseling, (2) reimbursement for provider counseling and for smoking cessation drugs or nicotine replacement therapy, (3) access to a telephone counseling quit-line plus reimbursement for nicotine replacement therapy, and (4) provision of written information on smoking cessation. Seven states are participating in the demonstration: Alabama, Florida, Missouri, Ohio, Oklahoma, Nebraska, and Wyoming. The study will be completed in 2004, with the results published in 2005. CMS has budgeted approximately $14 million for this project.

CMS is also developing a physician group-practice demonstration that was required by the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000. The aim of this demonstration is to provide incentives for physicians to coordinate and manage the overall health care needs of Medicare fee-for-service beneficiaries, especially those with chronic health conditions. Under the 3-year demonstration, physician groups will be paid on a fee-for-service basis and may, in some circumstances, earn a bonus from savings achieved if the average Medicare expenditure for beneficiaries in their group of patients is below an established target. Up to six physician group practices will be selected to participate in the demonstration, which is expected to start during 2003. Under the mandate, the aggregate expenditures for this demonstration must be budget neutral. Any bonus payments made to physician groups must therefore be taken from savings produced by the participating organizations.
Finally, a 4-year coordinated-care demonstration is currently under way at 16 sites. Authorized by the Balanced Budget Act of 1997, this demonstration examines private-sector best practices for coordinating the care of patients with complex chronic conditions. These conditions include congestive heart failure, other heart and lung diseases, liver diseases, diabetes, psychiatric disorders, Alzheimer’s disease or other dementia, and cancer. CMS is testing whether care coordination programs—such as those that develop a plan of care after a complete assessment of patient needs and offer patient education, health care service arrangements, and coordination with providers—can, without increasing program costs, improve the quality of care and reduce avoidable hospital admissions among Medicare beneficiaries with chronic diseases. The selected sites mix case management and disease management models in their practices; operate in urban and rural settings around the country; and include hospitals, retirement communities, and academic medical centers. CMS is required to formally evaluate the projects every 2 years after implementation and report to the Congress on its findings. HHS officially announced the selected sites in January 2001, and as of May 2003, the 16 sites had enrolled approximately 10,000 Medicare beneficiaries in the demonstration. CMS officials stated that the demonstration could eventually enroll more than 36,000 beneficiaries, although half of these will serve as a control group who will not receive coordinated care. CMS officials told us that they expect this demonstration to also be budget neutral. That is, they anticipate that overall costs to Medicare for providing the services will be offset by savings achieved from providing the care coordination services. Most Medicare beneficiaries receive some preventive services, but many do not receive services that can help prevent and manage their health risks and conditions early, before significant health problems occur. Services recommended for all people in this age group are not delivered consistently. Perhaps of most concern, nearly one-third of beneficiaries who were screened and identified as having elevated blood pressure or high cholesterol measures in a nationally representative survey had not previously been told by their physicians or other health providers that they had these conditions. Projected nationally, the survey results translate into millions of people who could be unaware that they have a health condition whose treatment could prevent or delay much more significant health concerns. The solutions to ensure that beneficiaries receive needed services are not obvious. The experience of selected Medicare + Choice plans shows that no single approach stands out. All plans we contacted had a means to identify health risks, to provide feedback on risks to patients or their physicians, and to follow up with interventions to reduce those risks. But the follow-up programs, approaches, and priorities differed among the plans we contacted, and few had evaluated their approaches in a manner that would indicate whether these programs could, without significantly increasing costs, improve health outcomes for Medicare beneficiaries. Nevertheless, some current research shows promise for improving the delivery of preventive services—particularly when there are follow-up interventions, such as referrals to appropriate services. We obtained comments on our draft from HHS as well as from the health plans we contacted. 
HHS generally concurred with our findings and provided examples of CMS’s successes in promoting existing preventive services and in identifying strategies that might be used in future health promotion efforts. HHS also clarified the status of its program evaluating the use of individual health risk assessments, which is in development, and clarified its Medicare Stop Smoking Program, which will assess options for a new benefit for smoking cessation but not necessarily lead to CMS coverage for these benefits. HHS emphasized that only the Congress can decide which preventive services or benefits Medicare covers. HHS also updated its estimate of this program’s budget. We incorporated these clarifications in the draft. HHS also commented that without sufficient evidence, the report links beneficiaries’ lack of knowledge that they may have certain conditions, such as high blood pressure, with evidence that they are not receiving the full range of preventive services. We did not intend to link these statements, but we have independent evidence for each of them and have added information to our summary of results to help clarify this evidence. HHS’s comments are reproduced in appendix IV. HHS and the health plans also provided technical comments that we considered and incorporated where appropriate. As arranged with your office, unless you release its contents earlier, we plan no further distribution of this report until 30 days after its issue date. We are sending copies of this report to the Secretary of HHS, the Administrator of CMS, the Director of CDC, and others who are interested. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-7119 or Katherine Iritani, Assistant Director, at (206) 287-4820. Other individuals who made contributions to this report include Matthew Byer, Sophia Ku, and Tina Schwien. Because no single source contained all the information we needed to assess the extent to which Medicare beneficiaries receive preventive services through existing physician visits, we used data from four national health surveys: three conducted by the Centers for Disease Control and Prevention (CDC) and one conducted by the Centers for Medicare & Medicaid Services (CMS) (see table 2). For example, CMS’s Medicare Current Beneficiary Survey samples Medicare beneficiaries, asking them for detailed information on their demographic characteristics, insurance coverage, and health status but asking only a few questions about specific preventive services received during physician visits. In contrast, CDC’s National Ambulatory Medical Care Survey samples physicians about office visits, rather than the people who made those visits. The survey contains information about reasons for office visits and about diagnostic and preventive services provided during visits, but it cannot be used to determine the extent to which Medicare beneficiaries received these services. For our analyses of these surveys, we extracted data for people age 65 and older to represent Medicare beneficiaries, because almost 95 percent of the population in this age group was enrolled in Medicare in 2000. 
Also, because the National Ambulatory Medical Care Survey samples office visits to physicians rather than the people who made the visits, we estimated the average number of physician visits made by Medicare beneficiaries by first estimating the number of visits made by patients age 65 and older using this database and then dividing this number by the U.S. Bureau of the Census estimates of the civilian noninstitutionalized population age 65 and older. To determine the major reasons for physician visits and the specific types of preventive services provided to Medicare beneficiaries in the fee-for-service program, we used visit data in this survey for patients age 65 and older who did not belong to a health maintenance organization and whose visits were not paid on a capitated basis. Tables 3 to 5 show the estimates and standard errors in data from the National Ambulatory Medical Care Survey 2000 on major reasons for physician visits and on the preventive diet counseling services provided during those visits. We also tested at the 95 percent confidence level the statistical significance of differences we observed between nonillness and other types of visits in the proportion of visits where preventive screening tests (e.g., cholesterol and blood tests) were provided. To estimate the proportion of Medicare beneficiaries who had health conditions that they were not previously aware of—specifically, high blood pressure or high cholesterol—we used data from both the interview and the physical examination portions of CDC's National Health and Nutrition Examination Survey (see app. III for methodology and results from this analysis).

To describe the preventive care approaches of Medicare + Choice plans, we consulted with national experts and officials from the American Association of Health Plans and chose five plans considered to have innovative preventive care programs. Together, these five plans serve more than 1.2 million Medicare beneficiaries in 15 states and the District of Columbia (see table 6). We interviewed officials from each plan and reviewed documents, including plan-provided studies or evaluations of their preventive services programs. We reviewed the scope and methodology of the studies done by some of the plans, but we did not independently verify the accuracy of the data.

To examine the alternatives for identifying and reducing health risks and CMS's efforts in exploring them, we reviewed available literature, including results of past demonstrations and congressionally mandated studies, and interviewed experts in the field, including those conducting studies and developing position papers for the Partnership for Prevention, a nonprofit organization funded by the Robert Wood Johnson Foundation. We also interviewed Department of Health and Human Services and CMS officials and reviewed documents on planned and present CMS demonstrations related to preventive services.

[Appendix II table comparing task force-recommended preventive services with Medicare coverage; notes to the table state that the costs of the laboratory test portion of these services are not subject to copayment or deductible and that the beneficiary is subject to a deductible, copayment, or both for physician services only.]

Conducted by the Centers for Disease Control and Prevention's (CDC) National Center for Health Statistics, the National Health and Nutrition Examination Survey (NHANES) is a nationwide population-based survey designed to estimate the health and nutrition of the noninstitutionalized U.S. civilian population.
Our analysis was based on data gathered during NHANES 1999–2000, which represent the most recent information available. This survey comprises two parts: an in-home interview and a health examination. During the in-home interview, participants are asked about their health status, disease history, and diet; during the health examination, participants receive a number of tests, including blood pressure readings and a blood test to determine total serum cholesterol. Details of the survey design, questionnaires, and examination components are available at http://www.cdc.gov/nchs/nhanes.htm.

For our analysis, we used the NHANES data described in table 7 to determine if participants age 65 and older had high blood pressure or high total serum cholesterol. We used the same criteria for these conditions as CDC and the National Heart, Lung, and Blood Institute use to estimate the conditions' prevalence. To determine whether the participants age 65 and older found by examination to have elevated measures of these health conditions were previously unaware of having them, we used patients' responses from the NHANES interview. During the interview, participants were asked if they had ever been told by a physician or health professional that they had certain conditions, including high blood pressure and high cholesterol. Tables 8 and 9 show the estimates and standard errors from 1999–2000 NHANES data for specific health conditions and level of awareness among participants age 65 and older. Estimated numbers, proportions, and standard errors were obtained using SUDAAN, a computer program for analyzing data from complex sample surveys, as suggested in the NHANES Analytic Guidelines.
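The NHANES-based estimates described above combine an examination-based classification (whether a participant's measured blood pressure or cholesterol exceeds a clinical cutoff) with an interview response (whether a health professional ever told the participant about the condition), weighted to the national population. The sketch below illustrates that logic in simplified form. The field names, the example cutoff of 140/90 mm Hg, and the plain weighting are assumptions for illustration; the actual analysis applied the criteria used by CDC and the National Heart, Lung, and Blood Institute and computed estimates and standard errors with SUDAAN to account for NHANES's complex survey design.

```python
# Simplified illustration of the "at risk but unaware" estimate.
# Field names, cutoffs, and weighting are assumptions for this sketch; the actual GAO
# analysis used CDC/NHLBI criteria and SUDAAN to account for the complex survey design.

def weighted_share(records, condition, numerator):
    """Weighted proportion of records meeting `numerator` among those meeting `condition`."""
    denom = sum(r["weight"] for r in records if condition(r))
    num = sum(r["weight"] for r in records if condition(r) and numerator(r))
    return num / denom if denom else 0.0

def has_high_bp(r):
    # Examination-based classification using an illustrative 140/90 mm Hg cutoff.
    return r["age"] >= 65 and (r["systolic"] >= 140 or r["diastolic"] >= 90)

def unaware(r):
    # Interview response: never told by a physician or other health professional.
    return not r["ever_told_high_bp"]

# Toy records standing in for NHANES participants age 65 and older (weights are hypothetical).
participants = [
    {"age": 70, "systolic": 150, "diastolic": 85, "ever_told_high_bp": False, "weight": 5200.0},
    {"age": 68, "systolic": 128, "diastolic": 78, "ever_told_high_bp": False, "weight": 4800.0},
    {"age": 81, "systolic": 145, "diastolic": 92, "ever_told_high_bp": True,  "weight": 6100.0},
]

share_at_risk = weighted_share(participants, lambda r: r["age"] >= 65, has_high_bp)
share_unaware = weighted_share(participants, has_high_bp, unaware)
print(f"Weighted share of 65+ with elevated readings: {share_at_risk:.0%}")
print(f"Weighted share of those unaware of the condition: {share_unaware:.0%}")
```

Multiplying such weighted shares by the estimated Medicare population yields national projections of the kind reported above, such as the estimated 6.6 million beneficiaries who may be unaware that they are at risk for high blood pressure.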
Medicare, the federal health program insuring almost 35 million beneficiaries age 65 and older, covers certain preventive services, such as flu shots and mammograms. Most beneficiaries receive care through Medicare's fee-for-service program, under which they generally receive these services as part of visits to the doctor for specific illnesses or conditions. Other beneficiaries receive services under Medicare's managed care program, called Medicare + Choice. GAO was asked to determine (1) the extent to which beneficiaries received recommended preventive services through existing visits, (2) whether approaches used by Medicare + Choice plans provide insight for improving delivery of preventive care services for fee-for-service beneficiaries, and (3) what the Centers for Medicare & Medicaid Services (CMS) is doing to explore suggested options for delivering preventive care to fee-for-service beneficiaries. GAO's work included analyzing data from four national health surveys and reviewing five Medicare + Choice plans considered to have innovative approaches to delivering preventive services. GAO also interviewed Department of Health and Human Services (HHS) and CMS officials and reviewed documents on CMS demonstrations related to preventive services. Most Medicare beneficiaries receive some preventive services through their visits to physicians, but relatively few receive the full range of preventive services available. Survey data showed, for example, that in 2000 about 30 percent of beneficiaries did not receive a flu shot, and 37 percent had never been vaccinated against pneumonia. Moreover, many Medicare beneficiaries are apparently unaware that they may have conditions that preventive services are meant to detect. For example, in a 1999-2000 nationally representative survey during which people received physical examinations, nearly one-third of those age 65 and older who were found to have high cholesterol measurements said they had not previously been told by a physician or other health professional that they had high cholesterol. Projected nationally, this percentage could represent 2.1 million people. No clear "best practice" approach to delivering preventive care stands out among the innovative Medicare + Choice plans GAO studied. All five plans identify health risks, provide feedback on risks to patients or their physicians, and follow up to reduce those risks. But their follow-up programs, approaches, and priorities differ, and little is known about the effectiveness of these efforts for the Medicare-age population. CMS has begun the development work to design a project evaluating the use of individual assessments of health risks, followed by counseling and other services, as a way to improve preventive care delivery. Another suggested approach--adding a routine physical examination benefit to Medicare's fee-for-service program--could provide more opportunities, but at increased cost and without guarantee that preventive services would actually be provided to Medicare beneficiaries.
DOD annually spends about $15 billion for depot maintenance work that includes repairing, overhauling, modifying, and upgrading aircraft, ships, tracked and wheeled vehicles, and other systems and equipment. It also includes limited manufacture of parts, technical support, modifications, testing, and reclamation as well as software maintenance. DOD estimates that about 60 percent of its expenditures for depot maintenance work is performed in its 24 maintenance depots and the remaining 40 percent in the private sector. We have reported that the public-private mix is closer to 50-50 when it includes interim contractor support services and public depot purchases of parts, supplies, and maintenance services from the private sector. Historically, public depots have served to provide a ready and controlled source of repair and maintenance. Reductions in military force structure and related weapon system procurement, changes in military operational requirements due to the end of the Cold War, and increased reliability, maintainability, and durability of military systems have decreased the need for depot-level maintenance support. Efforts to downsize and reshape DOD’s maintenance system have addressed depot efficiency and the workload mix between the public and private sectors. A key issue currently being debated within Congress and DOD is the extent to which the private sector should be relied on for meeting DOD’s requirements for depot-level maintenance. Congress, in the National Defense Authorization Act for Fiscal Year 1994, established the Commission on Roles and Missions of the Armed Forces (CORM) to (1) review the appropriateness of the current allocations of roles, missions, and functions among the armed forces; (2) evaluate and report on alternate allocations; and (3) make recommendations for changes in the current definition and distribution of those roles, missions, and functions. The Commission’s May 24, 1995, report, Directions for Defense, identified a number of commercial activities performed by DOD that could be performed by the private sector. Depot-level maintenance was one of these activities. The Commission concluded that privatizing such commercial activities through meaningful competition was the primary path to more efficient support. It noted that such competition typically lowers costs by 20 percent. Based on its conclusions, the Commission recommended that DOD transition to a depot maintenance system relying on the private sector by (1) directing support of all new systems to private contractors, (2) establishing a time-phased plan to privatize essentially all existing depot-level maintenance, and (3) creating an office under the Assistant Secretary of Defense (Economic Security) to oversee privatization of depots. In his August 24, 1995, letter to Congress forwarding the Commission report, the Secretary of Defense agreed with the Commission’s recommendations but expressed a need for DOD to retain a limited organic core capability to meet essential wartime surge demands, promote competition, and sustain institutional expertise. DOD’s January 1996 report, Plan for Increasing Depot Maintenance Privatization and Outsourcing, provides for substantially increasing reliance on the private sector for depot maintenance. The CORM, in support of its depot privatization savings assumption, cites reported savings from public-private competitions under OMB Circular A-76.
These competitions were for various non-depot maintenance commercial activities, in which there was generally a highly competitive private market. Projected savings were greater for competitions having larger numbers of private sector competitors. The public sector won about half of these competitions. Our analysis indicates that private sector competition for depot maintenance may be much less than found in the A-76 activities. The data also suggests that little or no savings would result from privatizing depot maintenance in the absence of competition. The CORM report cites two studies supporting its savings assumption—one by OMB and one by the Center for Naval Analysis (CNA). Both reports are evaluations of numerous public-private competitions for commercial activities under OMB Circular A-76 guidelines. The commercial activities included base operating support functions such as family housing, real property maintenance, civilian personnel administration, food service, security, and other support services. These activities are characterized by highly competitive markets with low-skill labor, little capital investment, and simple, routine and repetitive tasks that can readily be identified in a contract statement-of-work. None of the competitions studied were for depot maintenance, which generally has dissimilar characteristics. Both reports show that substantial savings occurred when competition was introduced into the noncompetitive environment. However, the reported savings are based on the difference between the precompetition cost and the price proposed and do not reflect subsequent contract cost overruns, modifications, or add-ons. Based on a limited number of audits, projected A-76 privatization savings were often reduced or eliminated as a result of subsequent contract cost growth. The OMB study of commercial activities competed from 1981 to 1988 cited average savings of 30 percent from original government cost with an average 20-percent savings when the government won the competition and 35 percent when the private sector won. About 40 percent of competitions were won by government, 60 percent by the private sector. The CNA study cites a previous CNA review of the Navy’s Commercial Activities Program in which both the public and private sectors each won about half the roughly 1,000 competitions reviewed. The offers where the public sector won were roughly 20 percent lower than the precompetition cost baseline, whereas winning offers from private firms averaged 40 percent below earlier costs. The report noted that larger private sector savings occurred when activities were performed predominately by military personnel. Nearly all depot maintenance work is performed by DOD civilians. In 29 percent of the cost studies reviewed, there were no cost savings. These studies did not specifically address outsourcing to the private sector when the public sector did not participate in the competition. Since the government’s costs were lower in about half the cases, these savings would not have been realized without public competition. Further, in limited situations where audits have been conducted, projected savings have not been verified. For example, a 1989 Army Audit Agency report summarizing the results of prior commercial activities reviews stated that for 10 functions converted to contractor performance, only $9.9 million of $22 million in projected savings were realized. 
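The OMB percentages cited above are internally consistent: weighting the reported savings rates by the reported public and private win shares reproduces the overall average. A minimal check, using only the figures quoted from the study:

```python
# Figures cited from the OMB study of A-76 competitions, 1981-1988
gov_win_share, private_win_share = 0.40, 0.60
gov_savings_rate, private_savings_rate = 0.20, 0.35

overall = gov_win_share * gov_savings_rate + private_win_share * private_savings_rate
print(f"Implied overall savings: {overall:.0%}")  # about 29%, consistent with the cited 30% average
```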
Performance work statement deficiencies, mandatory wage rate increases received by contractor personnel, and higher-than-estimated contract administration costs accounted for about 90 percent of the reduction in estimated savings. Our 1990 report on OMB Circular A-76 savings projections found (1) costs of conducting the competitions were not considered in estimating savings, (2) savings figures were projections and were not based on actual experience, (3) DOD lacked information regarding modifications made after the cost study, (4) DOD’s A-76 database contained inaccuracies and incomplete savings data, and (5) an error in design resulted in a computer program that miscalculated program savings. A July 1995 Congressional Budget Office report entitled Public and Private Roles in Maintaining Military Equipment at the Depot Level stated that contracting out was most likely to outperform public depots if competition existed among private firms. The report noted, however, that without competition, the private sector’s ability to provide service for the least cost could be reduced and the risk of poor-quality or nonresponsive support could increase. The CORM report also states that savings occur when meaningful competition is obtained in a previously sole-source area and public-private competitions are preferable to noncompetitive awards to the private sector. The CORM recognized that privatizing essentially all depot maintenance would require a time phased approach. Under current conditions, privatizing essentially all depot workloads (1) would not likely achieve expected savings and could prove more costly, (2) could adversely impact readiness, and (3) would be difficult if not impossible under existing laws. These conditions are discussed below. Limited competition and excess depot capacity could negate expected savings. The CORM assumed depot workload privatization savings would result from private sector competition. We found that much of the depot work contracted to the private sector is awarded noncompetitively and that obtaining competition for remaining non-core depot workloads may be difficult and costly. In addition, privatizing depot workloads without reducing excess depot capacity could significantly increase the cost of work performed by the depots. The CORM’s recommendation to privatize essentially all depot maintenance assumed that meaningful competition would be obtained for most of the work. The Commission generally defined meaningful competition as that generated by a competitive market, including significant numbers of both buyers and sellers. Our review of selected DOD depot maintenance contracts found that a large portion of the awards were not made under these conditions. To determine the extent of competition in awarding depot maintenance contracts, we reviewed 240 such contracts totaling $4.3 billion at 12 DOD buying activities. We selected high-dollar value contracts from a total of 8,452 open 1995 depot-level maintenance contracts that were valued at $7.3 billion. As shown in table 1, 182 of the 240 contracts—76 percent—were awarded on a sole-source basis. These contracts accounted for 45 percent of the total dollar value. In nine other contracts accounting for about 4 percent of the total, competition was limited to only two offerors. The remaining 49 contracts were classified as awarded through full and open competition. These awards accounted for 51 percent of the total dollar value. However, some had only limited responses. 
For example, the number of offerors was 2 in each of 5 contracts totaling $525.8 million—24 percent of the total award value for the 49 competed contracts. Original equipment manufacturers (OEMs) were awarded 158 of the 182 noncompetitive contracts. The remaining 24 were awarded on a sole-source basis for reasons such as peculiar requirements, national emergencies, and international agreements. Where competition was limited, the OEMs won eight of the nine workloads. The OEMs also won 9 of the 49 contracts that DOD classified as awarded pursuant to full and open competition. Table 2 shows the number of offers received for the contracts classified as awarded pursuant to full and open competition. The buying activities awarded the maintenance contracts to 71 different contractors, but 13 of these contractors had received workloads valued at $3.3 billion—76 percent of the total amount awarded. Table 3 shows the distribution of the workload to the 71 contractors. Although DOD plans to privatize non-core workloads currently in the public depots, it has not assessed the extent to which such workloads will attract private sector competition. Factors that resulted in noncompetitive awards for much of the depot work currently performed by the private sector may apply to much of the work currently performed by public depots. The types of existing public workloads where private sector competition may be limited include: (1) workloads where data rights necessary for competition have not been acquired, (2) small workloads that do not justify large private sector capital investment costs, (3) workloads for older and/or highly specialized systems, (4) workloads with erratic requirements where DOD cannot guarantee a stable workload, and (5) workloads that would be costly to move from one source of repair to another. These factors could further limit cost-effective privatization of existing workloads. For example, our review of 95 non-ship depot maintenance public-private competitions found that 22 did not receive any private sector offers and 33 had only 1. DOD may have to acquire the technical data rights to compete many of its weapon systems. The most-often-cited justification for the 182 sole-source awards was that competition was not possible because DOD did not own the technical data rights for the items to be repaired. Command officials stated that DOD will have to make costly investments in order to promote full and open competition for many of its weapon systems. For example, in its justification for less than full and open competition for the repair and testing of the AN/URQ-33 Joint Tactical Information Distribution System, the Warner Robins Air Logistics Center noted that the technical data was not procured from the original equipment manufacturer and estimated that $1 million and a minimum of 6 months would be required to start up a new contractor. Similarly, the Army Missile Command’s justification for a sole-source maintenance and repair award to the original equipment manufacturer for the OH-58D Kiowa Warrior helicopter noted that the program manager had not procured the technical data package due to funding and cost constraints. The command estimated that technical data suitable for full and open competition would cost about $18 million. The difficulty of accurately describing or quantifying depot maintenance requirements may impact privatization savings. Under fixed-price contracts, more of the risks are incurred by the contractor.
If costs are greater than expected, then the contractor incurs the loss. The government incurs more risk under a cost reimbursable contract. Under such contracts, the government generally reimburses the contractor for the costs incurred. Accordingly, the contractor’s incentive to maximize efficiency and minimize cost is generally greater under a fixed-price contract. Cost reimbursable contracts are often used when contract requirements cannot be adequately described and/or costs accurately estimated. Such contracts are used for many depot maintenance workloads. Our analysis of the 240 contracts showed that the commands used fixed-price contracts in 151 (or 63 percent) of the 240 contracts, cost-reimbursable contracts in 61 contracts, and a combination of the 2 types in 28 contracts. Table 4 shows the types of contracts the commands were using to acquire depot-level maintenance. The buying activities said they used fixed pricing in the 151 contracts because adequate repair histories were available to establish a price range for the maintenance work. In using 61 cost-reimbursement type contracts, DOD officials stated that the maintenance requirements could not be predetermined for the contract period or that no adequate repair history existed to establish reasonable price ranges. Non-core workloads that may be good candidates for privatization—that is, a competitive private market exists—may not be cost-effective to privatize if doing so results in increased excess capacity and other inefficiencies in the public depots. Given the requirement to preserve public depot capabilities, DOD must manage depot maintenance workloads to assure efficient operations. In some cases where privatizing a particular workload could produce some level of savings, the savings could be more than offset by creating inefficiencies in the remaining public depots. For example, the Air Force’s Oklahoma City Air Logistics Center currently has about 43 percent excess capacity. Had DOD decided to reallocate the engine workload from the closing San Antonio Center to Oklahoma City instead of privatizing the workload in place, the labor hour rate for all of the Oklahoma City Center’s work would have been reduced by $10 an hour. Such a reduction could save about $70 million a year. Our analysis of depot maintenance work currently contracted with the private sector found that contractors, for the most part, were responsive to DOD’s needs in terms of meeting contractual requirements for delivery and performance. However, service officials stated that historically, the flexibility and responsiveness of DOD depots had significantly influenced decisions to select a DOD depot rather than a contractor for most critical military systems. The military services have considered the readiness and sustainability risks of privatizing existing depot workloads and determined that the risks for privatizing most workloads were too high. In the past, these assessments provided the primary justification for maintaining a large organic depot maintenance core capability. DOD is implementing a new depot maintenance policy that is likely to significantly increase the depot maintenance workloads performed by the private sector. Based on the policy preference for contractor maintenance, DOD is now conducting risk assessments on workloads previously designated as core. In many cases, the services are redesignating mission-essential core workloads as non-core.
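The roughly $70 million annual figure cited above for the Oklahoma City Center follows directly from the size of the rate reduction and the center’s annual direct labor hours: a $10-per-hour reduction saves about $70 million only if the center sells on the order of 7 million direct labor hours a year. A minimal sketch of that relationship (the hours figure is inferred from the numbers above, not taken from Air Force data):

```python
rate_reduction_per_hour = 10        # dollars saved on each direct labor hour
reported_annual_savings = 70e6      # potential savings per year cited above

implied_hours = reported_annual_savings / rate_reduction_per_hour
print(f"Implied annual direct labor hours: {implied_hours:,.0f}")  # about 7 million hours
```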
DOD’s March 1996 depot workload report to Congress, which reflects its latest “core” workload determinations, projects that the fiscal year 1997 depot workload mix of about 60 percent public and 40 percent private will shift to about a 50/50 mix by fiscal year 2001. However, these projections were not developed using DOD’s new risk assessment process. We recently reported that DOD’s ongoing risk assessment process will likely result in an even greater shift of depot maintenance workload to the private sector. As required by the fiscal year 1996 Defense Authorization Act, we analyzed and reported on DOD’s March 1996 depot workload report. We noted that DOD’s risk assessment process is based to a large extent on subjective judgments. Further, DOD’s methodology for assessing workload privatization risks does not include guidance or criteria for the services to use in making such assessments. As a result, the services’ individual risk assessments may not be consistent within the services or uniform among the services. The CORM report stated that DOD core depot requirements exceed the real needs of the national security strategy and that with proper oversight private contractors could provide essentially all of the depot-level maintenance services now conducted in government facilities. To evaluate contractor support and responsiveness for the workloads currently in the private sector, we analyzed contract modifications to 195 of the 240 contracts reviewed. We found indications of contractor performance problems in only four of these contracts. These involved extensions to the period of performance due to the contractors not meeting the required delivery dates. However, DOD materiel managers noted that DOD depots provide greater flexibility than contractors and can more quickly respond to nonprogrammed, quick-turnaround requirements. Further, DOD contracting personnel stated that contract files may or may not provide a reasonable assessment of readiness impacts. For example, these files would provide no indication of the impacts of cost growth on DOD’s ability to procure required depot maintenance services. In recommending that essentially all depot maintenance work be privatized, the Commission recognized that privatization could be limited or precluded by a collection of laws, regulations, and historic practices developed to protect the government’s depot maintenance capability. Among the barriers cited were 10 U.S.C. 2469, which requires public-private competitions before any workload over $3 million can be moved to the private sector from a public depot, and 10 U.S.C. 2466, which sets the amount of depot-level maintenance workload that must be performed in public depots at not less than 60 percent, that is, the 60/40 rule. Since the concept of core requirements centers around the determination of acceptable levels of risks, the size and extent of core capability and requirements can become somewhat subjective. Accordingly, the amount of depot work subject to privatization may be driven in part by the 60/40 rule. DOD is seeking repeal of these and other laws in order to fully implement its depot privatization plans.
For example, in May 1996, DOD proposed a provision that would allow the Secretary of Defense to acquire by contract from the private sector or any nonfederal government entities those commercial or industrial type supplies and services necessary or beneficial to the accomplishment of DOD’s authorized functions, notwithstanding any provision of title 10 or any statute authorizing appropriation for or making DOD appropriations. This proposal was not supported by the DOD authorization committees during deliberations over the fiscal year 1997 DOD authorization bill. The CORM recognized that there are instances where establishing competition within the private sector would be too costly. In these cases, the Commission stated that public-private competition, however imperfect, was generally preferable to noncompetitive contracts. The CORM assumed, however, that there were only a few cases in which such competitions would be required. We found that requirements for and benefits of such competitions may be greater than assumed. As noted earlier in this report, most depot workloads currently contracted to the private sector are noncompetitive and obtaining private sector competition for those workloads currently in the public depots could prove difficult and costly. In examining DOD’s experience with public-private competition for depot-level maintenance, we found that the competitions generally resulted in savings, but precisely quantifying the savings is difficult because many other variables affect maintenance costs. We also found that some workloads are not well suited for competing—either private-private or public-private. DOD’s experience with public-private competition for depot-level maintenance began in 1985 when Congress authorized the Navy to compete shipyard workloads. In 1991, with DOD’s push to promote efficiency in depot maintenance operations and the Navy’s assertion that competition encouraged public shipyards to become more efficient, Congress permitted the Air Force and the Army to conduct public-private competitions for depot-level maintenance workloads. DOD had planned to use the program for allocating maintenance workloads to the most cost-efficient providers and to save $1.7 billion as part of its strategy to achieve an overall $6.3 billion reduction in depot maintenance costs from fiscal years 1991 to 1997. However, DOD suspended the program in May 1994 and reported to Congress in February 1995 that competition could not be reinstituted until its cost accounting and data systems permitted actual cost accounting for specific workloads. During our review of the Navy’s public-private competition program for aviation maintenance, Navy officials stated that such competitions had been beneficial to the government and resulted in maintenance savings for the involved workloads. They stated that competitions for workloads that had previously been assigned to Navy depots resulted in the Navy depots streamlining overhead, improving work processes, reducing labor and material requirements, and instituting other cost-saving initiatives in order to submit the lowest bids and avoid job losses. For example, the public-private competition for F-14 aircraft airframe overhauls—a competition won by a Navy depot—resulted in the depot reducing the average cost per overhaul from $1.69 million the year preceding the competition to $1.29 million, in inflation adjusted dollars, the year following the competition, a 24-percent decrease. A number of factors have limited DOD public-private competitions. 
They include: (1) private sector concerns regarding the fairness of competitions; (2) the time and cost of contract solicitation, award, and administration; (3) declining depot requirements and the inability to guarantee stable workloads; (4) lack of government-owned technical data packages; and (5) limited sources of repair and low-dollar value workloads that generate little or no interest from the private sector. An April 1994 DOD task force report on depot-level activities identified several concerns with continuing public-private competitions. For example, efficiencies achieved would be less likely in the future because the costs of conducting competitions were high and the payoffs would be progressively smaller as workloads were recompeted. Critics of public-private competitions charge that such competitions are inherently unfair because DOD’s accounting and financial management systems do not capture and reflect all the costs. In February 1995, DOD reported to the House and Senate Appropriations Committees that its automated financial management systems and databases did not provide an accurate basis for determining the actual cost of specific competition workloads. To remedy this situation, DOD was developing policies, procedures, and automated processes that would permit actual cost accounting for specific workloads accomplished in public depots. Our January 1996 report to the Ranking Minority Member, Subcommittee on Defense, Senate Committee on Appropriations, summarized many actions DOD had taken to improve public-private competitions. Among these actions were (1) the development of a cost comparability handbook that, among other things, identified adjustments that should be made to public depots’ offers as a result of differences in the military services’ accounting systems and (2) having the Defense Contract Audit Agency certify that successful offers included comparable estimates of all direct and indirect costs. We noted that the incentive to continue with some of the initiatives was lost after DOD terminated public-private competitions. We also identified additional actions that DOD could take to further improve competitions, for example, providing the Defense Contract Audit Agency the technical support needed to properly evaluate depot offers and conducting an incurred cost audit to assess whether depots are able to perform work as offered. Our report also summarized the Navy’s suggestions for addressing concerns regarding public depot cost overruns and administration costs resulting from competitions. These included establishing fixed prices for the competed work based on offer amounts, executing the work like normal workload using existing control systems with no separate contract administration, and assessing penalties for cost overruns to make the depot less competitive in future competitions. DOD officials declined to comment on this report. They noted that the draft report we provided for comment included no recommendations and did not require a response. Further, the report addresses assumptions of the Commission on Roles and Missions of the Armed Forces, a group established by Congress that no longer exists. While the Commission on Roles and Missions was not a DOD entity, in forwarding the Commission’s report to Congress, the Secretary of Defense stated that DOD agreed with the Commission’s recommendation to outsource a significant portion of DOD’s depot maintenance work.
Further, DOD’s January 1996 report on outsourcing depot maintenance cited the Commission’s savings projections as its rationale for its depot privatization initiative. Appendix I sets forth our scope and methodology. We will continue evaluating DOD’s actions on its plans to privatize depot-level maintenance to complete our response to issues raised by the National Security Committee. We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Director of the Office of Management and Budget; and interested congressional committees. Copies will be made available to others upon request. If you or your staff have any questions concerning this report, please contact me on (202) 512-8412. Major contributors to this report are listed in appendix II. The Chairman of the House Committee on National Security asked us to comment on the May 1995 report by the Commission on Roles and Missions of the Armed Forces that recommended the Department of Defense (DOD) privatize its depot-level maintenance activities. The Chairman requested that we review a number of issues related to the Commission’s report; this report provides information on the Commission’s assumptions that privatization could reduce maintenance costs by 20 percent and the potential impact of privatization on military readiness and sustainability. It also identifies some areas DOD may need to improve if it moves toward total privatization of depot-level maintenance. To evaluate the Commission’s assumptions about cost savings from privatization and the impact that it might have on readiness and sustainability, we reviewed its report, discussed the assumptions with former staff members of the Commission, and reviewed supporting data that the Commission had maintained. We made extensive use of our prior work and the work of others on issues related to DOD’s depot-level maintenance operations to determine how consistent the Commission’s work was with prior findings, conclusions, and recommendations. In addition, we analyzed selected depot-level contracts to evaluate (1) the extent to which DOD used competitive procedures in awarding the contracts and (2) how well contractor performance responded to DOD’s depot-level maintenance needs. We performed our review at the following: Four Army buying activities: the Aviation and Troop Support Command (ATCOM), St. Louis, Missouri; the Communications-Electronics Command (CECOM), Fort Monmouth, New Jersey; the Missile Command (MICOM), Redstone Arsenal, Alabama; and the Tank-Automotive and Armaments Command (TACOM), Warren, Michigan. Five Air Force buying activities: Ogden Air Logistics Center (OO-ALC), Hill Air Force Base, Utah; Oklahoma City Air Logistics Center (OC-ALC), Tinker Air Force Base, Oklahoma; Sacramento Air Logistics Center (SM-ALC), McClellan Air Force Base, California; San Antonio Air Logistics Center (SA-ALC), Kelly Air Force Base, Texas; and Warner Robins Air Logistics Center (WR-ALC), Robins Air Force Base, Georgia. Three Navy buying activities: the Naval Inventory Control Point (NICP), Mechanicsburg, Pennsylvania; Naval Inventory Control Point (NICP), Philadelphia, Pennsylvania; and Naval Air Systems Command (NAVAIR), Arlington, Virginia. DOD maintains a database on all contract awards that contains data on awards made by competition and awards that are made by other than competition.
We did not use this database to evaluate DOD’s use of competitive procedures for depot-level maintenance because a test at one Army command showed coding errors and difficulty in identifying maintenance contracts. Therefore, we asked each buying activity to identify all depot-level maintenance contracts that were open at a given point during 1995 for use in evaluating the extent to which they had used competitive procedures and contractor performance. Each buying activity provided a list of contracts from its database. We did not attempt to verify the accuracy of the buying activities’ databases. The data contained a large number of small contracts. For timeliness, we chose to cover dollar value rather than numbers of contracts. We arranged the contracts by dollar value from highest to lowest and selected high-dollar value contracts that would provide us at least 50-percent coverage of the total dollar value awarded by each service. Table I.1 presents the universe of contracts identified and our sample size. At the buying activities we visited, we reviewed the files of selected contracts to identify cost, schedule, and performance issues. We also discussed the contracting process and contractor performance with contracting officers, negotiators, and specialists. To identify contract types and contracting methods suitable for depot-level maintenance, we reviewed the Federal Acquisition Regulation and DOD supplements and talked to personnel from the Defense Contract Audit Agency and Defense Contract Management Command. We conducted our review between February 1995 and April 1996 in accordance with generally accepted government auditing standards. Julia C. Denman Karl J. Gustafson M. Glenn Knoepfle Frank T. Lawson John M. Ortiz Enemencio Sanchez Jacqueline E. Snead Edward A. Waytel James F. Wiggins Bobby R. Worrell Cleofas Zapata, Jr.
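The contract-selection rule described in the scope and methodology above (sort each buying activity’s contracts by dollar value and take the largest until at least half of the total dollar value is covered) can be expressed compactly. A minimal sketch, using hypothetical dollar values rather than the actual contract lists:

```python
def select_for_dollar_coverage(dollar_values, coverage=0.50):
    """Return the largest contract values whose combined total reaches the coverage share."""
    total = sum(dollar_values)
    selected, running = [], 0.0
    for value in sorted(dollar_values, reverse=True):
        selected.append(value)
        running += value
        if running >= coverage * total:
            break
    return selected

# Hypothetical contract values for one buying activity, in millions of dollars
values = [220, 180, 95, 60, 40, 25, 12, 8, 5, 3]
chosen = select_for_dollar_coverage(values)
print(f"{len(chosen)} of {len(values)} contracts cover {sum(chosen) / sum(values):.0%} of dollar value")
```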
Pursuant to a congressional request, GAO examined the Commission on Roles and Missions' (CORM) privatization assumptions to determine whether privatization would adversely affect military readiness and sustainability. GAO found that: (1) the CORM's depot privatization savings and readiness assumptions are based on conditions that do not currently exist for many depot workloads; (2) privatizing essentially all depot maintenance under current conditions would not likely achieve expected savings and, according to the military services, would result in unacceptable readiness and sustainability risks; (3) the extent to which DOD's long-term privatization plans and market forces will effectively create more favorable conditions for outsourcing is uncertain; (4) the CORM assumed a highly competitive and capable private market exists or would develop for most depot workloads; (5) however, GAO found that most of the depot workloads contracted to the private sector are awarded noncompetitively, mostly to the original equipment manufacturer; (6) additionally, a number of factors would likely limit private sector competition for many workloads currently in the public depots; (7) the CORM data does not support its depot privatization savings assumption; (8) the CORM's assumption is based primarily on reported savings from public-private competitions for commercial activities under Office of Management and Budget (OMB) Circular A-76, but these commercial activities were generally dissimilar to depot maintenance activities because they involved relatively simple, routine, and repetitive tasks that did not generally require large capital investment or highly skilled and trained personnel; (9) GAO's analysis of depot maintenance workloads currently contracted to the private sector found, for the most part, that the contractors were responsive to contract requirements for delivery and performance; (10) however, DOD officials noted that DOD depots provide greater flexibility than contractors and can more quickly respond to nonprogrammed, quick-turnaround requirements; (11) the military services periodically assess the readiness and sustainability risks of privatizing depot workloads, and if the risks are determined to be too high, the workloads are retained in the public depots; (12) the CORM assumed that public-private competitions would only be used in the absence of private sector competition and would be limited to only a few cases; (13) public-private depot maintenance competitions have resulted in savings and benefits and can provide a cost-effective way of making depot workload allocation decisions for certain workloads; and (14) the beneficial use of such competitions could have significantly more applicability than the Commission assumed.
The Cayman Islands is a United Kingdom Overseas Territory located in the Caribbean Sea south of Cuba and northwest of Jamaica, with a total land area approximately 1.5 times the size of Washington, D.C., and a population of 47,862. While geographically small, the Cayman Islands is a major offshore financial center (OFC) with no direct taxes that attracts a high volume of U.S.-related financial activity, often involving institutions rather than individuals. Although not easily defined, OFCs are generally described as jurisdictions that have a high level of nonresident financial activity, and may have characteristics including low or no taxes, light and flexible regulation, and a high level of client confidentiality. The Cayman Islands reports that in 2008 it had 277 licensed banks, over 80,000 registered companies, more than 9,000 registered investment funds, and 760 captive insurance companies. According to the Department of the Treasury, U.S. investors held approximately $376 billion in Cayman-issued securities at the end of 2006, making it the fifth most common location for U.S. investment in foreign securities. As of September 2007, U.S. banking liabilities to the Cayman Islands were the highest of any foreign jurisdiction, at nearly $1.5 trillion. As of June 2007, U.S. banking claims on the Cayman Islands stood at $940 billion, second only to the United Kingdom. The international law firm of Maples and Calder, with its associated businesses, Maples Corporate Services Limited and Maples Finance Limited, is the sole occupant of Ugland House. Its business is to facilitate Cayman Islands-based international financial and commercial activity for a clientele of primarily international financial institutions, institutional investors, and corporations. Similar to corporate service providers in the United States, Maples Corporate Services Limited provides registered-office services, using Ugland House as a registered address, to entities it establishes. Registered-office services include activities such as maintenance of certain entity records and filing of statutory forms, resolutions, notices, returns, and fees. Cayman Islands law requires company-service providers like Maples and Calder to adhere to specific Anti-Money Laundering (AML) and Know-Your-Customer (KYC) requirements. For example, they must verify and keep records on the beneficial owners of entities to which they provide services, the purpose of the entities, and the sources of the funds involved. Very few Ugland House registered entities have a significant physical presence in the Cayman Islands or carry out business in the Cayman Islands. According to Maples and Calder partners, the persons establishing these entities are typically referred to Maples by counsel from outside the Cayman Islands, fund managers, and investment banks. As of March 2008, the Cayman Islands Registrar reported that 18,857 entities were registered at the Ugland House address. Approximately 96 percent of these entities were classified as exempted entities under Cayman Islands law, and were thus generally prohibited from carrying out domestic business within the Cayman Islands. Maples and Calder senior partners told us that approximately 5 percent of the entities registered at Ugland House were wholly owned by U.S. persons, while 40 to 50 percent were related to the U.S. in that they had a billing address in the United States. A U.S. billing address does not necessarily imply ownership or control. According to the partners, U.S.
persons associated with Ugland House registered entities are often participants in investment and structured-finance activities, including those related to hedge funds and securitization. Entities associated with these activities are not generally directly owned or controlled. For instance, investment-fund entities are often established as partnerships and are essentially owned by the fund’s investors. Structured-finance entities are not typically carried on a company’s balance sheet, and ownership can be through a party other than the person directing the establishment of the entity, such as a charitable trust, or spread across many noteholders or investors, such as in deals involving securitization. The entities created by Maples and Calder that are directly owned or controlled include corporate subsidiaries, such as those used by multinational corporations to conduct international business. U.S. persons who conduct financial activity in the Cayman Islands commonly do so to gain business advantages, including tax advantages under U.S. law. Although such activity is typically legal, some persons have engaged in activity in the Cayman Islands, as in other jurisdictions, in an attempt to avoid detection and prosecution of illegal activity by U.S. authorities. The Cayman Islands may attract U.S.-related financial activity because of characteristics including its reputation for stability and compliance with international standards, its business-friendly regulatory environment, and its prominence as an international financial center. For instance, because the Cayman Islands’ legal and regulatory system is generally regarded as stable and compliant with international standards, U.S. persons looking for a safe jurisdiction in which to place funds and assets may choose to carry out financial transactions there. Additionally, establishing a Cayman Islands entity can be relatively inexpensive—an exempted company can be created for less than $600, not taking into account service-provider fees. Further, U.S. persons may also be attracted to the Cayman Islands because it is proximate to the United States, operates in the same time zone as New York, and is English speaking. Another frequent reason for doing business in the Cayman Islands is to obtain tax advantages, such as through reduction or deferral of U.S. taxes. For instance, U.S. tax-exempt entities, such as university endowments and pension funds, may invest in hedge funds organized in the Cayman Islands in order to avoid the unrelated business income tax (UBIT). The investment income of U.S. tax-exempts may be subject to UBIT if earned by an investment vehicle organized as a U.S. partnership, a formation common among U.S.-based hedge funds. However, tax-exempts that invest in hedge funds organized as foreign corporations in jurisdictions like the Cayman Islands can be paid in dividends, which are not subject to UBIT. Additionally, some U.S. persons may use Cayman Islands entities to defer U.S. taxes. For example, a U.S.-based multinational business may create a Cayman Islands subsidiary to hold foreign earnings, which are not generally taxed in the United States unless or until repatriated. Because the Cayman Islands, like some other OFCs, has no direct taxes, Cayman subsidiaries do not incur additional taxes owed to the Cayman Islands government. One indication of the extent to which U.S. companies use Cayman entities to defer taxes is their reaction to a recent tax law.
In 2004, Congress approved a temporary dividends received deduction for certain earnings of foreign subsidiaries of U.S. companies repatriated for a limited time. Approximately 5.5 percent of the nearly $362 billion repatriated between 2004 and 2006 came from Cayman Islands controlled foreign corporations. Lastly, as with other offshore jurisdictions, some U.S. persons may establish entities in the Cayman Islands to illegally evade taxes or avoid detection and prosecution of illegal activities. The full extent of illegal offshore financial activity is unknown, but risk factors include limited transparency related to foreign transactions and difficulties faced by U.S. regulators and the courts in successfully prosecuting foreign criminal activity. Voluntary compliance with U.S. tax obligations is substantially lower when income is not subject to withholding or third-party-reporting requirements. Because U.S.-related financial activity carried out in foreign jurisdictions is not subject to these requirements in many cases, persons who intend to evade U.S. taxes are better able to avoid detection. Persons intent on illegally evading U.S. taxes may be more likely to carry out financial activity in jurisdictions with no direct taxes, such as the Cayman Islands, because income associated with that activity will not be taxed within those jurisdictions. Individual U.S. taxpayers and corporations are generally required to self-report their taxable income to the Internal Revenue Service (IRS). Similarly, publicly owned corporations traded on U.S. markets are required to file annual or quarterly statements with the Securities and Exchange Commission (SEC). When an individual or corporation conducts business in the Cayman Islands, there is often no third-party reporting of transactions, so disclosures to IRS and U.S. regulators are dependent on the accuracy and completeness of the self-disclosure. Cayman Islands financial institutions are often not required to file reports with IRS concerning U.S. taxpayers. This makes it more likely that there would be inaccurate reporting by U.S. taxpayers on their annual tax returns and SEC-required filings. In addition to the information that both IRS and SEC receive from filers of annual or quarterly reports, the U.S. government also has formal information-sharing mechanisms by which it can receive information from foreign governments and financial institutions. In November of 2001, the United States signed a Tax Information Exchange Agreement (TIEA) with the government of the United Kingdom with regard to the Cayman Islands. The TIEA provides a process for IRS to request specific information related to taxpayers. IRS sends TIEA requests to the Cayman Islands based on requests from inside the agency. In addition to the TIEA, the U.S. government and the Cayman Islands also entered into a Mutual Legal Assistance Treaty (MLAT) in 1986. The MLAT enables activities such as extraditions, searches and seizures, transfer of accused persons, and general criminal information exchange, including in relation to specified tax matters. Since the TIEA began to go into effect, IRS has made a small number of requests for information to the Cayman Islands. An IRS official told us that those requests have been for either bank records of taxpayers or for ownership records of corporations. The IRS official also told us that the Cayman Islands government has provided the requested information in a timely manner for all TIEA requests.
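In dollar terms, the repatriation share cited above works out to roughly $20 billion; a one-line check using only the two figures quoted:

```python
total_repatriated = 362e9   # approximate total repatriated under the 2004 provision, 2004-2006
cayman_share = 0.055        # share attributed to Cayman Islands controlled foreign corporations
print(f"Implied Cayman-related repatriation: ${total_repatriated * cayman_share / 1e9:.0f} billion")  # about $20 billion
```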
The Department of Justice told us that, from the time the MLAT went into effect through the end of 2007, the U.S. government made over 200 requests for information regarding criminal cases to the Cayman Islands. Some financial intelligence information on U.S. persons’ Cayman activities is available to U.S. regulators. The U.S. government’s financial intelligence unit, FinCEN, works to gather information about suspected financial crimes both offshore and domestic. As part of its research and analysis, FinCEN can make requests of its counterpart in the Cayman Islands, the Cayman Islands Financial Reporting Authority (CAYFIN). CAYFIN can and does make requests to FinCEN as well. FinCEN and CAYFIN routinely share suspicious activity information. In fiscal year 2007 CAYFIN made 25 suspicious activity information requests to FinCEN to follow up on potential new as well as existing Cayman Islands-generated suspicious activity reports (SARs), while FinCEN made 6 requests to CAYFIN. According to CAYFIN, financial institutions primarily filed suspicious activity reports on U.S. persons for suspicion of fraud-related offenses. Other offenses leading to the filing of suspicious activity reports included drug trafficking, money laundering, and securities fraud, which mostly consisted of insider trading. In addition to the formal information sharing codified into law between the U.S. government and Cayman Islands government and financial institutions represented by TIEA and MLAT requests and SARs, Cayman Islands officials reported sharing with and receiving information from federal agencies, state regulators, and financial institutions. To address the challenges posed by offshore illegal activity, IRS has targeted abusive transactions in areas related to transfer pricing, hedge funds, offshore credit cards, and promoters of offshore shelters. IRS officials said that some abusive transactions identified through these initiatives involved Cayman Islands entities, although the exact extent of this involvement was unclear because IRS does not maintain jurisdiction-specific statistics regarding abusive transactions. While the full extent of Cayman involvement in offshore illegal activity is unclear, U.S. officials were able to point to specific criminal investigations and prosecutions involving the Cayman Islands. Over the past five years, IRS field agents have requested information regarding suspected criminal activity by U.S. persons in 45 instances pertaining to taxpayers or subjects in the Cayman Islands. We analyzed 21 criminal and civil cases to identify common characteristics of legal violations related to the Cayman Islands. Among these cases, the large majority involved individuals, small businesses, and promoters, rather than large multinational corporations. While they were most frequently related to tax evasion, other cases involved securities fraud, money laundering, and various other types of fraud. In most instances, Cayman Islands bank accounts had been used, and several cases involved Cayman Islands companies or credit-card accounts. IRS and Department of Justice (DOJ) officials stated that particular aspects of offshore activity present challenges related to oversight and enforcement. Specifically, these challenges include lack of jurisdictional authority to pursue information, difficulty in identifying beneficial owners due to the complexity of offshore financial transactions and relationships among entities, and lengthy processes involved with completing offshore examinations.
Despite these challenges, U.S. officials consistently report that cooperation by the Cayman Islands government in enforcement matters has been good. Further, both the International Monetary Fund (IMF) and the Caribbean Financial Action Taskforce (CFATF) have cited the Cayman Islands for its efforts to comply with international standards, such as those related to anti-money-laundering and terrorist-financing activities. However, Cayman Islands government officials and senior partners from Maples and Calder stated that their role in helping the U.S. ensure compliance with U.S. tax laws is necessarily limited. Cayman Islands government officials stated that they cannot administer other nations’ tax laws and are not aware of any jurisdiction that undertakes such an obligation as a general matter. Senior partners from Maples and Calder stated that complying with U.S. tax obligations is the responsibility of the U.S. persons controlling the offshore entities, and that they require all U.S. clients to obtain onshore counsel regarding tax matters before they will act on their behalf. Cayman officials told us that until a request is made by the U.S. for tax-related assistance, the Cayman Islands government is “neutral” and does not act for or against U.S. tax interests. Ugland House provides an instructive case example of the tremendous challenges facing the U.S. tax system in an increasingly global economy. Although the Maples and Calder law firm provides services that even U.S. government-affiliated entities have found useful for international transactions and the Cayman Islands government has taken affirmative steps to meet international standards, the ability of U.S. persons to establish entities with relatively little expense in the Cayman Islands and similar jurisdictions facilitates both legal tax minimization and illegal tax evasion. Despite the Cayman Islands’ adherence to international standards and the international commerce benefits gained through U.S. activities in the Cayman Islands, Cayman entities nevertheless can be used to obscure legal ownership of assets and associated income and to exploit grey areas of U.S. tax law to minimize U.S. tax obligations. Further, while the Cayman Islands government has cooperated in sharing information through established channels, as long as the U.S. government is chiefly reliant on information gained from specific inquiries and self-reporting, the Cayman Islands and other similar jurisdictions will remain attractive locations for persons intent on legally minimizing their U.S. taxes and illegally avoiding their obligations. Balancing the need to ensure compliance with our tax and other laws while not harming U.S. business interests and also respecting the sovereignty of the Cayman Islands and similar jurisdictions undoubtedly will be a continuing challenge for our nation. Chairman Baucus, Senator Grassley, and members of the committee, this concludes my testimony. I would be happy to answer any questions you may have at this time. For further information regarding this testimony, please contact Michael Brostek, Director, Strategic Issues, on (202) 512-9110 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include David Lewis, Assistant Director; Perry Datwyler; S. 
Mike Davis; Robyn Howard; Brian James; Danielle Novak; Melanie Papasian; Ellen Phelps Ranen; Ellen Rominger; Jeffrey Schmerling; Shellee Soliday; Andrew Stephens; Jessica Thomsen; and Jonda VanPelt. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Cayman Islands is a major offshore financial center and the registered home of thousands of corporations and financial entities. Financial activity there is in the trillions of dollars annually. One Cayman building--Ugland House--has been the subject of public attention as the listed address of thousands of companies. To help Congress better understand the nature of U.S. persons' business activities in the Cayman Islands, GAO was asked to study (1) the nature and extent of U.S. persons' involvement with Ugland House registered entities and the nature of such business; (2) the reasons why U.S. persons conduct business in the Cayman Islands; (3) information available to the U.S. government regarding U.S. persons' Cayman activities; and (4) the U.S. government's compliance and enforcement efforts. GAO interviewed U.S. and Cayman government officials and representatives of the law firm housed in Ugland House, and reviewed relevant documents. The full report on GAO's review is GAO-08-778, being released at the same time as this testimony. The sole occupant of Ugland House is Maples and Calder, a law firm and company-services provider that serves as registered office for the 18,857 entities it created as of March 2008, on behalf of a largely international clientele. According to Maples partners, about 5 percent of these entities were wholly U.S.-owned and 40 to 50 percent had a U.S. billing address. Ugland House registered entities are often participants in investment and structured-finance activities, including those related to hedge funds and securitization. Business advantages, such as facilitating U.S.-foreign transactions or minimizing taxes, are key reasons for U.S. persons' financial activity in the Cayman Islands. The Cayman Islands' reputation as a stable, business-friendly regulatory environment also attracts business. This activity is typically legal, such as when pension funds and other U.S. tax-exempt entities invest in Cayman hedge funds to maximize their investment return by minimizing U.S. taxes. Nevertheless, as with other offshore jurisdictions, some U.S. persons may use Cayman Islands entities to illegally evade income taxes or hide illegal activity. Information about U.S. persons' Cayman activities comes from self-reporting, international agreements, and less formal sharing with the Cayman government. Because there is often no third-party reporting, self-reported information may be vulnerable to being inaccurate or incomplete. U.S. officials said the Cayman government has been responsive to taxpayer-specific information requests. The Internal Revenue Service has several initiatives that target offshore tax evasion, including cases involving Cayman entities, but oversight and enforcement challenges related to offshore financial activity exist. U.S. officials said that cooperation with the Cayman Islands government has been good. Also, Maples partners said that ultimate responsibility for compliance with U.S. tax laws lies with U.S. taxpayers.
To determine the validity of DOD’s conclusion—that U.S. troops’ exposures to chemical warfare agents were as DOD estimates suggested—based on its plume-modeling analysis, we examined the meteorological and dispersion models DOD used to model chemical warfare agent releases from the U.S. demolition of Khamisiyah and Coalition bombings of Al Muthanna, Muhammadiyat, and other sites in Iraq during the Gulf War deployment period. We evaluated the basis for the technical and operational assumptions DOD made in (1) conducting the modeling for the bombing and demolition of Iraqi sites and (2) estimating the specific data and information used in the modeling, relating to source term, meteorological conditions, and other key parameters. We also evaluated the efforts of the CIA and DOD to collect and develop data on source term and other key parameters used in the modeling efforts. We interviewed DOD and CIA modelers and officials involved with the modeling and obtained documents and reports from DOD’s Deployment Health Support Directorate. We also interviewed and received documents from DOE officials who were involved with the modeling at LLNL. In addition, we interviewed officials and obtained documents from the Institute for Defense Analyses (IDA) concerning the IDA expert panel assessment of CIA’s modeling of Khamisiyah. We also interviewed U.S. Army officials at Dugway Proving Ground, Utah, to determine how chemical warfare agents might have been released during the Khamisiyah pit area demolitions. Finally, we interviewed officials at the U.S. Army Center for Health Promotion and Preventive Medicine, to determine how specific troop unit exposures were identified, and officials of the United Nations Monitoring, Verification, and Inspection Commission (UNMOVIC), to obtain information on source term data from the United Nations Special Commission’s (UNSCOM) analyses and inspections of the Khamisiyah, Al Muthanna, Muhammadiyat, and other sites.

To determine the validity of DOD’s and the Department of Veterans Affairs’ (VA) conclusions—based on epidemiological studies—that there was no association between Khamisiyah exposure and the rates of hospitalization or mortality, we reviewed published epidemiological studies in which hospitalization and mortality among exposed and nonexposed U.S. troops were analyzed. We also interviewed the study authors and researchers and examined the Gulf War population databases provided to the researchers by DOD in support of these studies. We interviewed Veterans Benefits Administration officials and obtained documents and reports on their analyses of DOD’s population databases. We did not examine whether plume modeling data were being used by VA to determine eligibility for treatment or compensation.

In an effort to identify the total costs associated with modeling and related analyses of chemical warfare agent releases during the Gulf War, we interviewed relevant officials and collected cost data from various DOD agencies and DOD contractors who supported the modeling efforts.

To determine the extent of British troops’ exposure to chemical warfare agent-related releases during the Gulf War, we interviewed British Ministry of Defense (MOD) officials in London and at Porton Down, and reviewed MOD reports concerning the potential effects of exposure to chemical warfare agent-related releases on British forces.

We conducted our work from May 2002 through May 2004 in accordance with generally accepted government auditing standards.
According to the CIA, modeling is the art and science of using interconnected mathematical equations to predict the activities of an actual event. In this case, modeling was used to determine the direction and extent of the plume from chemical warfare agents. In environmental hazard modeling, simulations recreate or predict the size and path (that is, the direction) of the plume, including the potential hazard area, and generate potential exposure levels. In addition to identifying the appropriate event to model, modeling requires several components of accurate information: the characteristics or properties of the material that was released and its rate of release (for example, quantity and purity; the vapor pressure; the temperature at which the material burns; particle size; and persistency and toxicity); temporal information (for example, whether chemical agent was initially released during daylight hours, when it might rapidly disperse into the surface air, or at night, when a different set of breakdown and dispersion characteristics would pertain, depending on terrain, plume height, and rate of agent degradation); data that drive meteorological models during the modeled period (for example, temperature, humidity, barometric pressure, dew point, wind velocity and direction at varying altitudes, and other related measures of weather conditions); data from global weather models, to simulate large-scale weather patterns, and from regional and local weather models, to simulate the weather in the area of the chemical agent release and throughout the area of dispersion; and information on the potentially exposed populations, animals, crops, and other assets that may be affected by the agent’s release.

Various plumes during the 1991 Gulf War were estimated using global-scale meteorological models, such as the National Centers for Environmental Prediction Global Data Assimilation System (GDAS) and the Naval Operational Global Atmospheric Prediction System (NOGAPS). Regional and local weather models were also used, including the Coupled Ocean-Atmosphere Mesoscale Prediction System (COAMPS), the Operational Multiscale Environmental Model with Grid Adaptivity (OMEGA), and the Mesoscale Model Version 5 (MM5). Transport and diffusion models were also used during the 1991 Persian Gulf War plume simulation efforts. These models estimate both the path of a plume and the degree of potential hazard posed by the chemical warfare agents. Dispersion models used during the Gulf War included the Hazard Prediction and Assessment Capability (HPAC) along with its component, the Second-order Closure Integrated Puff (SCIPUFF) model; the Vapor, Liquid, and Solid Tracking (VLSTRACK) model; the Non-Uniform Simple Surface Evaporation (NUSSE) model; and the Atmospheric Dispersion by Particle-in-Cell (ADPIC) model.

DOD’s conclusion about the extent of U.S. troops’ exposure to chemical warfare agents during and immediately after the Gulf War, based upon DOD and CIA plume model estimates, cannot be adequately supported. This is because of uncertainty associated with the source term data and meteorological data. Further, the models themselves are neither sufficiently certain nor precise to draw reasonable conclusions about the size or path (that is, the direction) of the plumes. In particular, we found five reasons to question DOD’s conclusion. First, the models DOD and the CIA selected were in-house models not fully developed for analyzing long-range dispersion of chemical warfare agents as environmental hazards.
DOD and CIA officials selected several in-house models to run plume simulations. For Khamisiyah and the other Iraqi sites selected for examination, DOD selected the COAMPS and OMEGA meteorological models and the HPAC/SCIPUFF and VLSTRACK dispersion models. However, these models were not at the time fully developed for modeling long-range environmental hazards.

Second, the assumptions about the source term data used in the models are inaccurate. The source term data DOD used in the modeling for sites at Khamisiyah, as well as Al Muthanna and Muhammadiyat, rest on significantly unreliable assumptions. DOD and the CIA based assumptions on field testing, intelligence information, imagery, UNSCOM inspections, and Iraqi declarations to UNSCOM. However, these assumptions were based on limited, nonvalidated, and unconfirmed data concerning (1) the nature of the Khamisiyah pit demolition, (2) meteorology, (3) agent purity, (4) amount of agent released, and (5) other chemical warfare agent data. In addition, DOD and the CIA excluded from their modeling efforts many other sites and potential hazards associated with the destruction of binary chemical weapons, vast stores of chemical warfare agent precursor materials, and the potential release of toxic byproducts and chemical warfare agents from other sites.

Third, in most of the modeling performed, the plume heights were significantly underestimated. Actual plume height would have been significantly higher than the height DOD estimated in its modeling of demolition operations and bombings. The plume height estimates that the CIA provided for demolition operations at the Khamisiyah pit were 0 to 100 meters. However, neither DOD nor the CIA conducted testing to support the estimated plume height associated with the bombings of Al Muthanna, Muhammadiyat, or Ukhaydir. According to DOD modelers, neither plume height nor any other heat or blast effects associated with these bombings were calculated from the models; instead, these data were taken from DOD’s Office of the Special Assistant for Gulf War Illnesses. In addition, according to a principal Defense Threat Reduction Agency modeler, DOD’s data on plume height were inconsistent with other test data for the types of facilities bombed.

Fourth, postwar field testing at the U.S. Army Dugway Proving Ground, in Utah, to estimate the source term data did not realistically simulate the actual conditions of the demolition operations at Khamisiyah or the effects of the bombings at any of the other sites in Iraq. For field testing to be effective, conditions have to be as close to those of the actual event as possible, but these tests did not provide more definitive data for DOD’s and the CIA’s models. The tests did not realistically simulate the conditions of the demolition of 122 mm chemical-filled rockets in Khamisiyah. The simulations took place under conditions that were not comparable with those at Khamisiyah. There were differences in meteorological and soil conditions; the construction material of munitions crates; rocket construction (including the use of concrete-filled pipes as rocket replacements to provide inert filler to simulate larger stacks); and the number of rockets, with far fewer rockets and, therefore, less explosive materials. In addition, in the tests, the agent simulant used had physical properties different from those of the actual agent.

Finally, there are wide divergences—with regard to the size and path of the plume and the extent to which troops were exposed—among the individual models DOD selected.
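Before turning to these divergences in more detail, it is worth illustrating the kind of calculation at issue. The transport and diffusion models named earlier (SCIPUFF, VLSTRACK, NUSSE, ADPIC, and others) are far more elaborate than any single formula, but the basic puff-dispersion idea behind them can be shown with a minimal sketch. The Python code below is not DOD's or the CIA's method; it evaluates one textbook Gaussian puff, in which the concentration of released agent at a downwind point depends on the mass released (the source term), the wind carrying the puff, the assumed release height, and dispersion widths that grow with travel distance. All parameter values are hypothetical.

    import math

    def gaussian_puff_concentration(q_kg, u_ms, t_s, x_m, y_m, z_m, release_height_m,
                                    sigma_coeff=(0.08, 0.08, 0.06)):
        """Concentration (kg/m^3) of an instantaneous puff at one point and time.

        q_kg             mass released (the "source term")
        u_ms             mean wind speed carrying the puff downwind along x
        t_s              time since release
        (x_m, y_m, z_m)  receptor location relative to the release point
        release_height_m assumed initial plume height
        sigma_coeff      crude growth rates for the dispersion widths; real models
                         derive these from atmospheric stability, not constants
        """
        travel = u_ms * t_s                                   # distance the puff center has moved
        sx, sy, sz = (c * travel for c in sigma_coeff)        # dispersion widths grow with distance
        if min(sx, sy, sz) <= 0:
            return 0.0
        norm = q_kg / ((2 * math.pi) ** 1.5 * sx * sy * sz)
        along = math.exp(-((x_m - travel) ** 2) / (2 * sx ** 2))
        cross = math.exp(-(y_m ** 2) / (2 * sy ** 2))
        # ground reflection: add an image source mirrored below the surface
        vert = (math.exp(-((z_m - release_height_m) ** 2) / (2 * sz ** 2)) +
                math.exp(-((z_m + release_height_m) ** 2) / (2 * sz ** 2)))
        return norm * along * cross * vert

    # Hypothetical numbers only: 10 kg released, 3 m/s wind, receptor 5 km downwind
    # at ground level, one half hour after release, 50 m assumed plume height.
    c = gaussian_puff_concentration(q_kg=10.0, u_ms=3.0, t_s=1800.0,
                                    x_m=5000.0, y_m=0.0, z_m=1.5, release_height_m=50.0)
    print(f"concentration: {c:.3e} kg/m^3")

Even in this toy form, the computed concentration is highly sensitive to the source term, the wind field, and the assumed plume height, which are the same inputs whose uncertainty undermines the DOD and CIA estimates discussed in this report.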
The models DOD used to predict the fallout from Khamisiyah and the other sites showed great divergence, even with the same source term data. While the models’ divergences included plume size and paths, DOD made no effort to reconcile them. The IDA expert panel observed that the results were so divergent that it would not be possible to choose the most exposed areas or which U.S. troops might potentially have been exposed. IDA therefore recommended a composite model, which DOD adopted. However, this approach only masked differences in individual model projections with respect to divergences in plume size and path. In addition, DOD chose not to include in the composite model the results of the LLNL simulation, performed at the IDA expert panel’s request. The LLNL simulation estimated a larger plume size and different path from DOD’s models. The IDA panel regarded the LLNL model as less capable than other models because it modeled atmospheric phenomena with less fidelity. A modeling simulation done by the Air Force Technical Applications Center (AFTAC) also showed significant divergences from DOD’s composite model.

According to British officials, the MOD did not collect any source term or meteorological data during the 1991 Persian Gulf War. It also did not independently model the plume from Khamisiyah, relying instead on the 1997 DOD and CIA modeling of Khamisiyah. However, according to British MOD officials, they were reassessing the extent of British troops’ exposure, based on DOD’s revised 2000 remodeling of Khamisiyah. We requested from the British MOD, but did not receive, information on the findings from this reassessment. The MOD also determined that a number of British troops were within the boundary of the plume in the DOD and CIA composite model. The MOD estimated that about 9,000 British troops were potentially exposed and that about 3,800 were “definitely” within the path of the plume. In addition, of 53,500 British troops deployed, at least 44,000 were estimated as “definitely not” within the path of the plume. However, since the MOD relied exclusively on DOD’s modeling and since we found that DOD could not know who was and who was not exposed, the MOD cannot know the extent of British troops’ exposure.

DOD and the CIA were the primary agencies involved in the modeling and analysis of U.S. troops’ exposure from the demolition at Khamisiyah and bombing of chemical facilities at Al Muthanna, Muhammadiyat, and Ukhaydir, but several other agencies and contractors also participated. Funding to support the modeling efforts was provided to various DOD agencies and organizations, the military services, and non-DOD agencies and contractors. We collected data on the direct costs these agencies incurred or funds they spent. As shown in table 1, direct costs to the United States for modeling the Gulf War were about $13.7 million.

DOD and VA each funded an epidemiological study on chemical warfare agent exposure—DOD’s on hospitalization rates and VA’s on mortality rates. From the hospitalization study, conducted by DOD researchers, and the mortality study, conducted by VA researchers, on exposed and nonexposed troops, DOD concluded that there was no significant difference in the rates of hospitalization, and VA concluded that there was no significant difference in the rates of mortality. These conclusions, however, cannot be supported by the available evidence.
These studies contained two inherent weaknesses: (1) flawed criteria for classifying exposure, resulting in classification bias, and (2) an insensitive outcome measure, resulting in outcome bias. In addition, several other published studies of 1991 Persian Gulf War veterans suggest associations between chemical warfare agent exposure and illnesses and symptoms.

In the two epidemiological studies, DOD and VA researchers used DOD’s 1997 plume model to determine which troops were under the path of the plume (those estimated to be exposed) and which troops were not (those estimated to be nonexposed). However, this classification is flawed, given the inappropriate criteria for inclusion and exclusion. In the hospitalization study, the DOD researchers included 349,291 Army troops “coded” as being in the Army on February 21, 1991. However, the researchers did not report cutoff dates for inclusion in the study—that is, they did not indicate whether these troops were in the Persian Gulf between January 17, 1991, and March 13, 1991, the period during which the bombings and the Khamisiyah demolition took place. Although we requested this information, DOD researchers failed to provide it. Finally, the total number of 349,291 troops is misleading because many troops left the service soon after returning from the Persian Gulf and therefore would not have been hospitalized after the war in a military hospital—another criterion for inclusion in the study. Moreover, the researchers did not conduct any analyses to determine what number or percentage of those who left active duty were in the exposed or nonexposed group (including uncertain low-dose exposure or estimated subclinical exposure). Given all the methodological problems in this study, it is not possible to accurately estimate the total size or makeup of the exposed and nonexposed population that may have sought or may have been eligible for care leading to military hospitalization.

In the mortality study, the VA researchers included 621,902 Gulf War veterans who arrived in the Persian Gulf before March 1, 1991. Troops who left before January 17, 1991—the beginning of the bombing of Iraqi research, production, and storage facilities for chemical warfare agents—were included in the study. This group was not likely to have been exposed. Therefore, including them resulted in VA’s overestimation of the nonexposed group. Troops who came after March 1, 1991—the period during which the Khamisiyah demolition took place—were excluded from the VA study. The Defense Manpower Data Center (DMDC) identified 696,000 troops deployed to the Persian Gulf, but the mortality study included only the 621,902 troops deployed there before March 1, 1991. This decision excluded more than 74,000 troops, approximately 11 percent of the total deployed. In addition, 693 troops who were in the exposed group were excluded because identifying data, such as Social Security numbers, did not match the DMDC database. VA researchers did not conduct follow-up analysis to determine whether those who were excluded differed from those who were included in ways that would affect the classification.

Hospitalization rates—the outcome measure used in the hospitalization study—were insensitive because they failed to capture the chronic illnesses that 1991 Persian Gulf War veterans commonly report, but that typically do not lead to hospitalization.
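The way these two weaknesses push a comparison toward a finding of no difference can be shown with a small numerical sketch. The figures below are hypothetical and are not drawn from the DOD or VA studies; the sketch simply assumes that a flawed plume model labels many truly exposed troops as nonexposed and that most hospitalizations are unrelated to the chronic illness of interest, which itself rarely results in a military hospitalization.

    def observed_hosp_rate_ratio(risk_ill_exposed, risk_ill_unexposed,
                                 n_exposed, n_unexposed,
                                 misclassified_share,
                                 background_hosp_rate, hosp_given_illness):
        """Hospitalization rate ratio an analyst would observe under two weaknesses.

        misclassified_share   fraction of truly exposed troops labeled "nonexposed"
                              (classification bias from a flawed plume model)
        background_hosp_rate  hospitalization risk unrelated to the illness of interest
        hosp_given_illness    probability that the exposure-related chronic illness
                              actually leads to a military hospitalization
        """
        moved = int(n_exposed * misclassified_share)          # exposed troops mislabeled
        grp_e, grp_u = n_exposed - moved, n_unexposed + moved

        # Expected hospitalizations: background events plus the small share of the
        # chronic illness that ever reaches a military hospital.
        hosp_e = grp_e * (background_hosp_rate + hosp_given_illness * risk_ill_exposed)
        hosp_u = (n_unexposed * (background_hosp_rate + hosp_given_illness * risk_ill_unexposed)
                  + moved * (background_hosp_rate + hosp_given_illness * risk_ill_exposed))

        return (hosp_e / grp_e) / (hosp_u / grp_u)

    rr = observed_hosp_rate_ratio(risk_ill_exposed=0.20, risk_ill_unexposed=0.10,
                                  n_exposed=100_000, n_unexposed=250_000,
                                  misclassified_share=0.5,
                                  background_hosp_rate=0.05, hosp_given_illness=0.05)
    print(f"true illness risk ratio: {0.20 / 0.10:.2f}")
    print(f"observed hospitalization rate ratio: {rr:.2f}")

Under these invented inputs, a true doubling of illness risk appears as only a few percent excess in hospitalization, a difference that could easily be reported as not significant.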
Studies that rely on this type of outcome as an end point are predetermined to overlook any association between exposure and illness. Based on DOD’s 1997 plume model, DOD’s hospitalization study compared the rates for 1991 Persian Gulf War veterans who were exposed with the rates for those who were nonexposed. This study included 349,291 active duty Army troops who were deployed to the Persian Gulf. However, DOD researchers did not determine the resulting bias in their analyses, because they did not account for those who left the service. The Institute of Medicine noted that the hospitalization study was limited to Army troops remaining on active duty and to events occurring in military hospitals. Conceivably, those who suffered from Gulf War-related symptoms might leave active duty voluntarily or might take a medical discharge. Hospitalization for this group would be reflected in VA or private sector databases, but not in DOD databases. The health or other characteristics of active duty troops could differ from those of troops who left active duty and were treated in nonmilitary hospitals. Moreover, economic and other factors not related to health are likely to affect the use of nonmilitary hospitals and health care services. This limiting of the study to troops remaining on active duty produced a type of selection bias known as the healthy warrior effect. It strongly biased the study toward finding no excess hospitalization among the active duty Army troops compared with those who left the service after the war.

We found some studies that suggest an association between chemical warfare agent exposure and Gulf War illnesses. Each of these studies has both strengths and limitations. In one privately funded study of Gulf War veterans, Haley and colleagues reported an association between a syndromic case definition of Gulf War illnesses, based upon the ill veterans’ symptomatic complaints, and exposure to chemical warfare agents. Factor analysis of the data on symptoms was used to derive a case definition identifying six syndrome factors. Three syndrome factor variants found to be the most significant were (1) impaired cognition, (2) confusion-ataxia, and (3) arthro-myo-neuropathy.

The results from the DOD and CIA plume modeling can never be definitive. Plume models can allow only estimates of what happens when chemical warfare agents are released in the environment. Such estimates are based on mathematical equations, which are used to predict an actual event—in this case, the direction and extent of the plume. However, in order to predict precisely what happens, one needs accurate data on both source term and meteorological conditions. DOD had neither of these. Given the unreliability of the input data, the lack of individual troop location information, and the widely divergent results of the simulations conducted with varying models, DOD’s analyses cannot adequately estimate the extent of U.S. troops’ exposure to chemical warfare agents and other related releases. In particular, the models selected were not fully developed for projecting long-range environmental fallout, and the assumptions used to provide the source term data were inaccurate or flawed. Even when models with the same source term data were used, the results diverged.
In addition, the models did not include many potential exposure events and exposures to some key materials—for example, binary chemical weapons, mustard agent combustion by-products, and chemical warfare agent precursor materials. It is likely that if the models were more fully developed and more credible data for source term and meteorological conditions were included in them, particularly with respect to plume height as well as level and duration of exposure, the hazard area would be much larger and most likely would cover most of the areas where U.S. troops and Coalition forces were deployed. However, given the lack of verifiable data for analyses, it is unlikely that any further modeling efforts would be more accurate or helpful.

The results of DOD’s modeling efforts were, nonetheless, used in epidemiological studies to determine the troops’ chemical warfare agent exposure classification—that is, exposed versus nonexposed. As we noted in 1997, to ascertain the causes of veterans’ illnesses, it is imperative that investigators have valid and reliable data on exposure, especially for low-level or intermittent exposures to chemical warfare agents. To the extent that veterans are misclassified as to exposure, relationships will be obscured and conclusions misleading. In addition, DOD combined the results of individual models that showed a smaller plume size and ignored the results of the LLNL simulation, which showed a much larger plume size and a divergent plume path. Given the uncertainties in source term data and divergences in model results, DOD cannot determine or estimate—with any degree of certainty—the size and path of the plumes or who was or who was not exposed.

In our report, we are recommending that the Secretary of Defense and the Secretary of Veterans Affairs not use the plume-modeling data for future epidemiological studies of the 1991 Gulf War, since VA and DOD cannot know from the flawed plume modeling who was and who was not exposed. We are also recommending that the Secretary of Defense require no further plume modeling of Khamisiyah and the other sites bombed during the 1991 Persian Gulf War in order to determine troops’ exposure. Given the uncertainties in the source term and meteorological data, additional modeling of the various sites bombed would most likely result in additional cost, while still not providing DOD with any definitive data for estimating who was or was not exposed.

We obtained comments on a draft of this report from VA, DOD, and CIA. VA concurred with the recommendation that VA and DOD not use the plume-modeling data for future epidemiological studies, since VA and DOD cannot know from the flawed plume modeling who was and who was not exposed. DOD did not concur with the recommendation, indicating that, in its view, it called for a blanket prohibition of plume modeling in the future, where the limitations of the 1991 Gulf War may not apply. Our recommendation is directed only at epidemiological studies involving the DOD and CIA plume modeling data from the 1991 Gulf War and is not a blanket prohibition of plume modeling in the future. We have clarified the recommendation along these lines. DOD concurred with our second recommendation, indicating that despite enhancements in the models, uncertainties will remain. CIA did not concur with our report, indicating that it could not complete its review in the time allotted.
If you or your staff have any questions about this testimony or would like additional information, please contact me at (202) 512-6412 or Sushil Sharma, Ph.D., Dr.PH., at (202) 512-3460. We can also be reached by e-mail at [email protected] and [email protected]. Individuals who made key contributions to this testimony were Venkareddy Chennareddy, Susan Conlon, Neil Doherty, Jason Fong, Penny Pickett, Laurel Rabin, and Katherine Raheb. James J. Tuite III, a GAO consultant, provided technical expertise. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since the end of the Gulf War in 1991, many of the approximately 700,000 U.S. veterans have experienced undiagnosed illnesses. They attribute these illnesses to exposure to chemical warfare (CW) agents in plumes--clouds released from bombing of Iraqi sites. But in 2000, the Department of Defense (DOD) estimated that of the 700,000 veterans, 101,752 troops were potentially exposed. GAO was asked to evaluate the validity of DOD, Department of Veterans Affairs (VA), and British Ministry of Defense (MOD) conclusions about troops' exposure.

DOD's and MOD's conclusion about troops' exposure to CW agents, based on DOD and CIA plume modeling, cannot be adequately supported. The models were not fully developed for analyzing long-range dispersion of CW agents as an environmental hazard. The modeling assumptions as to source term data--quantity and purity of the agent--were inaccurate because they were uncertain, incomplete, and nonvalidated. The plume heights used in the modeling were underestimated, and so was the hazard area. Postwar field testing used to estimate the source term did not realistically simulate the actual conditions of bombings or demolitions. Finally, the results of all models--DOD and non-DOD models--showed wide divergences as to the plume size and path.

DOD's and VA's conclusion that there was no association between exposure to CW agents and rates of hospitalization and mortality, based on two epidemiological studies conducted and funded by DOD and VA, also cannot be adequately supported because of study weaknesses. In both studies, flawed criteria--DOD's plume model and DOD's estimation of potentially exposed troops based on this model--were used to determine exposure. This may have resulted in large-scale misclassification. Troops under the path of the plume were classified as exposed; those not under the path, as nonexposed. But troops classified as not exposed under one DOD model could be classified as exposed under another DOD model. Under non-DOD models, however, a larger number of troops could be classified as exposed. Finally, as an outcome measure, hospitalization rate failed to capture the types of chronic illnesses that Gulf War veterans report but that typically do not lead to hospitalization.
Dumping refers to a type of international price discrimination wherein a foreign company sells merchandise in a given export market (for example, the United States) at prices that are lower than the prices that the company charges in its home market or other export markets. When this occurs, and when the imports have been found to materially injure, or threaten to materially injure, U.S. producers, U.S. law permits application of antidumping duties to offset the price advantage enjoyed by the imported product. Any domestic industry that believes it is suffering material injury, or is threatened with material injury, as a result of dumping by foreign companies may file a petition requesting imposition of AD duties. Interested domestic industries file petitions simultaneously with Commerce and ITC. If Commerce determines that the petitioning parties meet certain eligibility requirements, ITC determines whether the domestic industry has suffered material injury as a result of the alleged dumping (or is threatened with material injury). While ITC is completing its work, Commerce conducts an investigation to establish the duty rates, if any, that should be applied. To determine the duty rates to apply in an antidumping investigation, Commerce identifies (1) the foreign product’s export price entering the U.S. market and (2) its “normal value.” Commerce then compares these prices to determine whether—and by how much—the product’s export price is less than its normal value. AD duty rates are based on these differences, which are called dumping margins. To establish a product’s export price, Commerce generally refers to the prices charged in actual sales of that product to purchasers in the United States. To establish its normal value, Commerce generally refers to the prices charged for the product in the exporting company’s home market. In the event that the product is not sold in the exporter’s home market, Commerce may refer to prices charged for the product in another export market or construct a normal value based on costs of production in the exporting country, together with selling, general and administrative expenses, and profit. The two agencies make preliminary and, after additional investigation, final determinations as to whether injury has occurred (ITC) and the size of the duty, if any, that should be imposed (Commerce). When warranted, Commerce issues “duty orders” instructing Customs and Border Protection to apply duties against imported products from the countries under investigation. Both ITC and Commerce publish their decisions in the Federal Register. Since AD duties address unfair pricing practices, and pricing decisions are generally made by individual companies, Commerce generally calculates and assigns AD duty rates on an individual company basis. As a result, AD investigations generally produce a number of individually determined, company-specific rates, reflecting differences in the extent to which companies have dumped their products—that is, exported them at less than their normal value. In addition, AD duty orders also generally specify a duty rate for other companies that have not been assigned an individually determined rate. In principle, Commerce bases its AD duty determinations on information obtained from interested parties—including foreign producers and exporters. Commerce obtains needed information from foreign companies by sending them questionnaires and following up with additional questions, as needed, and with on-site visits. However, both U.S. 
law and WTO rules recognize that, in some cases, officials charged with completing these investigations will be unable to obtain sufficient information. In such cases, Commerce officials apply facts available to complete their duty determinations. This may include secondary information, subject to corroboration from independent sources. Moreover, if Commerce finds that an interested party, such as a foreign company under investigation, “has failed to cooperate by not acting to the best of its ability to comply with a request for information,” then, in selecting among the facts available, Commerce may apply an inference that is adverse to the interests of that party. In applying adverse inferences, Commerce can use (among other things) information contained in the petition filed by the domestic industry seeking imposition of AD duties, the results of a prior review or determination in the case, or any other information placed on the record. This authority provides an incentive for foreign companies to provide the information that Commerce needs to complete its work. For example, in a 1993 case that involved two Brazilian companies, one company attempted to cooperate in the investigation but nonetheless was unable to provide the information that Commerce needed, while the other declined to provide any information at all. Commerce used facts available to determine that the first company should be subject to a duty rate of 42 percent. For the second company, Commerce selected adverse inferences from among the facts available and applied these to calculate a duty rate of 109 percent.

The methodology that Commerce employs in NME cases differs from Commerce’s usual (market economy) approach in two key ways. First, rather than rely entirely on information from the exporting country itself to establish a product’s normal value, Commerce uses price information from surrogate countries to construct these values. Second, rather than consider all companies eligible for individually determined duty rates, Commerce requires NME companies to meet certain criteria to be considered eligible for such rates. Commerce generally employs different approaches to calculate duty rates for companies that do and do not meet these criteria.

In AD investigations involving products from NME countries, U.S. law requires Commerce to use a special methodology to calculate duty rates in view of the absence of meaningful home market prices and information on production costs. When a product from China or another NME country is the target of an AD investigation, Commerce officials use price information and financial data from an appropriate market economy country to construct a normal value for the product under investigation. India is the most commonly used surrogate for China. To apply this methodology, Commerce (1) identifies and quantifies the factors of production (e.g., various raw materials) used by the NME producers; (2) identifies market prices for each factor in a surrogate country; (3) multiplies quantity by price for each factor; and (4) adds the results, together with a reasonable margin for selling, general and administrative expenses, and profit (based on surrogate country financial data), to produce a constructed normal value. The dumping margin—and consequently the AD duty rate—is then determined by comparing this normal value with the NME company’s export price to the United States.
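The constructed-value arithmetic just described can be sketched in a few lines of code. This is a simplified illustration with hypothetical factors, surrogate prices, expense ratios, and an export price; it is not Commerce's actual worksheet, which involves many additional adjustments.

    def constructed_normal_value(factor_quantities, surrogate_prices,
                                 sga_ratio, profit_ratio):
        """Constructed normal value per unit under the NME methodology (simplified).

        factor_quantities  units of each production factor consumed per unit of output
        surrogate_prices   surrogate-country market price for each factor
        sga_ratio          selling, general and administrative expenses as a share of cost
        profit_ratio       profit as a share of cost, drawn from surrogate financial data
        """
        cost_of_manufacture = sum(qty * surrogate_prices[factor]
                                  for factor, qty in factor_quantities.items())
        return cost_of_manufacture * (1 + sga_ratio + profit_ratio)

    def dumping_margin(normal_value, export_price):
        """Dumping margin expressed as a percentage of the export price."""
        return max(0.0, (normal_value - export_price) / export_price * 100)

    # Hypothetical example: one unit of the subject merchandise.
    factors = {"steel_kg": 12.0, "labor_hours": 1.5, "electricity_kwh": 30.0}
    surrogate = {"steel_kg": 0.80, "labor_hours": 1.10, "electricity_kwh": 0.07}

    nv = constructed_normal_value(factors, surrogate, sga_ratio=0.12, profit_ratio=0.08)
    margin = dumping_margin(nv, export_price=10.50)   # price charged to U.S. purchasers
    print(f"constructed normal value: ${nv:.2f}")
    print(f"dumping margin / AD duty rate: {margin:.1f} percent")

In a market economy case the same margin comparison is made, but the normal value would come from actual home market (or third-country) prices rather than from surrogate-country factor costs.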
While all companies from market economy countries are eligible for individually determined or weighted average AD duty rates, companies from China and other NME countries must pass a separate rates test to be eligible for such rates. This test requires NME companies to meet two closely related criteria: they must demonstrate that their export activities are free from government control both in law and in fact. To provide a basis for deciding whether companies meet these criteria, Commerce requires these companies to submit information regarding whether there are restrictive stipulations associated with an individual exporter’s business and export licenses; any legislative enactments decentralizing control of companies; any other formal measures decentralizing government control; whether export prices are set by or subject to approval by the government; whether the company has authority to negotiate and sign contracts; whether the company has autonomy in selecting its management; and whether the company retains the proceeds of its export sales and makes independent decisions regarding disposition of profits or financing of losses.

As shown in figure 1, Commerce uses fundamentally different approaches to calculate duty rates to be applied against companies that do and do not pass the separate rates test. Commerce treats companies from China and other NME countries that pass Commerce’s separate rates test like companies from market economy countries when assigning duty rates. When practical, Commerce fully investigates and establishes individually determined duty rates for each eligible NME company, just as it does for each market economy company. To the extent that fully investigated NME companies cooperate with Commerce, they receive rates based on the information that they provide. As explained in the background section of this report, Commerce uses facts available, and may use adverse inferences, to calculate duty rates when the companies under investigation cannot or will not provide the information that Commerce needs.

In both NME and market economy cases, Commerce may limit the number of companies it fully investigates when it is faced with a large number of companies. In such situations, Commerce generally calculates individual rates for the companies that account for the largest volume of the subject merchandise. In market economy cases, Commerce then calculates a weighted average of these rates and applies the resulting “all others” rate to companies that it has not fully investigated. Commerce does not routinely calculate weighted average duty rates in NME cases. However, when the number of NME companies eligible for individually determined rates exceeds the number that Commerce can fully investigate, Commerce calculates a weighted average rate and informs Customs of the companies entitled to this rate.

In cases involving China or other NME countries, Commerce calculates a country-wide duty rate for companies that could not (or did not attempt to) pass Commerce’s separate rates test. In NME cases, Commerce assumes that all exporters and producers of a given product are subject to common government control and that all of these companies should, therefore, be subject to a single country-wide duty rate. Commerce begins its NME antidumping investigations by requesting information from the government of the country in question and from known producers and exporters.
If Commerce cannot identify all relevant producers and exporters, or if one or more of the identified companies refuses to cooperate in the investigation, Commerce relies on adverse inferences to calculate a country-wide rate. Commerce then instructs Customs to apply the country-wide rate against shipments from any company other than those specifically listed as eligible for an individually determined or weighted average rate.

Over the last 25 years, the United States has applied AD duties against Chinese products more often than against products from any other country. While AD duty rates have varied widely, on average the rates assigned to Chinese products have been higher than the rates assigned to the same products from market economy countries. We found that this is attributable primarily to the comparatively high country-wide rates applied to Chinese companies not eligible for individually determined or weighted average rates. When Commerce has calculated rates for individual Chinese companies, the average rates assigned to these companies have not been substantially different from those assigned to market economy companies.

Over the last 25 years, Commerce has both considered and actually applied AD duties against China more often than against any other country. From 1980 through 2004, Commerce processed 1,046 AD petitions and issued 455 AD duty orders. One hundred and ten of these petitions (11 percent) and 68 of these orders (15 percent) focused on China—in both cases the largest number against any U.S. trade partner. The number of orders applied to China varied from year to year. For example, Commerce issued no AD duty orders against China in 1998 but issued 9 in 2003. Commerce had 272 orders in place as of December 31, 2004. Fifty-five of these (20 percent) apply to China. As figure 2 shows, this is also the highest percentage of any country. As shown in table 1, these duty orders have targeted a wide variety of products but have been concentrated in chemicals and plastics, metal products, and agricultural products. [Figure data: Italy, 13 orders; Brazil, 14 orders; Taiwan, 17 orders; South Korea, 19 orders; Japan, 29 orders; all others, 53 orders.]

Over this 25-year period, Commerce issued duty orders against the same products from China and at least one market economy country on 25 occasions. In 18 of these cases, Commerce calculated individual rates for companies from China and at least one market economy country. Fifteen of these cases involved more than one market economy country. In all, the orders applying to these 25 products contained a total of 243 individual, weighted average, and country-wide duty rates. Appendix II provides detailed information on the rates applied in each of these cases, as well as another 43 cases that we identified wherein Commerce applied duty rates to China but not to any market economy country. These rates varied a great deal—both among the orders applied to different products and within the orders applied to the same products. Overall, these duty rates varied from zero to 218 percent for China and from zero to about 244 percent for market economy countries. Figure 4 shows the extent to which duty rates applied to a single product can vary.

The average AD duty rates imposed on Chinese (NME) exporters over the last 25 years have been significantly higher than those imposed on market economy exporters of the same products.
Taking all rates into consideration (including those calculated for individual companies, weighted averages of these rates, and country-wide rates applied to China), the average rate applied to Chinese companies in the 25 cases we examined was about 67 percent—over 20 percentage points higher than the average rate of 44 percent applied to market economy companies. As figure 5 shows, the overall average rates applied against China were higher for 18 of the 25 products in which there were AD orders against both China and at least one market economy. The difference between average China and average market economy duty rates was due primarily to the fact that the NME country-wide duty rates applied to China were substantially higher than the comparable all-others duty rates applied against market economy countries. In contrast, the individually determined duty rates assigned to Chinese companies in these cases were not substantially different, on average, from the individually determined rates assigned to market economy companies.

On average, the country-wide rates applied to China in these 25 cases were substantially higher than the comparable all-others rates applied to market economy countries. The country-wide duty rates applied against China averaged about 98 percent—over 60 percentage points higher than the average 37 percent all-others duty rate applied to market economy exporters of the same products. Figure 6 shows that the China country-wide rate was higher than the market economy all-others rate in 21 of 25 cases. As explained below, this difference was due largely to the use of different methodologies to calculate country-wide and all-others rates.

Country-wide rates were nearly always equal to or higher than the highest individually determined rate applied to any Chinese company, due to the application of adverse inferences. According to Commerce, NME country governments themselves have never provided the information that Commerce needs to establish an appropriate country-wide duty rate. In addition, Commerce officials stated that, in most cases, participating NME companies have accounted for only a portion of known exports to the U.S. market from their country, indicating that others had not come forward. In most cases, therefore, Commerce has used adverse inferences to determine country-wide rates. For example, in its investigation of carbazole violet pigment, Commerce assigned three fully investigated Chinese companies individually determined rates of about 6, 27, and 45 percent. However, since other known Chinese producers did not respond to Commerce’s request for information, Commerce used adverse inferences to determine that all other Chinese producers should be subject to an NME country-wide rate of about 218 percent.

In contrast, the comparable market economy all-others rates were lower than the highest individual company rates assigned in any given case (if more than one other individual rate was assigned). This is because, as discussed earlier, Commerce generally calculates all-others rates by averaging individually determined rates—excluding those derived entirely through application of facts available and those that are de minimis or zero. With regard to carbazole violet pigment, for example, Commerce investigated not only China but also India.
Commerce assigned two fully investigated Indian companies rates of about 10 and 50 percent and weight-averaged these rates to determine that shipments from all other Indian producers should be subject to a duty rate of about 27 percent.

On average, there was little difference between the individually determined rates applied to companies from China and those applied to market economy companies. The average individually determined rate applied to Chinese companies in these cases was 53 percent—a little less than the average rate of 55 percent applied to market economy companies. The median rate for Chinese companies was 42 percent—the same as the median rate for market economy companies. Figure 7 displays the average individual company rates assigned to Chinese and market economy companies in the 18 cases in which Commerce assigned individual rates to both. As the figure shows, the rates assigned to Chinese companies were higher than the market economy rates in ten of these cases and lower in the other eight.

Our statistical analyses provided additional support for the importance of the country-wide rates in accounting for the overall difference between the duty rates applied to China and to market economy countries. Using multivariate regression analysis, we found that a number of variables, such as the type of product involved, accounted for some of the overall variation in duty rates. However, after accounting for the China country-wide rates, there was no statistically significant difference between the duty rates applied to China and those applied to market economy countries. As explained in more detail in appendix III, we found essentially the same results when we expanded our analyses to include data on AD actions against NMEs other than China.

In certain circumstances, Commerce may stop using its NME methodology in China cases—and thus begin applying its market economy methodology to determine AD duty rates against that country. Such a step would lead to important changes in the methods that Commerce employs to determine China AD duty rates and in the duty orders resulting from these proceedings. These changes would have mixed results. Eliminating country-wide duty rates would likely reduce duty levels for Chinese companies that are not assigned individually determined rates. Individually determined rates would likely diverge into two distinct groups, with companies that do not cooperate in Commerce investigations receiving rates that are substantially higher than those assigned to companies that do cooperate. The impact of applying Chinese price information to calculate the normal value of Chinese products would likely vary by industry. In any case, rates would continue to vary widely based on the circumstances of each case. While trade data that would permit analysis of the potential trade impact of these changes are not available, it appears that the trade significance of country-wide duty rates is declining.

Commerce has administrative authority to reclassify China and other NME countries as market economies or to designate individual NME country industries as market-oriented in character. Such reclassifications would end Commerce’s authority to apply its NME methodology to such countries or industries. Also, China’s WTO accession agreement specifies that members may apply third-country information to calculate AD duty rates against that country, but this provision expires in 2016. Commerce has the authority to reclassify China as a market economy country, in whole or in part.
As we explained in more detail in a prior report, U.S. trade law authorizes Commerce to determine whether countries should be accorded NME or market economy status and specifies a number of criteria for Commerce to apply in making such determinations. Countries classified as NMEs may ask for a review of their status at any time. China has actively sought market economy status among its trading partners, and a number of them have designated China as a market economy. However, Commerce informed us that Chinese officials have not yet officially requested a determination as to whether their country merits reclassification under the criteria specified in U.S. law. In April 2004, the United States and China established a Structural Issues Working Group under the auspices of the U.S.-China Joint Commission on Commerce and Trade. This group is examining structural and operational issues related to China’s economy that may give rise to bilateral trade frictions, including issues related to China’s desire to be classified as a market economy. Commerce also has the authority to designate individual NME industries as market oriented in character, but has denied all such requests to date. Commerce determined in a 1992 case against China that, short of finding that an entire country merits designation as a market economy, it could find specific industries within such countries to be market oriented in character. Commerce officials noted that on several occasions Chinese industries responding to antidumping duty petitions have requested designation as market-oriented industries. To date, Commerce has denied such requests—primarily on the grounds that the Chinese companies in question submitted information that was insufficient or was provided too late in Commerce’s process to allow an informed decision. When joining the WTO, China agreed that other WTO members could use third-country information to calculate normal values in antidumping actions against Chinese companies. Specifically, China’s WTO accession agreement provides that in determining price comparability in antidumping investigations WTO members may use “a methodology that is not based on a strict comparison with domestic prices or costs in China.” However, the accession agreement also specifies that this provision will expire 15 years after the date of the agreement—that is, by the end of 2016. After 2016, the ability of WTO members to continue using third-country information in AD calculations involving China would be governed by generally applicable WTO rules, according to officials at the Office of the U.S. Trade Representative. These rules recognize that when dumping investigations involve products from a country that “has a complete or substantially complete monopoly of its trade and where all domestic prices are fixed by the state,” importing country authorities may have difficulty making the price comparisons through which AD duty rates are normally established. In such situations, importing countries may “find it necessary to take into account the possibility that a strict comparison with domestic prices in such a country may not always be appropriate.” WTO rules do not provide any specific guidance about how this provision should be implemented; such decisions appear to be left up to individual members. Ending application of the NME methodology to China would bring two significant procedural changes in AD duty investigations against Chinese products. First, such a step would eliminate NME country-wide duty rates from China AD orders. 
Commerce would instead assign an individually determined rate to every relevant Chinese producer or exporter. If the number of companies involved were too great to allow full investigation of all relevant companies, Commerce would apply an all-others rate, a weighted average of the individually determined rates (excluding those based entirely on facts available and those that are de minimis or zero), to all other Chinese companies. However, Commerce would retain its authority to use facts available to determine the rates that it applies to individual Chinese companies.

Second, transition to the market economy methodology would end Commerce’s use of surrogate country information to calculate the normal value of Chinese products. Application of the market economy methodology would generally require Commerce to set the normal value of Chinese products equal to their sales price in China. If the product were not sold in China, Commerce could refer to prices charged for the product in another export market, or it could continue to construct the product’s normal value—using factor prices from the Chinese companies under investigation rather than from surrogate countries.

The elimination of country-wide duty rates against China would likely reduce the duty rates applied to some Chinese companies. If Commerce applied its market economy approach to China, duty rates for companies not receiving individually determined rates would, in most cases, no longer be determined by applying facts available. Rather, Commerce would, for the most part, determine these rates by averaging the rates applied to fully investigated Chinese companies, with some exclusions. The default rate for uninvestigated Chinese companies would move, in most cases, from being the highest rate found to the average rate found among companies that cooperate in Commerce investigations.

Though not predictive, available evidence suggests that the all-others rates that Commerce would apply to China under the market economy methodology would be significantly lower than the country-wide rates currently applied to that country. As already shown, China country-wide rates have generally been significantly higher than the all-others rates that Commerce has assigned to market economy sources of the same products. As shown in table 2, the average country-wide rate for the 25 cases in which Commerce assigned duties to both China and one or more market economies was 98 percent, while the average market economy all-others rate was 37 percent. The average rate assigned to individual Chinese companies was 53 percent, and Commerce calculates all-others rates by weight-averaging individually determined rates, excluding those that are derived entirely through application of facts available and those that are de minimis or zero.

A simple comparison of the average individually determined duty rates calculated under the NME and market economy methodologies suggests that a change in methodology would not result in any significant overall change in duty rates applied to individual Chinese companies. For the comparable cases, individual AD duty rates for Chinese companies averaged 53 percent and were not substantially different from individual market economy company rates, which averaged 55 percent.
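A short sketch of the weighted-average all-others computation described above follows. The company rates and export values are hypothetical, and the de minimis threshold shown is an assumption for illustration; the point is only that rates based entirely on facts available, and rates that are zero or de minimis, are excluded before the trade-weighted average is taken.

    DE_MINIMIS = 2.0   # percent; a common de minimis threshold, assumed here for illustration

    def all_others_rate(companies):
        """Trade-weighted average of individually determined rates, with exclusions.

        companies: list of dicts with keys
            rate            individually determined duty rate (percent)
            export_value    value of the company's U.S. exports (used as the weight)
            facts_available True if the rate was based entirely on facts available
        """
        eligible = [c for c in companies
                    if not c["facts_available"] and c["rate"] >= DE_MINIMIS]
        if not eligible:
            return None   # in practice Commerce turns to other approaches in this situation
        total_value = sum(c["export_value"] for c in eligible)
        return sum(c["rate"] * c["export_value"] for c in eligible) / total_value

    # Hypothetical investigated companies.
    investigated = [
        {"rate": 10.0,  "export_value": 40_000_000, "facts_available": False},
        {"rate": 50.0,  "export_value": 15_000_000, "facts_available": False},
        {"rate": 0.5,   "export_value": 25_000_000, "facts_available": False},  # de minimis, excluded
        {"rate": 120.0, "export_value": 5_000_000,  "facts_available": True},   # facts available, excluded
    ]
    print(f"all-others rate: {all_others_rate(investigated):.1f} percent")

With these invented inputs, the resulting all-others rate of about 21 percent falls between the two eligible company rates, rather than defaulting to the highest rate found, as a country-wide rate based on adverse inferences typically does.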
However, a more detailed examination of the data indicates that the impact of a change in methodology on individual Chinese company duty rates would depend on the extent to which Commerce applies adverse inferences to calculate these rates. The rates assigned to individual companies under the market economy methodology fell into two distinct groups, depending on whether the companies cooperated with Commerce investigations. In the 25 cases that we examined in detail, about half of the fully investigated market economy companies cooperated with Commerce. On average, Commerce assigned a duty rate of about 17 percent to these companies. Commerce found the other half of the fully investigated companies uncooperative and, therefore, applied adverse inferences to determine the duty rates to be applied to these companies. On average, Commerce assigned a duty rate of about 77 percent to these uncooperative market economy companies. Though not predictive, this suggests that a change from the NME methodology for China would result in a significant number of (cooperative) companies receiving relatively low rates, while another significant group of (uncooperative) companies would receive relatively high rates. Our regression analysis confirmed the importance of adverse inferences as a determinant of variation in duty rates. As explained in appendix III, we found that application of adverse inferences tends to increase duty rates by a large margin. The impact of using Chinese price information on China AD duty rates would likely vary from one industry to another under the market economy methodology. Chinese prices are widely viewed as distorted to varying degrees. Where prices for key inputs are artificially low, relying on Chinese price information would produce an artificially low normal value. The result would be an AD duty that is lower than would be obtained by applying surrogate country input prices. Conversely, where Chinese prices are artificially high, AD duty rates may be higher if based on Chinese prices. To the extent that Chinese economic reforms bring Chinese prices more into line with world markets, the impact of abandoning the use of surrogate country information can be expected to decline. At any point in time, however, the probable effect of such a methodological change in an individual industry investigation would depend on the particular facts applying to that industry. The net impact of changing the source of price information on overall China duty rates cannot be estimated with confidence. Our multivariate regression analyses suggest that, regardless of changes in methodology, there will continue to be a great deal of variation among the AD duty rates applied to products from China and other countries. Our analyses showed that application of country-wide duty rates to China largely explained the difference between the overall average duty rates applied to China and to market economy countries. Eliminating these rates would likely have a substantial overall reducing effect on China rates. However, a number of other factors, such as the type of product involved, also helped to account for differences among rates overall, and these factors will continue to have an impact on duty rates, regardless of whether Commerce applies country-wide rates to China. Furthermore, even after taking these factors into account, our analyses still explained only about half of the total variation in duty rates. 
This means that about half of the variation in duty rates is attributable either to idiosyncratic factors or to systematic factors that we did not capture in any of our variables. Available evidence suggests that the volume of trade affected by country-wide rates is declining and that, consequently, the trade impact of China duty orders will in the future depend increasingly on the magnitude of the individually determined rates. Commerce officials observed that in the early 1980s it was not unusual for China AD duty investigations to produce only a country-wide rate. However, as the Chinese economy has evolved, individual Chinese companies have become more likely to request—and receive—individually determined or weighted average rates. Since 1980, Commerce has applied country-wide rates alone in only 15 of 68 Chinese AD orders, and the last of these occasions was in 1995. The majority of all Chinese AD orders (about 78 percent), and all such orders issued over the last 10 years, have included at least one individual company rate. Neither Commerce nor Customs and Border Protection maintains trade data that would permit analysis of changes over time in the relative volume or value of products imported into the United States under the country-wide or various individual duty rates listed in AD duty orders. However, as figure 8 shows, the average number of Chinese companies assigned individually determined rates (or assigned a weighted average rate) has been growing, though there continues to be variation from year to year. For example, in 2004 Commerce placed duties on six Chinese products and in doing so assigned individually determined or weighted average rates to 53 Chinese companies. Anecdotal evidence suggests that along with this rise in company interest in obtaining individual rates has come an increase in the volume of trade covered by these rates. For example, in one recent case Commerce fully investigated and assigned individually determined rates to four companies accounting for more than 90 percent of Chinese exports to the U.S. market. Commerce then assigned a weighted average of these rates to nine additional companies, leaving only a very small portion of all Chinese exports to be covered by the country-wide rate. On average, Commerce’s application of its NME methodology has produced AD duties on Chinese products that are substantially higher than those applied to the same products from market economy countries. Changing China’s NME status—and thus eliminating the application of this methodology—would have a variety of impacts. The duty rates applied to companies that do not receive individual rates would likely decline. Chinese companies that cooperate in Commerce investigations may also receive comparatively low rates. However, the impact of these lower rates on overall China averages may be offset, to some extent, by application of adverse inferences to assign relatively high rates to individual Chinese companies that do not cooperate in Commerce investigations. The net effect of these changes cannot be predicted. Such a prediction would require knowledge of price distortions in diverse Chinese industries, changes in these distortions over time, pricing decisions by Chinese companies in reaction to these changes, and decisions by U.S. companies about whether they should seek relief.
Nonetheless, while the NME methodology is applied, it appears that the actual trade impact of using this methodology will decline as the portion of total export trade conducted by Chinese companies assigned individual rates increases, and as the country-wide rates that largely account for the comparatively high average rates applied to China decline in importance. The Department of Commerce provided written comments on a draft of this report. These comments are reprinted in appendix IV. Overall, Commerce agreed with our findings, observing that the report provided timely and helpful information on the NME methodology and its application to China. Commerce identified a small number of apparent errors in our database. We re-examined our data, making corrections when necessary, and updated our analyses; these corrections did not have any significant impact on our findings. Commerce also made a number of technical comments, focusing primarily on our description of its NME methodology. We took these comments into consideration and made changes throughout the report to ensure its clarity and accuracy. We also made a number of technical corrections suggested by the Department of Homeland Security and the Office of the U.S. Trade Representative. We are sending copies of this report to the Secretaries of Commerce and Homeland Security, the International Trade Commission, the U.S. Trade Representative, appropriate congressional committees, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or any of your staff have any questions about this report, please contact me at 202-512-4347 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To address our objectives, we examined and summarized applicable U.S. laws and regulations, as well as relevant World Trade Organization (WTO) agreements. These included the Agreement on Implementation of Article VI of the General Agreement on Tariffs and Trade 1994—commonly known as the “antidumping agreement”—and China’s WTO accession agreement. We conducted a literature search and reviewed relevant scholarly and legal analyses and Department of Commerce (Commerce) determinations. In order to corroborate and broaden our understanding, we consulted with trade and legal policy experts from the U.S. government, private sector trade associations, consulting firms, academic institutions, and law firms with broad experience in trade actions involving China, as well as representatives of the WTO, the government of China, and other governments concerned about Chinese trade practices, including the European Union and Canada. In order to analyze the application of antidumping (AD) duties to China and compare duty rates applied to China with those applied to market economy countries (our second objective) and to evaluate the potential impact of ceasing to apply the nonmarket economy (NME) methodology to China (our third objective), we collected information from the Department of Commerce and the International Trade Commission, including notices of Commerce determinations appearing in the Federal Register. We used this information to construct a database on all U.S. AD investigations from 1980 through 2004.
In addition to information on the countries and products involved and the status of each investigation, our database included the duty rates applied upon completion of each new antidumping investigation against China during this period, as well as the duty rates applied against any producers of the same products from other countries. This database is accessible on-line at www.gao.gov/cgi-bin/getrpt?GAO-06-652SP. We verified this database to the official sources and found the data to be sufficiently reliable for the purposes of this report. Our analyses focused on the 68 cases during this time period wherein Commerce imposed AD duties on Chinese products, and especially on the subset of 25 cases in which Commerce imposed duties against a similar product from one or more market economy countries. Specifically, the 25 cases included all market economy cases that had the same product name and were initiated within 1 year of an AD investigation against China. In all, we assembled data on 303 company-specific, weighted average, and country-wide duty rate determinations on Chinese products, and an additional 168 duty rate determinations on market economy products. Appendix II provides additional analyses of this data. As part of our examination, we also performed multivariate regression analyses to determine the extent to which duty rate variations could be attributed to differences between China and these other countries, or to other factors, such as the type of product involved. Appendix III provides more information on these analyses and their results. In addition to comparing China and market economies, we also collected information on duty rates that Commerce applied to products from other NME countries at the same time as it applied them to similar products from either China or a market economy. Appendix III provides information on the results of our analyses of this data. We did not collect or analyze information on duty rates applied against market economy countries in cases where no parallel action was taken against China or any other NME country. Therefore, our analyses of market economy duty rates are specific to the sample of market economy orders in which a corresponding NME order was also in effect. Inclusion of other market economy product duty rates may have produced different results. However, we determined that the appropriate comparison between China and market economy countries was between the 25 similar products. We found through our regression analyses (discussed in app. III) that the product being investigated does help explain the variation among rates and it is, therefore, important to make an appropriate comparison. In addition, duty rates for the 43 remaining orders against China alone followed a similar pattern as those contained in the 25 cases where we drew comparisons with market economy duty rates. The average country-wide rate for these 43 orders against China was higher than the country-wide rate for the 25 orders (118 percent compared to 98 percent), and the average individual rate was lower (41 percent compared to 53 percent) for the 18 orders with individual rates. These results were consistent with our findings that the country-wide rates tend to be significantly higher than individual rates. In order to group specific products subject to AD orders into groups of similar products, we used the Harmonized Tariff Schedule (HTS) classifications for each product, as reported in the Federal Register announcement of the order. The HTS is the official U.S. 
classification of goods imported into the United States and includes 99 chapters covering all goods imports. In addition, the HTS chapters are grouped into larger sections that cover broad types of related products. The categories we used in this report are based on these HTS sections and chapters. Specifically, the category “Chemicals, plastics, pharmaceuticals” comprises HTS chapters 28 through 40 (which includes all chapters under the section “Chemical or allied industries”). The category “Steel, other metals” comprises HTS chapters 72 through 81 (which includes most chapters under the section “Base metals and articles of base metals” except those chapters covering articles of base metals). The category “Agricultural products” comprises HTS chapters 1 through 24 (which includes all chapters under the sections “Live animals; animal products,” “Vegetable products,” “Animal or vegetable fats, etc.,” and “Prepared foodstuffs, beverages, spirits, and vinegar; tobacco and manufactured tobacco substitutes”). The category “Other products” comprises all other HTS chapters. We conducted our work from June 2005 through December 2005 in accordance with generally accepted government auditing standards. This appendix provides additional information on the antidumping (AD) duty rate data that we assembled for this report and provides some additional analytical information, including brief discussions of variation in the duty rates applied to China over time, Department of Commerce (Commerce) determinations on whether Chinese companies should be considered eligible for individual rates, and duty rates applied to selected market economy countries. The overall average duty rate for all 68 orders against China from 1980 through 2004 was 65 percent. This was the result of 72 country-wide rates (on 68 products) with an average duty of 111 percent and 158 individual company rates with an average duty of 44 percent. These rates ranged from zero to about 384 percent (see table 3). In our analysis, we identified 25 orders against China in which there was also an order against a market economy country on the same product put in place within 1 year from the order against China. Table 3 shows overall average duty rates from the 25 orders against China that were matched to market economy cases and the 43 orders in which no market economy order was identified. Table 4 at the end of this appendix provides information on each of the 68 orders against China, and table 5 provides comparative information for each of the 25 cases in which duties were also applied against market economy producers. About 78 percent (53 AD orders) of the 68 AD orders included not only country-wide rates but also individually calculated rates for at least one Chinese company. Of these, about 54 percent (37 orders) included company-specific rates that were lower than the country-wide rates imposed in the same cases. With regard to nonmalleable cast iron pipe fittings, for example, two Chinese companies submitted detailed information and met Commerce’s criteria for assignment of individually determined rates. Other Chinese pipe fitting companies, however, did not provide any information. Commerce assigned the two cooperating companies duty rates of between 6 and 8 percent—a fraction of the 76 percent country-wide duty rate applied in this case. Only 15 orders issued against China during this period included just a country-wide rate. 
Most of these orders date from the period before 1991 when Commerce had not yet begun applying its separate rates test. However, from 1991 through 1995 Commerce issued six orders that contained only a country-wide rate. In most of these cases, Chinese companies failed to respond to Commerce requests for information. For example, in one case Commerce solicited information through both the Chinese government and the relevant Chinese industry association. However, the industry association responded that no Chinese producer or exporter wanted to participate in Commerce’s investigation. Commerce, therefore, used facts available to establish a country-wide duty rate of about 156 percent. In 12 of the 68 orders, all the individual rates issued were equal to the country-wide rate. In some cases, Commerce specified an individual rate for one company and then used this rate as “facts available” to establish a country-wide duty rate at the same level. For example, in its investigation of refined brown aluminum oxide from China, Commerce requested information from the government of China and more than 20 Chinese companies. Only one of these companies responded. Commerce found that this company qualified for its own duty rate and determined that this rate should be about 135 percent. Commerce determined that the failure of the other companies to provide requested information justified application of an adverse inference to determine the country-wide rate. Since the rate established for the lone cooperating company was higher than any of the rates suggested in the petition requesting imposition of duties on this product, Commerce set the country-wide rate equal to the rate applied to the one cooperating company—135 percent. We found that there was a slight tendency for duty rates applied against Chinese products to rise over the period of our analysis, as well as to fluctuate over time. As figure 9 shows, individual company and country-wide duty rates tended to be larger from 1992-2004 than from 1980-1991. In addition, the individual company rates demonstrate a cyclical pattern over time. In our regression analysis, we found that there was a small positive trend in AD duty rates against China over time that was statistically significant. This result is consistent with research that has shown that overall U.S. AD margins have increased over time. Table 4 shows the duty rates on the 68 orders imposed on China between 1980 and 2004. Table 5 then shows the duty rates on the 25 orders imposed on China in which we also found matching orders imposed on market economies. In order to examine the difference between duty rates applied to China and those applied to market economy countries, we performed multivariate regression analyses on the cases in which the Department of Commerce (Commerce) applied duties to both China and at least one market economy country. These involved 25 different products, affected by 25 duty orders against China, and 54 duty orders against market economies. Multivariate regression analysis makes it possible to examine the simultaneous effect of several different factors on the duty rates and to determine the extent to which these factors, taken together, explain variation in these rates.
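To make the form of these regressions concrete, the following sketch fits an ordinary least squares model of duty rates on indicator variables for China, for country-wide rates, and for the year of the order, the specification detailed in the tables that follow. The data are synthetic and randomly generated, so the estimated coefficients will not match tables 6 through 8; the sketch only illustrates the dummy-variable setup.

```python
# Illustrative sketch of the dummy-variable OLS regressions described in this
# appendix. The data below are synthetic and randomly generated, so the
# estimates will not reproduce tables 6 through 8.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 243  # number of duty-rate observations in the base regression
df = pd.DataFrame({
    "china": rng.integers(0, 2, n),        # 1 if the rate applies to China
    "countrywide": rng.integers(0, 2, n),  # 1 if the rate is a country-wide rate
    "year": rng.integers(1980, 2005, n),   # year the order went into effect
})
# Synthetic duty rates loosely mimicking the reported pattern: country-wide
# rates far above individual rates, a small time trend, plus noise.
df["rate"] = (20 + 3 * df["china"] + 52 * df["countrywide"]
              + 0.5 * (df["year"] - 1980) + rng.normal(0, 30, n))

model = smf.ols("rate ~ china + countrywide + year", data=df).fit()
print(model.summary())  # coefficients, significance, adjusted R-squared
```

The report's actual estimates, based on the assembled duty-rate data rather than synthetic values, appear in tables 6 through 8.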
To determine whether our analytical results for China held true for all nonmarket economy (NME) countries, we also identified six instances in which Commerce applied duties to a nonmarket economy other than China, and at least one market economy country, and reran our analyses using data for all 31 products. Table 6 shows the results of our multivariate regression analysis of variation in the dependent variable (the antidumping duty rate) attributable to the following independent variables: China (a variable indicating whether the AD duty rate is for China or not), the country-wide rate (a variable indicating whether the AD duty rate is a country-wide rate), and year (a variable indicating the year in which the duty went into effect). We also included a constant term. The regression involved 25 products covered by 25 orders against China and 54 orders against market economies and included a total of 243 duty rates (the dependent variable) from these 79 orders. The results show that the variable for China as the target country had a coefficient of 3.002 percent, indicating that duty rates against China tended to be about 3 percentage points higher than those against market economies, on average. However, this coefficient is not statistically significant, meaning that there was no statistically significant difference between the rates assigned to China and market economy countries, when the other factors in the regression are included. The coefficient for the country-wide rate, on the other hand, shows that there is a 52 percentage point difference between country-wide rates against China and other rates. This result is statistically significant at above the 99 percent level. The variable for the year of the order is also statistically significant, but it has a small coefficient. The adjusted R-square measure shows that about 15 percent of the overall variation in duty rates is explained by the independent variables included here. We then included additional variables for product groups, such as agriculture and steel, and, in separate regressions, individual product variables for each type of product. The additional variables generally improved the overall “fit” of the regression; the adjusted R-square measure with the individual product variables included showed that the regression explained between 24 and 31 percent of the overall variation in duty rates across the sample compared with 15 percent in the regression above. Also, certain types of products, such as agriculture products, tended to have higher duty rates relative to other types. Table 7 shows the regression results when individual product variables are included. Once again the coefficient for China is insignificant, while the coefficient for the country-wide rate is significant at the 99 percent level. Some coefficients for individual products are significant (e.g., carbon steel butt-weld pipe fittings), but many are not. The overall adjusted R-square measure shows that this regression model explains about 31 percent of total variation in the duty rates. In order to examine the effect of applying adverse inferences and facts available (other than adverse inferences) on the duty rates, we added additional variables indicating when Commerce used these approaches. The results show that application of adverse inferences is a significant variable and has a large effect on the duty rates, but that application of facts available (other than adverse inferences) is not.
When adverse inferences are introduced, this results in the country-wide rate variable becoming insignificant (see table 8). However, this is likely due to the fact that the adverse inferences variable is highly correlated with the country-wide rate. Therefore, it is not surprising that the country-wide rate is no longer significant since the adverse inferences variable is already accounting for much of the variation. In addition, the variable for China once again becomes significant. As we discuss in the body of this report, Commerce uses adverse inferences in very few determinations for Chinese companies granted their own rates. Adverse inferences were applied in making only 3 out of the 50 individual determinations used in this analysis. However, Commerce used adverse inferences in nearly half of its determinations against individual market economy companies. Since adverse inferences are already factored into this model, as is the country-wide rate, the remaining differences accounted for by the China variable in table 8 are between individual (noncountry-wide) Chinese rates and individual market economy rates in which adverse inferences are not used. Table 8 shows that there is a statistically significant 27 percentage point difference between these rates. However, because there are methodological differences between the NME and market economy methodologies for individual companies, it is not clear how often adverse inferences would be used against individual Chinese companies should they move to a market economy methodology. In other words, we cannot predict the extent to which, under a market economy methodology, individual Chinese companies would cooperate with Commerce or Commerce would find it necessary to use adverse inferences in its determinations against Chinese companies. It is possible that some Chinese companies that currently have an individually determined rate under the NME methodology would face adverse inferences under a market economy methodology, whereas others would not. This could produce a result similar to the market economy cases we have examined in which the overall average (for example, 55 percent) is the result of some companies receiving comparatively high duty rates (e.g., 77 percent) when adverse inferences are used and others receiving comparatively low rates (e.g., 16 percent) when adverse inferences are not used (see table 2). In any case, these results show that there is a remaining difference between these two groups after accounting for the use of adverse inferences and the country-wide rate. In order to examine whether the above results hold for all NMEs, we ran the same regressions for a larger set of 31 products (compared with the 25 products above) in which we found matching cases between nonmarket economies other than China and market economies. The data set on these 31 products included rates from 128 orders (26 on China, 82 on market economies, and 20 on NMEs other than China) that contained 355 duty rates (dependent variable). These analyses confirmed our China-market economy only analyses but also showed that other NME countries tend to have duty rates that are statistically higher than market economy rates for this sample of matching cases. (Note that the number of additional products—six—is relatively small.)
Controlling for both the NME designation and the country-wide rate, the NME designation itself is a significant variable at the 97 percent level of confidence with a coefficient of 23 percent (the coefficient for China is not statistically significant). The country-wide variable is also significant (99 percent level) and larger with a coefficient of 48 percent. As additional variables were added for individual products, the NME designation continued to be significant along with the country-wide rate variable. There may be other systematic factors, not captured by the variables we included, that would explain some of the remaining variability. As shown in table 7, our model accounted for about 50 percent (half) of the variation in rates. Some of this variation may be idiosyncratic and related to differences in individual companies’ practices, while other variation may relate to how Commerce has implemented its analysis. However, these unexplained factors do not appear to be systematically related to whether the case involved China or a market economy since the regression analysis already controls for that difference. The following are GAO’s comments on the Department of Commerce’s letter dated December 8, 2005. 1. We re-examined our data, making corrections as appropriate, and updated our analyses. The report reflects these corrections, though they did not have a significant impact on any of our findings. 2. As discussed in the report, the overall difference between the duty rates applied to China and those applied to market economy countries is largely explained by the application of comparatively high country-wide rates to China. Therefore, the model allows us to conclude that elimination of the NME methodology—and thus these country-wide rates—would result in lower duties for some Chinese companies. Nevertheless, there would still be variation in duty rates among companies and products due to a range of other factors. In addition to the individual named above, Adam R. Cowles, Monica Ghosh, R. Gifford Howland, Michael McAtee, Richard Seldin, Ross Tuttleman, Roberto Walton, and Timothy Wedding made significant contributions to this report.
U.S. companies adversely affected by unfair imports may seek a number of relief measures, including antidumping (AD) duties. The Department of Commerce (Commerce) classifies China as a nonmarket economy (NME) and uses a special methodology that is commonly believed to produce AD duty rates that are higher than those applied to market economies. Commerce may stop applying its NME methodology if it finds that China warrants designation as a market economy. In light of increased concern about China's trade practices, the conference report on fiscal year 2004 appropriations requested that GAO review efforts by U.S. government agencies responsible for ensuring free and fair trade with that country. In this report, the last in a series, GAO (1) explains the NME methodology, (2) analyzes AD duties applied to China and compares them with duties applied to market economies, and (3) explains circumstances in which the United States would stop applying its NME methodology to China and evaluates the potential impact of such a step. Commerce agreed with our findings, commenting that our report provides timely and helpful information on the NME methodology and its application to China. Commerce's methodology for calculating AD duties on nonmarket economy products differs from its market economy approach in that (1) since NME prices are unreliable, it uses price information from surrogate countries, like India, to construct the value of the imported products and (2) it limits eligibility for individual rates to companies that show their export activities are not subject to government control. Companies that do not meet the criteria or do not participate in Commerce investigations receive "country-wide" rates. China has been the most frequent target of U.S. AD actions. On 25 occasions, Commerce has applied duties to the same product from both China and one or more market economies. China (NME) duties were over 20 percentage points higher than those applied to market economies, on average. This is because average China country-wide rates were over 60 points higher than comparable market economy rates. Individual China company rates were similar to those assigned to market economy companies, on average. Commerce can declare China a market economy if the country meets certain criteria, thus ending the use of surrogate price information and country-wide rates in China AD actions. These changes would have a mixed impact. Duties would likely decline for Chinese companies not assigned individual rates. Individual company rates would likely diverge, with those that do not cooperate with Commerce receiving rates that are substantially higher than those that do cooperate. In any case, it appears that the actual trade impact of the NME methodology will decline as the portion of total export trade conducted by Chinese companies assigned individual rates increases and as the country-wide rates that largely account for the comparatively high average rates applied to China decline in importance.
To qualify for Medicaid coverage for long-term care, individuals must be within certain eligibility categories, such as children or those who are aged or disabled, and meet functional and financial eligibility criteria. Within broad federal standards, states determine if an individual meets the functional criteria by assessing limitations in an individual’s ability to carry out activities of daily living (ADL) and instrumental activities of daily living (IADL). The financial eligibility criteria are based on individuals’ assets—income and resources together. The Medicaid statute requires states to use specific income and resource standards in determining eligibility; these standards differ based on whether an individual is married or single. If a state determines that an individual has transferred assets for less than FMV, the individual may be ineligible for Medicaid coverage for long-term care for a period of time. Most individuals requiring Medicaid coverage for long-term care services become financially eligible for Medicaid in one of three ways: 1. Individuals who participate in the Supplemental Security Income (SSI) program, which provides cash assistance to aged, blind, or disabled individuals with limited income and resources, generally are eligible for Medicaid. 2. Individuals who incur high medical costs may “spend down” into Medicaid eligibility because these expenses are deducted from their income. Spending down may bring their income below the state-determined income eligibility limit. Such individuals are referred to as medically needy. As of 2000, 36 states had a medically needy option, although not all of these states extended this option to the aged and disabled or to those needing nursing home care. 3. Individuals can qualify for Medicaid if they reside in nursing facilities or other institutions in states that have elected to establish a special income level under which individuals with incomes up to 300 percent of the SSI benefit ($1,737 per month in 2005) are eligible for Medicaid. Individuals eligible under this option must apply all of their income, except for a small personal needs allowance, toward the cost of nursing home care. The National Association of State Medicaid Directors reported that, as of 2003, at least 38 states had elected this option. Medicaid policy bases its characterization of assets—income and resources—on SSI policy. Income is something, paid either in cash or in kind, received during a calendar month that is used or could be used to meet food or shelter needs; resources are cash or things that are owned that can be converted to cash. (Table 1 provides examples of different types of assets.) In establishing policy for determining financial eligibility for Medicaid coverage for long-term care, states can decide, within federal standards, which assets are countable or not. For example, states may disregard certain types or amounts of income and may elect not to count certain resources. In most states, to be financially eligible for Medicaid coverage for long-term care services, an individual must have $2,000 or less in countable resources ($3,000 for a couple). However, specific income and resource standards vary depending on the way an individual becomes eligible for Medicaid (see table 2).
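The resource and income tests just described can be illustrated with a simplified sketch for an unmarried applicant in a state using the special income level option. The $2,000 resource limit and the $1,737 monthly special income level are the 2005 figures cited above; actual determinations involve many disregards, exclusions, and state variations not modeled here.

```python
# Simplified, illustrative screen for the financial standards described above:
# the $2,000 countable-resource limit and the special income level of 300
# percent of the SSI benefit ($1,737 per month in 2005). Real determinations
# involve state-specific disregards and exclusions not modeled here.

RESOURCE_LIMIT = 2_000          # countable resources, unmarried applicant
SPECIAL_INCOME_LEVEL = 1_737    # 300 percent of the 2005 SSI benefit, monthly

def meets_financial_standards(countable_resources, monthly_income):
    """Return True if an unmarried applicant in a special-income-level state
    falls within both the resource and income standards."""
    return (countable_resources <= RESOURCE_LIMIT
            and monthly_income <= SPECIAL_INCOME_LEVEL)

# Hypothetical applicant with $1,800 in countable resources and $1,500 per
# month in income.
print(meets_financial_standards(1_800, 1_500))  # True
```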
The Medicaid statute requires states to use specific minimum and maximum resource and income standards in determining eligibility when one spouse is in an institution, such as a nursing home, and the other remains in the community (referred to as the community spouse). This enables the institutionalized spouse to become eligible for Medicaid while leaving the community spouse with sufficient assets to avoid impoverishment. Resources. The community spouse may retain an amount equal to one- half of the couple’s combined countable resources, up to a state-specified maximum resource level. If one-half of the couple’s combined countable resources is less than a state-specified minimum resource level, then the community spouse may retain resources up to the minimum level. The amount that the community spouse is allowed to retain is generally referred to as the community spouse resource allowance. Income. The community spouse is allowed to retain all of his or her own income. States establish a minimum amount of income—a minimum needs allowance—that a community spouse is entitled to retain. Prior to the DRA, if the community spouse’s income was less than the minimum needs allowance, then states could allow the difference to be made up in one of two ways: by requiring the transfer of income from the institutionalized spouse (called the income-first approach) or by allowing the community spouse to keep resources above the community spouse resource allowance, so that the additional resources could be used to generate more income (the resource-first approach). Under the DRA, states must apply the income-first method. Federal law limits Medicaid payments for long-term care services for persons who transfer assets for less than FMV within a specified time period. As a result, when an individual applies for Medicaid coverage for long-term care, states conduct a review, or “look-back,” to determine whether the individual (or his or her spouse, if married) transferred assets to another person or party and, if so, whether the transfer was for less than FMV. If a transfer of assets for less than FMV is detected, the individual is ineligible for Medicaid coverage for long-term care for a period of time, called the penalty period. The penalty period is calculated by dividing the dollar amount of the assets transferred by the average monthly private-pay rate for nursing home care in the state (or the community, at the option of the state). For example, if an individual transferred $10,000 in assets, and private facility costs averaged $5,000 per month in the state, the penalty period would be 2 months. Federal law exempts certain transfers for less than FMV from the penalty provisions even if they are made within the look-back period. Exemptions include transfers of assets to the individual’s spouse, another individual for the spouse’s sole benefit, or a child who is considered to be disabled under federal law. Additional exemptions from the penalty provisions include the transfer of a home to an individual’s spouse, or minor or disabled child who meets certain criteria; an adult child residing in the home who has been caring for the individual for a specified time period; or a sibling residing in the home who meets certain conditions. Transfers do not result in a penalty if the individual can demonstrate to the state that the transfer was made exclusively for purposes other than qualifying for Medicaid. 
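A minimal sketch of two computations described above follows: the community spouse resource allowance and the transfer penalty period. The state minimum and maximum allowances are hypothetical placeholders; the penalty example repeats the report's $10,000 transfer and $5,000 average monthly private-pay rate.

```python
# Minimal sketch of two computations described above. Except for the report's
# $10,000 transfer / $5,000-per-month example, the figures are hypothetical,
# and real determinations involve state-specific rules not modeled here.

def community_spouse_resource_allowance(combined_resources, state_min, state_max):
    """Half of the couple's combined countable resources, raised to the state
    minimum if necessary and capped at the state maximum."""
    return min(max(combined_resources / 2, state_min), state_max)

def penalty_period_months(amount_transferred, avg_monthly_private_pay_rate):
    """Dollar amount transferred for less than FMV divided by the state's
    average monthly private-pay nursing home rate."""
    return amount_transferred / avg_monthly_private_pay_rate

# Hypothetical state minimum ($25,000) and maximum ($95,000) allowances for a
# couple with $150,000 in combined countable resources.
print(community_spouse_resource_allowance(150_000, 25_000, 95_000))  # 75000.0

# The report's example: a $10,000 transfer where private nursing home care
# averages $5,000 per month yields a 2-month penalty period.
print(penalty_period_months(10_000, 5_000))  # 2.0
```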
Additionally, a penalty would not be applied if the state determined that application of the penalty would result in an undue hardship, that is, it would deprive the individual of (1) medical care such that the individual’s health or life would be endangered or (2) food, clothing, shelter, or other necessities of life. Prior to the DRA, the look-back period for asset transfers was generally 36 months. If the state identified transfers for less than FMV during this period, then the state was required to impose a penalty period that began at approximately the date of the asset transfer. As a result, some individuals’ penalty periods had already expired by the time they applied for Medicaid coverage for long-term care and therefore they were eligible when they applied. The DRA modified some of the eligibility requirements for Medicaid coverage for long-term care, including provisions related to asset transfers, and introduced new requirements. Most, but not all, of these DRA provisions became applicable on the date the law was enacted, February 8, 2006. In general, these DRA provisions do not apply to transfers that occurred prior to the law’s enactment. The DRA extended the look-back period, changed the beginning date of the penalty period, and provided additional conditions on the application process for undue hardship waivers. (See table 3.) The DRA also introduced several new provisions, which are summarized in table 4. Nationwide, most elderly individuals had nonhousing resources valued under $70,000 at the time they entered the nursing home; nursing home care is estimated to cost over $70,000 a year for a private-pay patient. In general, Medicaid-covered elderly nursing home residents had lower nonhousing resources and income at the time of entry than non-Medicaid-covered residents. The percentage of Medicaid-covered elderly nursing home residents who reported transferring cash was lower and the median amounts they reported transferring were similar to those for non-Medicaid-covered residents. According to data from the HRS, nursing home residents covered by Medicaid had fewer assets than residents not covered by Medicaid. Over 70 percent of all elderly nursing home residents had nonhousing resources of $70,000 or less at the time they entered the nursing home, which is less than the estimated average annual cost for nursing home care. Median nonhousing resources for all elderly nursing home residents were $5,794 at the time they entered the nursing home. (See fig. 1.) Sixty-two percent of all elderly nursing home residents had nonhousing resources of $25,000 or less while 11 percent had nonhousing resources of $300,000 or above. Median nonhousing resources for Medicaid-covered elderly nursing home residents ($48) were lower than for non-Medicaid-covered residents ($36,123). Approximately 92 percent of Medicaid-covered residents had nonhousing resources of $25,000 or less compared to 46 percent of non-Medicaid-covered residents. Approximately 92 percent of all elderly nursing home residents had an annual income of $50,000 or less at the time they entered the nursing home; about 65 percent of elderly nursing home residents had incomes of $20,000 or less. Median annual income for elderly nursing home residents was $14,480 at the time of entry. (See fig. 2.) Median annual income of Medicaid-covered elderly nursing home residents ($9,719) was about half that of non-Medicaid-covered residents ($18,600).
Approximately 90 percent of Medicaid-covered elderly nursing home residents had annual incomes of $20,000 or less compared to approximately 53 percent of non-Medicaid-covered residents. Nationwide, the percentage of Medicaid-covered elderly nursing home residents who reported transferring cash was about half that of non-Medicaid-covered residents at the time they entered the nursing home and during the 4 years prior to entry. For example, at the time they entered the nursing home, 9.2 percent of Medicaid-covered residents reported transferring cash, compared with 23.2 percent of non-Medicaid-covered residents. However, the median amount of cash transferred as reported by Medicaid-covered residents and non-Medicaid-covered residents did not vary greatly. (See table 5.) Similar to the nationwide results, the majority of the 540 applicants whose Medicaid nursing home application files we reviewed in selected counties in three states (Maryland, Pennsylvania, and South Carolina) had few nonhousing resources. The majority of applicants (approximately 65 percent) were single females. About 76 percent of all applicants were approved the first time they applied, while the remaining applicants (23 percent) were initially denied, often for financial reasons—having income or resources that exceeded the states’ financial eligibility standards. About three-quarters of the applicants initially denied only for financial reasons were subsequently approved, primarily after the value of their nonhousing resources decreased. For the applicants who were initially denied for financial reasons, the time span between their initial and subsequent applications averaged a little over 5 months. During this time, their median nonhousing resources decreased from $22,380 to $10,463, with a maximum decrease of $283,075. For about one-third of these applicants who were initially denied for financial reasons and were subsequently approved, at least part of the decrease in their nonhousing resources could be attributed to spending on medical or nursing home care. Of the 540 Medicaid nursing home application files we reviewed in selected counties in three states, about 75 percent of the applicants were female, most of whom were single. Over 80 percent of the applicants were already living in a long-term care facility. These individuals had been living in facilities for an average of a little over 4 months at the time of application. About 90 percent—488 applicants—had total nonhousing resources of $30,000 or less. (See fig. 3.) Eleven percent—59 applicants—did not have any nonhousing resources, while about 5 percent had total nonhousing resources of $60,000 or more. For all applicants whose files we reviewed, median nonhousing resources were $3,365. Married applicants, who made up about 21 percent of the applicants, had higher median nonhousing resources ($8,407) than single applicants. Of the single applicants, females, who made up approximately 65 percent of all applicants, had higher median nonhousing resources ($3,109) than males ($1,628), who made up about 14 percent of all applicants. Eighty-five percent of the Medicaid applicants whose files we reviewed (459 applicants) had annual incomes of $20,000 or less. The median annual income of all applicants was $11,382. (See fig. 4.) Single male applicants generally had higher annual incomes than single females.
Applicants had several different types of nonhousing resources, some of which were not counted toward determining eligibility for Medicaid coverage for nursing home care. For example, a little over half (53 percent) of all applicants whose files we reviewed had prepaid burial or funeral arrangements, with a median value of $2,614. Additionally, about 38 percent of the applicants had life insurance. Whether the burial arrangements or life insurance policies counted toward determining Medicaid eligibility depended on their type and value as well as the state in which the applicant applied. Of the 540 applicants whose files we reviewed, 137 applicants (25 percent) owned homes and 83 of the home owners (about 61 percent) were single. Based on the applications we reviewed, home ownership varied by state, with 32 percent of the applicants we reviewed in selected counties in South Carolina owning homes, compared with 28 percent and 16 percent in Pennsylvania and Maryland, respectively. For the 112 applicants in all selected counties for whom we were able to determine a value for their homes, the median value was $52,954. About 76 percent of the Medicaid applicants whose files we reviewed were approved upon initial application (408 applicants), while 23 percent (122 applicants) were denied. The majority of the approved applicants were single and female. Of the 122 applicants who were initially denied, 57 were approved upon submitting a subsequent application. Therefore, 465 applicants, or 86 percent of all applicants whose files we reviewed, were eventually approved. Figure 5 provides a breakdown of applicants by application status. Almost half of the denied applicants (56 of 122) were denied only for financial reasons—having income or resources that exceeded the standards, most having to do with resources exceeding the standards. For those applicants who were denied for having excess resources, their resources exceeded the standards by an average of $25,116; the median amount of excess resources was $13,260. Other reasons for denial included failing to provide the requested documentation, not being in a nursing home or meeting functional eligibility criteria, or a combination of two or more of these reasons. (See fig. 6.) Of the 56 applicants who were initially denied only for financial reasons, 41 (73 percent) reapplied and were later approved. The time span between their initial and subsequent applications averaged a little over 5 months and ranged from less than 1 month to 31 months. Of the 41 applicants who were initially denied only for financial reasons and were subsequently approved, their nonhousing resources generally decreased between the initial and subsequent applications, while their annual incomes stayed about the same. (See fig. 7.) Between the two applications, median nonhousing resources decreased from $22,380 to $10,463, with a maximum decrease of $283,075. For most of these applicants, the overall decrease in nonhousing resources was specifically due to a decrease in financial holdings such as checking or savings accounts, stocks, and mutual funds. For example, a married applicant initially applied and was denied for having countable resources that exceeded the state standards by $51,213. The applicant applied again just over 9 months later and had resources within the state standards. Therefore, the applicant was approved. 
Some of the files of applicants who were initially denied for financial reasons and were subsequently approved indicated that the applicants spent at least some of their resources on medical expenses or nursing home care, although this was not the case for all of them. In the files we reviewed for 13 of these applicants (32 percent), there were indications that the applicant had spent at least some of his or her resources on medical expenses, nursing home care, or both. For example, one applicant sold stock and received cash in exchange for a life insurance policy, spending about $12,150 for 3 more months of nursing home care before being approved for Medicaid. In the remaining 28 applicants’ files (68 percent), there was no indication that their resources were used for medical or nursing home care. For example, one married applicant was initially denied for having resources of $205,440 above the state’s standard. The file indicated that when the applicant reapplied and was approved about 6 months later, $140,000 of the applicant’s resources was used to purchase an annuity to create an income stream for the community spouse, which was not counted toward the applicant’s eligibility. Few of the approved applicants whose files we reviewed in selected counties in three states were found to have transferred assets for less than FMV during the 36-month look-back period, and those who did transfer assets for less than FMV rarely experienced a delay in eligibility for Medicaid coverage for nursing home care as a result. The proportion of approved applicants found to have transferred assets for less than FMV varied both within and among the three states, and the variation may be due, in part, to counties’ or states’ Medicaid application review procedures. At the time these applicants applied for Medicaid—state fiscal year 2005 or earlier—none of the three states reviewed imposed penalties for partial months, and the penalty period began at the time of the asset transfer; under these circumstances, only two of the applicants received a penalty that delayed their eligibility for Medicaid coverage for nursing home care as a result of transferring assets for less than FMV. The other applicants were either not assessed a penalty, because the penalty would have been for less than 1 month of coverage, or the penalty they were assessed had expired by the time they submitted their Medicaid application. Thus, these applicants did not experience a delay in their Medicaid coverage as a result of transferring assets for less than FMV. The total amount of assets transferred for less than FMV varied by applicant, as did the number of transfers each applicant made. In terms of the kinds of assets transferred for less than FMV, applicants most commonly transferred financial holdings such as cash or stocks, and their children or grandchildren were the most common recipients of the transfer. Of the 465 approved applicants whose files we reviewed from selected counties in three states, the files for 47 applicants (10 percent) indicated that the applicants had transferred assets for less than FMV during the 36-month look-back period. The proportion of approved applicants found to have transferred assets for less than FMV varied both within and among the states reviewed, ranging from a high of approximately 24 percent of approved applicants in Orangeburg County, South Carolina, to a low of approximately 4 percent in Allegheny County, Pennsylvania (see table 6). 
The variation in the proportion of applicants who were identified as having transferred assets for less than FMV may be due, in part, to states’ ability to identify transfers not reported by the applicant. About half of the assets transferred for less than FMV by applicants in South Carolina were identified by the eligibility workers as opposed to being reported by an applicant. Eligibility workers in Maryland and Pennsylvania identified 9 percent and 4 percent of transfers, respectively. The approved applicants who transferred assets for less than FMV were predominately single females. Although single females accounted for 65 percent of approved applicants, they accounted for over 78 percent of the approved applicants who transferred assets for less than FMV. (See fig. 8.) Additionally, 89 percent of approved applicants who transferred assets for less than FMV resided in a long-term care facility before applying for Medicaid. These individuals were in the facility for an average of over 5 months before they applied for Medicaid coverage. Approved applicants who transferred assets for less than FMV were better off financially (i.e., they had higher income and resources), even after excluding the amount transferred, compared with the universe of approved applicants. For example, approved applicants who transferred assets had higher median nonhousing resources ($8,138) compared with all approved applicants ($2,940). (See fig. 9.) Transfers for less than FMV rarely led to delays in eligibility for Medicaid coverage for nursing home care, as most applicants’ assessed penalty periods expired before they applied for Medicaid. Among the 47 approved applicants who transferred assets for less than FMV, the length of the penalty period assessed averaged about 6 months, with a median penalty period of 2 months. (See fig. 10.) At the time these applicants applied for Medicaid (state fiscal year 2005 or earlier), the three states in which we reviewed applications did not assess penalties for partial months; that is, the length of penalties assessed was rounded down to the closest whole month. As a result, 9 of the 47 approved applicants who transferred assets for less than FMV (about 19 percent) were not assessed a penalty because they transferred assets valued at less than the cost of a month of nursing home coverage for a private-pay patient in their state. Furthermore, because penalty periods began at approximately the date of the asset transfer, 36 applicants’ penalty periods expired prior to the submission of their application for Medicaid coverage for nursing home care. Thus, only 2 applicants experienced delays in Medicaid coverage resulting from their transfers of assets for less than FMV; the delays were for 1 and 6 months, respectively. Among those who transferred assets for less than FMV, the total amount of the assets transferred varied, with a median amount of $15,152. The applicant with the lowest total transfer amount made a onetime cash gift of $1,000 to her child, while the applicant with the highest total transfer amount used funds from a trust established for her care to buy and resell property. Since the trust fund should have only been used for the applicant’s care, the use of the funds to pay real estate fees, which totaled $201,516, was considered a transfer of assets for less than FMV. Figure 11 shows the distribution of the amounts of transfers for less than FMV per approved applicant. 
Nearly half of the applicants who transferred assets for less than FMV (22 of 47) transferred $10,000 or less; 10 of the 22 applicants transferred $5,000 or less. In contrast, 6 of the 47 applicants (about 13 percent) transferred more than $80,000 in assets. The number of transfers for less than FMV made by applicants also varied, averaging slightly over two transfers per applicant. Specifically, 23 applicants made a single transfer and 1 applicant made eight transfers (see fig. 12). The eight transfers spanned a 1½-year period and ranged from an over $4,000 cash gift to a grandchild to a stock transaction in which the applicant gave a relative over $33,000 of her stock. The majority of asset transfers for less than FMV (approximately 84 percent) involved the transferring of financial holdings such as cash or stocks. However, the types of assets transferred varied by state (see table 7). This variation may be related, in part, to differences in counties’ or states’ Medicaid application review procedures. Specifically, based on our review of the files, county officials in South Carolina conducted searches of real property tax databases, which likely allowed South Carolina eligibility workers to identify property transfers that were not reported by the applicant. For example, a South Carolina applicant was penalized because the eligibility worker identified that the applicant had transferred property for less than FMV—a house valued at $84,700 to her son for $5. In contrast, although Maryland eligibility workers could search the state’s property tax records, state officials told us that workers’ searching abilities were limited because they needed to know the county and street name of the property. As a result, it likely would be difficult for Maryland eligibility workers to identify unreported transfers of property. Applicants most frequently transferred assets to their children and grandchildren. Approximately 47 percent of transferred assets were given to children or grandchildren, 15 percent were given to other relatives, and 38 percent were given to other individuals. The extent to which some DRA long-term care provisions may affect applicants’ eligibility for Medicaid coverage for long-term care is uncertain. Our review of a sample of Medicaid applications indicated that the DRA penalty period provisions could increase the likelihood that individuals who transfer assets for less than FMV on or after the date of enactment will experience a delay in eligibility for Medicaid coverage for long-term care. However, the extent of the delay is uncertain. The effects on eligibility of other DRA provisions—specifically those related to annuities, home equity, the allocation of assets to community spouses, and life estates—may be limited because they only apply to a few applicants, affect applicants in some states but not in others, or both. The DRA requires states to change when a penalty period is applied and how it is calculated. First, the DRA changes the beginning date of a penalty period from approximately the date of the transfer—which could precede the date of a Medicaid application by days, months, or years—to the later of (1) generally the first day of a month during or after which an asset has been transferred for less than FMV or (2) the date on which the individual is eligible for Medicaid and would otherwise be receiving coverage for long-term care services, were it not for ineligibility due to the imposition of the penalty period. 
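The following sketch contrasts how a hypothetical transfer would play out before and after these DRA penalty-period changes, using the penalty formula sketched earlier, the pre-DRA practice of rounding penalties down to whole months, and the DRA rules on the penalty start date and fractional months discussed here and below. All figures are illustrative assumptions.

```python
# Illustrative comparison of pre-DRA and DRA penalty treatment for a single
# hypothetical transfer. The $12,500 transfer, the $5,000 average monthly
# private-pay rate, and the 10-month gap between the transfer and the Medicaid
# application are assumptions for illustration only.

MONTHLY_PRIVATE_PAY_RATE = 5_000  # hypothetical state average

def penalty_months(amount_transferred, count_partial_months):
    months = amount_transferred / MONTHLY_PRIVATE_PAY_RATE
    return months if count_partial_months else int(months)

transfer_amount = 12_500
months_between_transfer_and_application = 10

# Pre-DRA (as applied in the states reviewed): the penalty ran from roughly
# the transfer date and partial months were dropped, so a penalty could expire
# before the individual ever applied for Medicaid.
pre_dra_penalty = penalty_months(transfer_amount, count_partial_months=False)   # 2 months
pre_dra_delay = max(0, pre_dra_penalty - months_between_transfer_and_application)

# DRA: partial months count, and the penalty begins no earlier than the date
# the applicant would otherwise be eligible for coverage (here, at application).
dra_penalty = penalty_months(transfer_amount, count_partial_months=True)        # 2.5 months
dra_delay = dra_penalty

print(f"Pre-DRA delay in coverage: {pre_dra_delay} months")  # 0 (penalty already expired)
print(f"DRA delay in coverage: {dra_delay} months")          # 2.5
```

This mirrors the pattern reported above: under the pre-DRA rules most assessed penalty periods had expired before application, whereas under the DRA rules the same transfer would delay coverage, and for a somewhat longer period.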
All applicants who transfer assets for less than FMV during the look-back period on or after February 8, 2006 (the date the DRA was enacted) will experience a delay in eligibility for Medicaid coverage for long-term care, whereas before that date, some applicants’ penalty periods expired before they applied for Medicaid coverage. Second, regarding the calculation of the penalty period, the DRA prohibits states from “rounding down” or disregarding fractional periods of ineligibility when determining the penalty period. This provision could result in longer penalty periods for some applicants. (See fig. 13, which illustrates the potential effects of the DRA penalty period provisions.) If these DRA penalty period provisions had been in effect for the applicants whose files we reviewed, all 47 approved applicants who transferred assets for less than FMV would have experienced a delay in Medicaid coverage, compared with only 2 who actually experienced a delay. Additionally, the penalty period would have been longer for 45 of the 47 approved applicants. The increase in the penalty period would have ranged from less than 1 day to almost 6 months, with a median increase of about 2½ weeks. As a result, the median delay in eligibility would have been approximately 3 months and ranged from about 1 week to over 47 months. An increase in the number of applicants whose eligibility is delayed may be mitigated by two factors. First, states may see an increase in the number of approved applicants seeking to waive their penalty periods because they would create an undue hardship—that is, the application of the penalty would deprive the applicants of (1) medical care, such that their health or life would be endangered, or (2) food, clothing, shelter, or other necessities of life. Officials from the three states in which we reviewed applications commented that they received few undue hardship requests prior to the DRA but expected to see an increase in requests as the DRA provisions are implemented. Second, the extent to which individuals are subject to penalty periods may change if individuals make different decisions about transferring assets as a result of the DRA. The effects on eligibility for Medicaid coverage for long-term care of other DRA provisions may be limited. This is primarily because few Medicaid applicants appear to have resources that are specifically addressed by the DRA, namely annuities, home equity of more than $500,000, or life estates. Additionally, the provision on allocating income and resources to the community spouse will only affect married applicants in certain states, thus limiting the effects that the DRA might have on eligibility. Annuities. The DRA added requirements for states regarding the treatment of annuities. A state must treat the purchase of an annuity as a transfer for less than FMV unless certain conditions, such as a requirement that the state be named as a remainder beneficiary, are met. However, the effect of this provision may be limited because few Medicaid applicants appear to have annuities. We found that 3 percent of the approved applicants (14 of 465) whose application files we reviewed owned an annuity. These 14 applicants’ annuities would have been considered transfers for less than FMV under the DRA because they did not name the state as a remainder beneficiary, had a balloon payment, or both.
While the incidence of annuities among Medicaid beneficiaries is not nationally known, a January 2005 study undertaken at the request of CMS estimated that, among the five states examined, the percentage of Medicaid long-term care beneficiaries who had an annuity ranged from less than 1 percent in two states to more than 3 percent in one state. Home Equity. Under the DRA, certain individuals with home equity greater than $500,000 are not eligible for Medicaid payment for long-term care, including nursing home care. The effect of this provision may be limited because it appears that few individuals who apply for Medicaid coverage for nursing home care have homes valued at more than $500,000. For example, 23 percent of the 465 approved Medicaid nursing home applicants whose files we reviewed owned homes. Of the homes for which we could determine values, the median value was $57,600. Only one approved applicant owned a home valued at more than $500,000. Although we do not know this applicant’s equity interest in the home, the applicant would not have been subject to the DRA home equity provision, since the applicant’s spouse lived in the home. Additionally, our review of 2004 HRS data indicated that no elderly nursing home residents owned a home valued at more than $500,000. Life Estates. The DRA requires states to treat the purchase of certain life estates as a transfer of assets for less than FMV unless the purchaser (the applicant) lived in the house for at least 1 year after the date of purchase. The effect of this provision may be limited because we found that few approved Medicaid nursing home applicants whose files we reviewed had life estates. Specifically, the proportion of approved applicants who owned life estates ranged from zero in Pennsylvania to 2 percent in South Carolina. Income First. The DRA’s income-first provision has the potential to affect married applicants in states that did not already use the income-first methodology. Under the income-first methodology, the difference between a community spouse’s income and his or her minimum needs allowance is made up by transferring income from the institutionalized spouse. According to CMS, approximately half of all states did not use the income-first methodology before the passage of the DRA. Of the three states we reviewed, only Pennsylvania will be affected by this provision. Among approved applicants in Pennsylvania, 6 of the 42 married applicants whose files we reviewed would have been affected by this change because these applicants had retained resources in excess of the standards to create income streams for their community spouses. Specifically, they created annuities for the community spouses with values ranging from $7,372 to $77,531, with a median value of $39,912. Pennsylvania officials told us that almost all institutionalized spouses in their state have enough income to supplement the income needs of their community spouses. As a result, under the DRA, applicants would not be allowed to retain resources in excess of the standards as they had previously through the creation of annuities. Rather, resources in excess of those allowed by the Medicaid program would need to be reduced in order for the institutionalized spouse to be eligible for Medicaid. We provided copies of a draft of this report to CMS and the three states in which we reviewed Medicaid nursing home application files: Maryland, Pennsylvania, and South Carolina. We received written comments from CMS (see app. II) and South Carolina (see app. III).
Maryland provided comments via e-mail, while Pennsylvania did not comment on the draft report. In its written comments, CMS generally agreed with our findings, but noted the limited number of states in which we reviewed applications and that the study was completed before the effects of the DRA could be assessed. We agree that the actual effects of the DRA are not yet known. However, our findings based on applications submitted prior to the implementation of the DRA provide insight into what its effects may be. CMS also commented that the DRA will be working as Congress intended if applicants experience delays in Medicaid eligibility as a result of transferring assets for less than FMV. Maryland and South Carolina generally agreed with our findings. In addition, Maryland emphasized the difficulties faced by Maryland eligibility workers in identifying unreported transfers of assets due to their limited ability to search the state’s property tax records. South Carolina highlighted our finding that 15.6 percent of the approved applicants whose files we reviewed in South Carolina were found to have transferred assets for less than FMV, as compared to 10.4 percent and 5.4 percent in the other two selected states. The state attributed this difference to the effectiveness of South Carolina’s eligibility process and its training of eligibility workers to enable them to identify transfers of assets not reported by an applicant. In response to our finding that only 2 of the 47 approved applicants who transferred assets for less than FMV experienced a delay in Medicaid eligibility as a result of transferring assets, South Carolina recommended that we clarify that this occurred despite the fact that the states were adhering to federal requirements. We did not make a change, as we believe the report clearly states why the other applicants did not experience a delay in Medicaid eligibility. Technical comments from CMS were incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Administrator of the Centers for Medicare & Medicaid Services. We will also provide copies to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7118 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To examine the financial characteristics of elderly nursing home residents nationwide, including the extent to which they transferred cash, we analyzed data from the Health and Retirement Study (HRS). HRS is a longitudinal national panel survey of individuals over age 50 sponsored by the National Institute on Aging and conducted by the University of Michigan. HRS includes individuals who were not institutionalized at the time of the initial interview and tracks these individuals over time, regardless of whether they enter an institution. Researchers conducted the initial interviews in 1992 in respondents’ homes and follow-up interviews over the telephone every second year thereafter.
HRS questions pertain to physical and mental health status, insurance coverage, financial status, family support systems, employment status, and retirement planning. For this analysis, we used HRS data from 1992 to 2004. We limited our analysis to elderly nursing home residents who had been surveyed at least once before they entered a nursing home. We defined an elderly individual as anyone 65 years of age or older. On the basis of individuals’ answers on HRS, we defined a nursing home resident as anyone who met one of the following three criteria: 1. answered “yes” to permanently living in a nursing home; 2. answered “no” to permanently living in a nursing home but spent more than 360 nights in a nursing home; or 3. answered “no” to permanently living in a nursing home but spent 180 to 360 days in one and a. died in a later survey period; b. had three or more limitations in activities of daily living (ADL); or c. had cancer, lung disease, or heart disease and some difficulty (rating of three or more) with mobility. We used the HRS data from the 1,296 individuals who met these criteria; this sample represented a population of 4,217,795 individuals. From these data, we estimated the financial characteristics of elderly nursing home residents as well as the percentage of residents who transferred cash or deeds to their homes, the amount transferred, and whether it varied by how they paid for their care (i.e., Medicaid-covered or non-Medicaid-covered). This analysis underestimates the percentage of elderly households that transferred assets and the amount of assets transferred because HRS data included only transfers of cash and deeds to the home. Additionally, HRS does not assess whether the transfers relate to individuals’ attempts to qualify for Medicaid coverage for nursing home services. In order to assess the reliability of the HRS data, we reviewed related documentation regarding the survey and its methods of administration. We also conducted electronic data tests to determine whether there were missing data or obvious errors. On the basis of this review, we determined that the data were sufficiently reliable for our purposes. To analyze the demographic and financial characteristics of elderly individuals who applied for Medicaid coverage for nursing homes and whether they applied more than once, as well as the extent to which they transferred assets for less than fair market value (FMV) and were subject to penalty periods, we reviewed Medicaid eligibility determination practices and Medicaid nursing home application files in three states. To select states, we assessed the ranking of five factors for each of the 51 states. 1. The percentage of the population aged 65 and over, which we determined using 2000 census data from the U.S. Census Bureau. 2. The cost of a nursing home stay for a private room for a private-pay patient based on data from a 2004 survey conducted for the MetLife Company. 3. The proportion of elderly (aged 65 and over) with incomes at or above 250 percent of the U.S. poverty level, which was based on information from the U.S. Census Bureau using the 2000 and 2002 Current Population Surveys. 4. The extent of Medicaid nursing home expenditures as reported by states to the Centers for Medicare & Medicaid Services (CMS). 5. The availability of legal services specifically to meet the needs of the elderly and disabled, based on membership data from the National Academy of Elder Law Attorneys.
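A minimal sketch of the residency classification described above, assuming each respondent’s survey answers are available as a simple record; the field names are illustrative, not actual HRS variable names.

```python
def is_nursing_home_resident(resp):
    """Apply the three HRS-based residency criteria to one respondent.

    `resp` is assumed to be a dict of survey answers with illustrative keys.
    """
    if resp["permanent_nh_resident"]:                      # criterion 1
        return True
    nights = resp["nights_in_nursing_home"]
    if nights > 360:                                        # criterion 2
        return True
    if 180 <= nights <= 360:                                # criterion 3
        if resp["died_in_later_wave"]:                      # 3a
            return True
        if resp["adl_limitations"] >= 3:                    # 3b
            return True
        if (resp["has_cancer_lung_or_heart_disease"]
                and resp["mobility_difficulty"] >= 3):      # 3c
            return True
    return False

# Hypothetical respondent: not a permanent resident, 200 nights, 3 ADL limits.
example = {"permanent_nh_resident": False, "nights_in_nursing_home": 200,
           "died_in_later_wave": False, "adl_limitations": 3,
           "has_cancer_lung_or_heart_disease": False, "mobility_difficulty": 1}
print(is_nursing_home_resident(example))  # True, via criterion 3b
```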
For each factor, we ranked the states from low to high (1 to 51) and then summed the five rankings for each state. On the basis of these sums, we grouped the states into three clusters (low, medium, and high), using natural breaks in the data as parameters (see table 8). We judgmentally selected one state from each cluster. In making this selection, we excluded some states, such as states that did not have the technical ability to generate the data needed to select Medicaid nursing home application files for review. The states we selected were South Carolina (low), Maryland (medium), and Pennsylvania (high). To choose counties in our selected states, we considered four factors. 1. Number of individuals aged 65 and over who applied for, or were enrolled in, Medicaid coverage for nursing home services. 2. Number of licensed nursing home beds. 3. Population aged 65 and over. 4. Median and range of household income. For the first three factors, we ranked the counties within each selected state from high to low. Separately, we ranked the counties by median household income and split them into low, medium, and high groups, using natural breaks in the data as parameters. Of the counties that appeared in the top 10 ranking of each of the first three factors, we matched them with their respective median household income groups. Based on this assessment, we chose a county from each median household income group for each of the three states (see table 9). We reviewed a total of 180 nursing home application files in each selected state, for a total of 540 files. Within each selected state, we based the number of application files reviewed in each county on the proportion of the county’s population of individuals aged 65 and over. (See table 10.) Each selected state sent us a list of individuals aged 65 or over who submitted an application for Medicaid coverage for nursing home care during state fiscal year 2005. These lists also included individuals who applied in previous years but whose files had activity during fiscal year 2005. For example, an individual may have applied in state fiscal year 2004, but had his or her application approved in state fiscal year 2005. From the lists provided by the states, we randomly selected application files by unique identifying numbers. In order to compensate for application files that would need to be skipped because they did not meet our criteria or lacked adequate information, we requested additional files (10 to 15 percent) in each county. Therefore, when we determined that an application file was unusable, we included the next application file on our randomly generated list. We established a file review protocol whereby we reviewed and recorded the earliest Medicaid application for nursing home services in each file regardless of the date of the application. If the earliest application was denied, then we recorded data from that application as well as data from the earliest subsequently approved application, if there was one. From each application, we collected and analyzed data on the applicants’ demographic characteristics, income, nonhousing resources, and home value. We also collected and analyzed data on the number of applicants who transferred assets for less than FMV and the amount they transferred. Since the selected counties used the information in these application files to determine eligibility for Medicaid coverage for nursing home services, we did not independently verify the accuracy of the information contained in the files. 
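A minimal sketch of this ranking-and-clustering step, assuming the factor values are available in a pandas table; the column names and cut points are placeholders rather than values from our analysis.

```python
import pandas as pd

def rank_and_cluster(states: pd.DataFrame, factor_columns, breaks):
    """Rank states 1-51 on each factor, sum the ranks, and group into clusters.

    `states` is assumed to have one row per state and one column per factor;
    `breaks` are two cut points (from natural breaks in the summed ranks)
    separating the low, medium, and high clusters.
    """
    ranked = states.copy()
    for col in factor_columns:
        ranked[col + "_rank"] = ranked[col].rank(method="min")  # 1 = lowest
    rank_cols = [c + "_rank" for c in factor_columns]
    ranked["rank_sum"] = ranked[rank_cols].sum(axis=1)
    low_cut, high_cut = breaks
    ranked["cluster"] = pd.cut(
        ranked["rank_sum"],
        bins=[-float("inf"), low_cut, high_cut, float("inf")],
        labels=["low", "medium", "high"])
    return ranked[["rank_sum", "cluster"]]

# Placeholder usage: five factor columns and two illustrative cut points.
# factors = ["pct_65_plus", "nh_cost", "pct_elderly_above_250_poverty",
#            "medicaid_nh_spending", "elder_law_members"]
# clusters = rank_and_cluster(state_table, factors, breaks=(85, 170))
```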
However, to ensure that the information we entered into our data collection instrument was consistent with the information found in the application files, we conducted independent file verifications, which resulted in a total verification of at least 20 percent of entries. Additionally, we conducted electronic tests of the data collected to determine whether there were missing data or obvious errors. In some cases, we combined variables to create new ones. For example, we collected and identified several types of applicant resources but ultimately combined them into two categories—housing and nonhousing resources. Based on these procedures, we determined that the data were sufficiently reliable. Moreover, these data can be generalized to the individual county level but cannot be generalized to the state or national level. To assess the potential effect of provisions of the DRA, we used (1) HRS data and (2) data from our application file reviews. Specifically, we used 2004 HRS data to identify the number of elderly individuals in nursing homes who had houses in excess of $500,000 and could be affected by the DRA home equity provision. Additionally, we used the data from our review of Medicaid application files in three counties in each of the three states to analyze the potential effects of the DRA provisions pertaining to penalty periods, annuities, home equity, and income-first. We performed our work from October 2005 through January 2007 in accordance with generally accepted government auditing standards. In addition to the contact named above Carolyn Yocom, Assistant Director; Kaycee Misiewicz Glavich; Grace Materon; Kevin Milne; Elizabeth T. Morrison; Daniel Ries; Michelle Rosenberg; Laurie Fletcher Thurber; and Suzanne M. Worth made key contributions to this report.
The Medicaid program paid for nearly one-half of the nation's total long-term care expenditures in 2004. To be eligible for Medicaid long-term care, individuals may transfer assets (income and resources) to others to ensure that their assets fall below certain limits. Individuals who make transfers for less than fair market value (FMV) can be subject to a penalty that may delay Medicaid coverage. The Deficit Reduction Act of 2005 (DRA) changed the calculation and timing of the penalty period and set requirements for the treatment of certain types of assets. GAO was asked to provide data on the extent to which asset transfers for less than FMV occur. GAO examined (1) the financial characteristics of elderly nursing home residents nationwide, (2) the demographic and financial characteristics of a sample of Medicaid nursing home applicants, (3) the extent to which these applicants transferred assets for less than FMV, and (4) the potential effects of the DRA provisions related to Medicaid eligibility for long-term care. GAO analyzed data from the Health and Retirement Study (HRS), a national panel survey, and from 540 randomly selected Medicaid nursing home application files from 3 counties in each of 3 states (Maryland, Pennsylvania, and South Carolina). State and county selections were based on the prevalence of several factors, including population, income, and demographics. Nationwide, HRS data showed that, at the time most elderly individuals entered a nursing home, they had nonhousing resources of $70,000 or less--less than the average cost for a year of private-pay nursing home care. Overall, nursing home residents covered by Medicaid had fewer nonhousing resources and lower annual incomes, and were less likely to have reported transferring cash than non-Medicaid-covered nursing home residents. Similar to the nationwide results, GAO's review of 540 Medicaid nursing home applications in three states showed that over 90 percent of the applicants had nonhousing resources of $30,000 or less and 85 percent had annual incomes of $20,000 or less. One-fourth of applicants owned homes, with a median home value of $52,954. Over 80 percent of applicants had been living in long-term care facilities for an average of a little over 4 months at the time of their application. Of the 540 applicants, 408 were approved for Medicaid coverage for nursing home services the first time they applied and 122 were denied. Of the denied applicants, 56 were denied for having income or resources that exceeded the standards, 41 of whom submitted subsequent applications and were eventually approved, primarily by decreasing the value of their nonhousing resources. For about one-third of these applicants, at least part of the decrease in nonhousing resources could be attributed to spending on medical or nursing home care. Approximately 10 percent of approved applicants in the three states (47 of 465) transferred assets for less than FMV, with a median amount of $15,152. The average length of the penalty period assessed for the 47 applicants was about 6 months. However, only 2 of these applicants experienced a delay in Medicaid eligibility as a result of the transfers because many applicants' assessed penalties had expired by the time they applied for coverage. The extent to which DRA long-term care provisions will affect applicants' eligibility for Medicaid is uncertain. 
DRA provisions regarding changes to penalty periods could increase the likelihood that applicants who transfer assets for less than FMV will experience a delay in Medicaid eligibility, but the extent of the delay is uncertain. Several factors could affect the extent to which DRA penalty period provisions actually delay eligibility for Medicaid. These factors include whether an applicant transferred assets for less than FMV before or after the DRA was enacted and a potential increase in requests for waived penalty periods due to undue hardship--circumstances under which individuals are deprived of medical care, food, clothing, shelter, or other necessities of life. Other DRA provisions may have limited effects on eligibility. For example, provisions pertaining to home equity may have limited impact because few applicants whose files GAO reviewed had home equity of sufficient value to be affected. CMS, Maryland, and South Carolina generally agreed with the report's findings; Pennsylvania did not provide comments.
Individuals who are eligible for Medicare automatically receive Hospital Insurance (HI), known as part A, which helps pay for inpatient hospital, skilled nursing facility, hospice, and certain home health care services. Beneficiaries pay no premium for this coverage but are liable for required deductibles, coinsurance, and copayment amounts. (See table 1.) Medicare-eligible beneficiaries may elect to purchase Supplementary Medical Insurance (SMI), known as part B, which helps pay for selected physician, outpatient hospital, laboratory, and other services. Beneficiaries must pay a premium for part B coverage, currently $50 per month. Beneficiaries are also responsible for part B deductibles, coinsurance, and copayments. Most Medicare beneficiaries have some type of supplemental coverage to help pay for Medicare cost-sharing requirements as well as some benefits not covered by Medicare. They obtain this coverage through employers, Medicare+Choice plans, state Medicaid programs, or Medigap policies sold by private insurers. About one-third of Medicare’s 39 million beneficiaries have employer-sponsored supplemental coverage. These benefits typically pay for some or all of the costs not covered by Medicare, such as coinsurance, deductibles, and prescription drugs. However, many beneficiaries do not have access to employer-sponsored coverage. A recent survey found that more than 70 percent of large employers with at least 500 employees did not offer these health benefits to Medicare-eligible retirees. Small employers are even less likely to offer retiree health benefits. Approximately 15 percent of Medicare beneficiaries enroll in Medicare+Choice plans, which include health maintenance organizations and other private insurers who are paid a set amount each month to provide all Medicare-covered services. These plans typically offer lower cost-sharing requirements and additional benefits compared to Medicare’s traditional fee-for-service program, in exchange for a restricted choice of providers. However, Medicare+Choice plans are not available in all parts of the country. As of February 2001, about a third of all beneficiaries lived in counties where no Medicare+Choice plans were offered. About 17 percent of Medicare beneficiaries receive assistance from Medicaid, the federal-state health financing program for low-income aged and disabled individuals. All Medicare beneficiaries with incomes below the federal poverty level can have their Medicare premiums and cost sharing paid for by Medicaid. Beneficiaries with incomes slightly above the poverty level may have all or part of their Medicare premium paid for by Medicaid. Also, some low-income individuals may be entitled to full Medicaid benefits (so-called “dual eligibles”), which include coverage for certain services not available through Medicare, such as outpatient prescription drugs. However, the income level at which beneficiaries qualify for full Medicaid benefits varies, as determined by each state, and many Medicare beneficiaries with low incomes may not qualify. Medigap is the only supplemental coverage option available to all beneficiaries when they initially enroll in Medicare at age 65 or older. Medigap policies are offered by private insurance companies in accordance with state and federal insurance regulations. In 1999, more than 10 million individuals—more than one-fourth of all beneficiaries—were covered by Medigap policies.
The Omnibus Budget Reconciliation Act (OBRA) of 1990 required that Medigap policies be standardized and allowed a maximum of 10 different benefit packages, offering varying levels of supplemental coverage, to be provided. All policies sold since July 31, 1992, have offered one of the 10 standardized packages, known as plans A through J. (See table 2.) Policies sold prior to this time were not required to comply with the standard benefit package requirements. Under OBRA 1990, Medicare beneficiaries are guaranteed access to Medigap policies within 6 months of enrolling in part B regardless of their health status. Subsequent laws have added guarantees for certain other beneficiaries. Beneficiaries who enroll in a Medicare+Choice plan when first becoming eligible for Medicare and then leave the plan within 1 year are also guaranteed access to any Medigap policy; those who terminated their Medigap policy to join a Medicare+Choice plan can return to their previous policy or, if the original policy is not available, be guaranteed access to plans A, B, C, or F. Also, individuals whose employers eliminate retiree benefits or whose Medicare+Choice plans leave the program or stop serving their areas are guaranteed access to these 4 standardized Medigap policies. However, none of these 4 guaranteed policies include prescription drug coverage. Otherwise, insurers can either deny coverage or charge higher premiums to beneficiaries who are older or in poorer health. Medicare’s design has changed little since its inception 35 years ago, and in many ways has not kept pace with changing health care needs and private sector insurance practices. Medicare cost-sharing requirements are not well designed to discourage unnecessary use of services. At the same time, they can create financial barriers to care. In addition, the lack of a cost-sharing limit can leave some beneficiaries with extensive health care needs liable for very large Medicare expenses. Moreover, gaps in Medicare’s benefit package can contribute to substantial financial burdens on beneficiaries who lack supplemental insurance or Medicaid coverage. Health insurers commonly design cost-sharing provisions—in the form of deductibles, coinsurance, and copayments—to ensure that beneficiaries are aware there is a cost associated with the provision of services and to encourage them to use services prudently. Ideally, cost sharing should encourage beneficiaries to evaluate the need for discretionary care but not discourage necessary care. Optimal cost-sharing designs would generally require coinsurance or copayments for services that may be discretionary and could potentially be overused, and would also aim to steer patients to lower cost or better treatment options. Care must be taken, however, to avoid setting cost-sharing amounts so high as to create financial barriers to necessary care. The benefit packages of Medicare+Choice plans illustrate cost-sharing arrangements that have been designed to reinforce cost containment and treatment goals. Most Medicare+Choice plans charge a small copayment for physician visits ($10 or less) and emergency room services (less than $50). Relatively few Medicare+Choice plans charge copayments for hospital admissions. Plans that offer prescription drug benefits typically design cost-sharing provisions that encourage beneficiaries to use cheaper generic drugs or brand name drugs for which the plan has negotiated a discount.
Medicare fee-for-service cost-sharing rules diverge from these common insurance industry practices in important ways. For example, as indicated in table 1, Medicare imposes a relatively high deductible for hospital admissions, which are rarely optional. In contrast, Medicare requires no cost sharing for home health care services, even though historically high utilization growth and wide geographic disparities in the use of such services have raised concerns about the potentially discretionary nature of some services. Medicare also has not increased the part B deductible since 1991. For the last 10 years, the deductible has remained constant at $100 and has thus steadily decreased as a proportion of beneficiaries’ real income. Also unlike most employer-sponsored plans for active workers, Medicare does not limit beneficiaries’ cost-sharing liability, which can represent a significant share of their personal resources. Premiums, deductibles, coinsurance, and copayments that beneficiaries are required to pay for services that Medicare covers equaled an estimated 23 percent of total Medicare expenditures in 2000. The average beneficiary who obtained services in 1997 had a total liability of $1,451, consisting of $925 in Medicare copayments and deductibles in addition to the $526 in annual part B premiums required that year. The burden of Medicare cost sharing can be much higher, however, for beneficiaries with extensive health care needs. In 1997, the most current year of available data on the distribution of these costs, slightly more than 3.4 million beneficiaries (11.4 percent of beneficiaries who obtained services) were liable for more than $2,000. Approximately 750,000 of these beneficiaries (2.5 percent) were liable for more than $5,000, and about 173,000 beneficiaries (0.6 percent) were liable for more than $10,000. In contrast, private employer-sponsored health plans typically limit maximum annual out-of-pocket costs for covered services to less than $2,000 per year for single coverage. Medicare does not cover some services that are commonly included in private insurers’ benefit packages. The most notable omission in Medicare’s benefit package is coverage for outpatient prescription drugs. This benefit is available to most active workers enrolled in employer-sponsored plans. More than 95 percent of private employer-sponsored health plans for active workers cover prescription drugs, typically providing comprehensive coverage with relatively low cost-sharing requirements. Current estimates suggest that the combination of Medicare’s cost-sharing requirements and limited benefits leaves about 45 percent of beneficiaries’ health care costs uncovered. The average beneficiary in 2000 is estimated to have incurred about $3,100 in out-of-pocket expenses for health care—an amount equal to about 22 percent of the average beneficiary’s income. Some beneficiaries potentially face much greater financial burdens for health care expenses. For example, elderly beneficiaries in poor health and with no Medicaid or supplemental insurance coverage are estimated to have spent 44 percent of their incomes on health care in 2000. Low-income single women over age 85 in poor health and not covered by Medicaid are estimated to have spent more than half (about 52 percent) of their incomes on health care services. These percentages are expected to increase over time as Medicare premiums and costs for prescription drugs and other health care goods and services rise faster than incomes.
While more than one-fourth of beneficiaries have Medigap policies to fill Medicare coverage gaps, these policies can be expensive and provide only limited protection from catastrophic expenses. Medigap drug coverage in particular offers only limited protection because of high cost sharing and low coverage caps. More than 10 million Medicare beneficiaries have Medigap policies to cover some potentially high costs that Medicare does not pay, including cost-sharing requirements, extended hospitalizations, and some prescription drug expenses. The choice among standardized plans allows beneficiaries to match their coverage needs and financial resources with plan coverage. Medigap policies are widely available to beneficiaries, including those who are not eligible for or do not have access to other insurance to supplement Medicare, such as Medicaid or employer-sponsored retiree benefits. In fact, most Medicare beneficiaries who do not otherwise have employer-sponsored supplemental coverage, Medicaid, or Medicare+Choice plans purchase a Medigap policy, demonstrating the value of this coverage to the Medicare population. Medigap policies can be expensive. The average annual Medigap premium was more than $1,300 in 1999. Premiums varied widely based on the level of coverage purchased. Plan A, which provides the fewest benefits, was the least expensive, with average premiums paid of nearly $900 per year. The most popular plans—C and F—had average premiums paid of about $1,200. The most comprehensive plans—I and J—were the most expensive, with average premiums around $1,700. (See table 3.) Premiums also vary widely across geographic areas and insurers. For example, average annual premiums in Massachusetts ($1,915) were 45 percent higher than the national average. While varying average premiums may reflect geographic differences in terms of use of Medicare and supplemental services and costs, beneficiaries in the same state may face widely varying premiums for a given plan type offered by different insurers. For example, in Nevada, plan A premiums for a 65-year-old ranged from $446 to as much as $1,004, depending on the insurer. Similarly, in Florida, plan F premiums for a 65-year-old male ranged from $1,548 to $2,123; and in Maine, plan J premiums ranged from $2,697 to $3,612. Medigap policies are becoming more expensive. One recent study reports that premiums for the three Medigap plan types offering prescription drug coverage (H, I, and J) have increased the most rapidly—by 17 to 34 percent in 2000. Medigap plans without prescription drug coverage rose by 4 to 10 percent in 2000. A major reason premiums are high is that a large share of premium dollars is used for administrative costs rather than benefits. More than 20 cents from each Medigap premium dollar is spent for costs other than medical expenses, including administration. Administrative costs are high, in part, because nearly three-quarters of policies are sold to individuals rather than groups. The share of premiums spent on benefits varies significantly among carriers. The 15 largest sellers of Medigap policies spent between 64 and 88 percent of premiums on benefits in 1999. The share of premiums spent on benefits is lower for Medigap plans than for either typical Medicare+Choice plans or health benefits for employees of large employers. Also, 98 percent of Medicare fee-for-service funds are used for benefits.
While Medigap policies cover some costs beneficiaries would otherwise pay out of pocket, Medigap policies have limits and can still leave beneficiaries exposed to significant out-of-pocket costs. Medigap prescription drug coverage in particular leaves beneficiaries exposed to substantial financial liability. Prescription drugs are of growing importance in medical treatment and one of the fastest growing components of health care costs. Medigap policies with a drug benefit are the most expensive, yet the benefit offered can be of limited value to many beneficiaries. For example, Medigap policies offering drug coverage typically cost much more than policies without drug coverage; the most popular plan with prescription drug coverage (plan J) costs on average $450 more than the most popular plan without drug coverage (plan F). Yet the drug benefit is at most $1,250 or $3,000, depending on plan type, and under the Medigap plan with the most comprehensive drug coverage, plan J, a beneficiary would have to incur $6,250 in prescription drug costs to get the full $3,000 benefit because of the plan’s deductible and coinsurance requirements (the arithmetic is worked through in the sketch below). The high cost and limited benefit may explain why more than 90 percent of beneficiaries with one of the standardized Medigap plans chose plans that do not include drug benefits. Further, Medicare beneficiaries who do not purchase Medigap policies when they initially enroll in part B at age 65 or older are not guaranteed access to the Medigap policies with prescription drug coverage in most states. Insurers may then either deny coverage or charge higher premiums, especially to Medicare beneficiaries with any adverse health conditions. The Medigap standard prescription drug benefit differs greatly from that typically offered by employer-sponsored plans for active employees or Medicare-eligible retirees. The Medigap prescription drug benefit has a $250 deductible, requires 50 percent coinsurance, and is limited to $1,250 or $3,000, depending on the plan purchased. In contrast, employer-sponsored plans typically require small copayments of $8 to $20 or coinsurance of about 20 to 25 percent, depending on whether the enrollees purchase generic brands, those for which the plan has negotiated a price discount, or other drugs. Further, few employer-sponsored health plans have separate deductibles or maximum annual benefits for prescription drugs. These plans may also offer enrollees access to discounted prices the plans have negotiated even when the beneficiary is paying the entire cost. Even though Medicare’s original design has been criticized as outmoded, it included various cost-sharing requirements intended to encourage prudent use of services. These requirements have also traditionally been features of private insurance. However, Medigap’s first-dollar coverage—the elimination of any deductibles or coinsurance associated with the use of specific services—undermines this objective. All standard Medigap plans cover hospital and physician coinsurance, while nearly all beneficiaries with standardized Medigap plans purchase plans covering the full hospital deductible, and most purchase plans covering the full skilled nursing home coinsurance and part B deductible. First-dollar coverage reduces financial barriers to health care, but it also diminishes beneficiaries’ sensitivity to costs and could thus increase unnecessary service utilization and total Medicare program costs.
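The arithmetic behind the $6,250 figure follows directly from the benefit terms described above: a $250 deductible, 50 percent coinsurance, and a $3,000 cap for plan J. A minimal sketch using only those stated parameters:

```python
def plan_j_drug_benefit(drug_costs, deductible=250.0, coinsurance=0.5, cap=3_000.0):
    """Amount the standardized Medigap plan J drug benefit pays, given the
    $250 deductible, 50 percent coinsurance, and $3,000 cap described above."""
    covered = max(0.0, drug_costs - deductible) * coinsurance
    return min(covered, cap)

# A beneficiary reaches the full $3,000 benefit only at $6,250 in drug costs:
print(plan_j_drug_benefit(6_250))   # 3000.0
print(plan_j_drug_benefit(3_000))   # 1375.0 -- well under half of total costs
```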
A substantial body of research clearly indicates that Medicare spends more on beneficiaries with supplemental insurance than on beneficiaries who have Medicare coverage only. For example, an analysis of 1993 and 1995 data found that Medicare per capita expenditures for beneficiaries with Medigap insurance were from $1,000 to $1,400 higher than for beneficiaries with Medicare only. Medicare per capita spending on beneficiaries with employer-sponsored plans was $700 to $900 higher than for beneficiaries with Medicare only. Some evidence suggests that first-dollar, or near first-dollar, coverage may be partially responsible for the higher spending. For example, one study found that beneficiaries with Medigap insurance use 28 percent more medical services (outpatient visits and inpatient hospital days) than beneficiaries who did not have supplemental insurance but were otherwise similar in terms of age, sex, income, education, and health status. Service use among beneficiaries with employer-sponsored supplemental insurance (which often reduces, but does not eliminate, cost sharing) was approximately 17 percent higher than the service use of beneficiaries with Medicare coverage only. Unlike Medigap policies, employer-sponsored supplemental insurance policies and Medicare+Choice plans typically reduce beneficiaries’ financial liabilities but do not offer first-dollar coverage. Although employer-sponsored insurance plans vary widely in design, many retain cost-sharing provisions. Medicare+Choice plans also typically require copayments for most services. Moreover, unlike the traditional fee-for-service program, Medicare+Choice plans require referrals or prior authorization for certain services to minimize unnecessary utilization.
Medicare provides valuable and extensive health care coverage for beneficiaries. Nevertheless, significant gaps leave some beneficiaries vulnerable to sizeable financial burdens from out-of-pocket expenses. Medigap is a widely available source of supplemental coverage. This testimony discusses (1) beneficiaries' potential financial liability under Medicare's current benefit structure and cost-sharing requirements, (2) the cost of Medigap policies and the extent to which they provide additional coverage, and (3) concerns that Medigap's so-called "first dollar" coverage undermines the cost control incentives of Medicare's cost-sharing requirements. GAO found that Medicare's benefits package and cost-sharing requirements leave beneficiaries liable for high out-of-pocket costs. Medigap policies pay for some or all Medicare cost-sharing requirements but do not fully protect beneficiaries from potentially significant out-of-pocket costs, such as spending on prescription drugs. Medigap first-dollar coverage undermines the ability of Medicare's cost-sharing requirements to promote prudent use of services.
In September 2011, we reported on DOD’s approach to examining itself for efficiencies, including the parameters used to guide the Secretary of Defense’s efficiency initiative. The initiative targeted both shorter and longer-term improvements in a wide range of areas across the department, including its organizational structure, business practices, and modernization programs, and instituted reductions to its personnel levels. As part of its fiscal year 2012 budget request, DOD projected savings of $178 billion to be realized over a 5-year period beginning in fiscal year 2012, as shown in table 1. Of the $178 billion in projected savings proposed by the department, $100 billion identified by the military departments and Special Operations Command was reinvested in high-priority needs and the remaining $78 billion was reduced from DOD’s budget from fiscal years 2012 through 2016. This reflects a 2.6 percent reduction from DOD’s fiscal year 2011 budget submission over the same period. Some of these savings and reductions were from headquarters-related resources, such as personnel and operating costs. DOD Instruction 5100.73 establishes a system to identify and manage the number and size of major DOD headquarters activities. As previously stated, the instruction defines major DOD headquarters activities as those headquarters whose primary mission is to manage or command the programs and operations of DOD and its components and their major military units, organizations, or agencies. Since the mid-1980s, Congress has enacted statutory limits on the number of major DOD headquarters activity personnel, to include those in the Office of the Secretary of Defense; the headquarters of the combatant commands; the Office of the Secretary of the Army and the Army Staff; the Office of the Secretary of the Air Force and the Air Staff; the Office of the Secretary of the Navy, the Office of the Chief of Naval Operations, and the Headquarters, Marine Corps; and the headquarters of the defense agencies and DOD field activities. In addition, Congress has enacted various reporting requirements related to major DOD headquarters activity personnel. As previously stated, our prior work has shown that DOD has encountered challenges both in identifying major DOD headquarters activity personnel and in reporting this information to Congress. DOD has taken some steps to examine its headquarters resources for efficiencies, but additional opportunities for cost savings may exist by further consolidating organizations and centralizing functions. For the purposes of the Secretary of Defense’s efficiency initiative, DOD components, including the military departments, were asked to focus in particular on headquarters and administrative functions, support activities, and other overhead in their portfolios. DOD’s fiscal year 2012 budget request, describing its planned efficiency initiatives for fiscal years 2012 to 2016, included several initiatives related to headquarters organizations or personnel. Two organizations, Joint Forces Command and the Business Transformation Agency, were disestablished and some of their functions were absorbed in other organizations. DOD estimated that closing these two organizations would save approximately $2.2 billion through fiscal year 2016. Table 2 provides other examples of headquarters-related efficiency initiatives DOD is implementing in the military departments and in other DOD components that we reviewed. 
See appendix II for a further description of these headquarters-related efficiency initiatives. In compiling and comparing the headquarters-related efficiency initiatives from across the department, we found that the approach taken and the level of detail differed markedly across the various DOD components. For instance, some DOD components focused on specific organizations and provided detail about planned actions, while others promised significant reductions but provided only broad descriptions of how they plan to achieve them. For example, the Navy provided detailed information on the number of positions that will be eliminated and estimated cost savings for the Navy’s merger of U.S. Fleet Forces Command and U.S. 2nd Fleet staff. In contrast, the Army planned more than $1 billion in savings by streamlining its installation management services and programs but did not specify how this will be achieved. In reviewing these headquarters-related efficiency initiatives, however, we found that they generally fell into two categories: (1) consolidating or eliminating organizations based on geographic proximity or span of control, and (2) centralizing overlapping functions and services (see figs. 2 and 3). DOD expects the headquarters-related efficiency initiatives we reviewed to save about $2.9 billion through fiscal year 2016, less than 2 percent of the $178 billion in savings DOD projected departmentwide. Our work indicates that DOD may be able to find additional efficiencies by further examining opportunities to consolidate organizations or centralize functions at headquarters. In its January 2012 strategic guidance, DOD recognized that it would need to find further efficiencies in headquarters and other overhead in order to meet the demands of the new strategy. To achieve these efficiencies, DOD could consider a number of different options, including reducing organizational layers, consolidating administrative offices, and simplifying management processes. However, the department does not have a definition of what constitutes overhead or standards for assessing headquarters resources. Given the size and complexity of the department, setting common standards would be difficult. Nonetheless, DOD officials we spoke with recognized that additional efficiencies could be achieved by further examination of headquarters resources. The following are examples of areas that officials said DOD was considering for potential efficiencies. According to Navy officials, the Deputy Under Secretary of the Navy is having ongoing discussions with Navy components and conducting analysis to identify potential future efficiencies, such as consolidating or streamlining facilities management services and functions provided at various Navy installations. Officials commented that these issues are complicated and that, as of December 2011, the estimates of the savings had not been determined. The Army is, among other things, implementing and integrating previous efforts approved by the Secretary of the Army, such as planning to optimize materiel development and sustainment by eliminating overlapping or redundant responsibilities between the Army’s program executive offices and the Army Materiel Command. The Army expects this effort to include reductions in personnel for an estimated annual savings of $3 billion by the end of fiscal year 2015. The Air Force is currently examining opportunities to provide better command and control over air and space operations centers.
As of December 2011, Air Force officials could not provide further details regarding this effort because decisions were still pending. Defense Finance and Accounting Service officials are examining travel and supplies, postage and printing, and other areas to identify additional savings, which the agency estimates at $63 million by fiscal year 2017. DOD may not have identified all areas where reductions in headquarters personnel and operating costs could be achieved because, according to DOD officials, the department was working quickly to identify savings in the fiscal year 2012 budget. To accomplish this quickly, DOD used a top-down approach that identified several targets of opportunity to reduce costs, including headquarters organizations, but left limited time for a detailed data-driven analysis. In February 2012, in DOD’s fiscal year 2013 budget request, the department proposed an additional $61 billion in savings from fiscal years 2013 to 2017 through reductions in overhead and support requirements, and improved business practices. However, it provided limited information as to what portions of these savings were specific to headquarters and how they would be achieved. Without systematic efforts to reexamine its headquarters resources on a more comprehensive basis, DOD may miss opportunities to shift resources away from overhead. DOD does not have complete and reliable major DOD headquarters activity data available for use in making efficiency assessments and decisions because the department continues to have challenges in identifying and tracking personnel and other resources devoted to headquarters. According to our internal control standards, an agency must have relevant, reliable, and timely information in order to run and control its operations. In reviewing DOD’s guiding instruction, we found that it does not identify all current major DOD headquarters activity organizations or address the tracking of contractors that perform headquarters functions. DOD officials stated that they have delayed updating the instruction to allow time for components to adjust to the statutory changes enacted by Congress in 2009 that created new reporting requirements for major DOD headquarters activity personnel. According to DOD officials, the ever-changing statutory reporting requirements have contributed to DOD’s failure to report to Congress about the numbers of headquarters personnel. As the department did not have reliable major DOD headquarters activity data, DOD gathered information from multiple sources to compile headquarters-related information for the Secretary of Defense’s 2010 efficiency initiative. Some of the information DOD compiled to identify headquarters-related efficiency initiatives was inaccurate, and as a result, some adjustments will need to be made during implementation to achieve planned savings. Without a proper accounting of headquarters personnel and operating costs, to include contractors, DOD will not have complete and reliable information on the universe of headquarters resources. Complete and reliable headquarters information will be even more important in supporting an examination of DOD resources in light of changes in DOD’s strategic priorities for the next decade. According to our internal control standards, an agency must have relevant, reliable, and timely information in order to run and control its operations. This information is required to develop external reporting and to make operating decisions, monitor performance, and allocate resources.
Moreover, we have reported that accurate, timely, and useful financial management information is essential for sound management analysis, decision making, and reporting within DOD. DOD Instruction 5100.73, Major DOD Headquarters Activities, establishes a system to identify and manage the number and size of major DOD headquarters activities. The Director of Administration and Management, within the Office of the Secretary of Defense, is responsible for issuing guidance, as required, and maintaining the list of major DOD headquarters activity organizations. However, significant revisions to the instruction have not been made since 2007, and the instruction does not identify all current major DOD headquarters activity organizations. For example, Navy officials noted that several Marine Corps components, which are parallel to Navy components in the major DOD headquarters activity functions they perform, are not included in the instruction. Also, the instruction does not reflect the component command headquarters of the Departments of Navy and Air Force at U.S. Africa Command, which were established in October 2008 and October 2009, respectively, and would likely be considered major DOD headquarters activities. Additionally, the instruction does not explicitly address how and to what extent the thousands of contractors who work at headquarters around DOD should be included as part of its major DOD headquarters activity data. DOD has increasingly relied on contractors to provide a range of services at headquarters, such as management and administrative support, information technology, and base operations support. Some of the services and functions performed by contractors could be considered part of major DOD headquarters activities. Our work over the past decade on DOD’s contracting activities has noted the need for DOD to obtain better data on its contracted services and personnel to enable it to make more informed management decisions, ensure departmentwide goals and objectives are achieved, and have the resources to achieve desired outcomes, which could include reducing overhead. In January 2011, we reported that further action was needed by DOD to better implement its requirements for conducting an inventory of its service contractor activities and made two recommendations, including that DOD develop a plan of action to collect manpower data from contractors. In response to GAO’s report, DOD has outlined its approach for collecting these data, but does not anticipate complete reporting until 2016. Under section 1109 of the National Defense Authorization Act for Fiscal Year 2010 (Pub. L. No. 111-84, §1109 (2009), codified at 10 U.S.C. §115a), DOD must report major DOD headquarters activity personnel information in its annual Defense Manpower Requirements Report, an annual report to Congress that provides DOD’s manpower requirements, to include those for military personnel and civilians, as reflected in the President’s budget request for the current fiscal year. DOD must also report the amount of any adjustment in personnel limits made by the Secretary of Defense or the secretary of a military department and, for each adjustment made pursuant to section 1111(b)(2) of the fiscal year 2009 National Defense Authorization Act, the purpose of the adjustment. DOD officials are aware of the reporting requirements and expect to report some major DOD headquarters activity data to Congress in the fiscal year 2012 Defense Manpower Requirements Report; however, it is unclear what information will be included in the report.
As the department did not have reliable major DOD headquarters activity data, DOD gathered information from multiple sources to compile headquarters-related information for the Secretary of Defense’s 2010 efficiency initiative. The military departments used existing budget review processes to identify potential efficiency initiatives for fiscal years 2012 to 2016, while the Secretary of Defense established a temporary task force, chaired by his Chief of Staff, to identify specific areas in which immediate action could be taken departmentwide, such as holding the civilian workforce at fiscal year 2010 levels. Because of the short timelines for identifying efficiencies and because DOD limited the sharing of information among personnel to prevent disclosure of the decisions, this information was not validated with the DOD officials responsible for implementing the decisions to ensure that it was accurate. As a result, some information used to identify headquarters-related efficiency initiatives was inaccurate and some adjustments in resource allocations will have to be made during implementation to achieve planned savings. Some of the implementation challenges that resulted from inaccurate information were significant, involving hundreds of millions of dollars. The most prominent example we found was an Air Force efficiency initiative to consolidate installation support services, such as environmental quality and civil engineering services, real property programs and services, vehicle and fuel management, operational contracting, security forces, and some family services, at field operating agencies and Air Force headquarters. When initially developed in July 2010 as part of the Air Force’s preparations for the fiscal year 2012 budget, the initiative was estimated to save $685 million by eliminating 1,371 positions by fiscal year 2016. However, according to an Air Force official, the initial savings estimate was developed at senior levels on an extremely short timeline and proved overly optimistic. According to Air Force officials, in December 2010, after further analysis by the Air Force staff was completed, the estimate was revised to a savings of $148.1 million by eliminating 354 positions by fiscal year 2016. Air Force officials told us that they now have to reduce operating costs or personnel from other functional areas to make up the $537 million difference in savings and the 1,017-position difference in personnel reductions estimated as part of DOD’s fiscal year 2012 budget. In other examples, we found that DOD components had overestimated the number of personnel or incorrectly identified the amount of contractor-related resources at affected organizations, potentially affecting estimated savings. With the long-term fiscal challenges facing the nation, additional efforts to find cost savings at DOD will likely be necessary. As DOD considers its future resources and the key military capabilities it will need to meet its new strategic priorities, the department will need to consider further efficiencies in overhead, such as personnel and operating costs at DOD headquarters. While DOD has taken some steps to trim its headquarters, these initial efforts were uneven across the department and modest in contrast to the defense budget. The savings DOD projected over 5 years from the headquarters reductions taken to date represent a small fraction of the defense budget over the same period.
Additional headquarters-related efficiencies may be identified by further examining opportunities to consolidate organizations or centralize functions. To ensure that appropriate levels of resources are applied to overhead, it is critical for DOD to have complete and reliable information to use to inform its decision making and prioritize its resources. Without updating its guiding instruction to ensure that it has complete and reliable data on headquarters personnel and operating costs, DOD will not have the information it needs, which could affect its efforts to direct resources toward its main priorities. We recommend that the Secretary of Defense take the following two actions. To further DOD's efforts to reduce overhead-related costs in light of the recent changes in DOD's strategic priorities, we recommend that the Secretary of Defense direct the secretaries of the military departments and the heads of the DOD components to continue to examine opportunities to consolidate or eliminate military commands that are geographically close or have similar missions, and to seek further opportunities to centralize administrative and command support services, functions, or programs. To improve DOD's ability to identify how many headquarters personnel it has, including military, civilian, and contractor personnel, and improve the information Congress and DOD need to ensure that headquarters organizations are appropriately sized and overhead positions are reduced to the extent possible, we recommend that the Secretary of Defense direct the Director of Administration and Management, in consultation with the Under Secretary of Defense for Personnel and Readiness, to revise DOD Instruction 5100.73, Major DOD Headquarters Activities, to include all major DOD headquarters activity organizations, specify how contractors performing major DOD headquarters activity functions will be identified and included in headquarters reporting, clarify how components are to compile the major DOD headquarters activities information needed to respond to the reporting requirements in section 1109 of the fiscal year 2010 National Defense Authorization Act, and establish time frames for implementing the actions above to improve tracking and reporting of headquarters resources. In written comments on a draft of this report, DOD concurred with our first recommendation and partially concurred with our second recommendation. DOD's comments are reprinted in their entirety in appendix IV. DOD also provided technical comments, which we incorporated into the report as appropriate. DOD fully concurred with our recommendation that the Secretary of Defense direct the secretaries of the military departments and the heads of the DOD components to continue to examine opportunities to consolidate or eliminate military commands that are geographically close or have similar missions, and to seek further opportunities to centralize administrative and command support services, functions, or programs. In its comments, DOD stated that it would continue to assess its organizational structure and personnel to optimize output and eliminate inefficiencies.
DOD partially agreed with our second recommendation that the Secretary of Defense direct the Director of Administration and Management, in consultation with the Under Secretary of Defense for Personnel and Readiness, to revise DOD Instruction 5100.73, Major DOD Headquarters Activities, to (1) include all major DOD headquarters activity organizations, (2) specify how contractors performing major DOD headquarters activity functions will be identified and included in headquarters reporting, (3) clarify how components are to compile the major DOD headquarters activities information needed to respond to the reporting requirements in section 1109 of the fiscal year 2010 National Defense Authorization Act, and (4) establish time frames for implementing the actions above to improve tracking and reporting of headquarters resources. In its written comments, DOD stated that it concurs with the intent of this recommendation and supports the refinement and update of DOD Instruction 5100.73, Major DOD Headquarters Activities. It then separately addressed three elements of our recommendation—including all major DOD headquarters activity organizations, reporting on contractors performing major DOD headquarters activities, and clarifying how components are to compile these data to respond to reporting requirements. With regard to including all major DOD headquarters activity organizations in the instruction, DOD stated that the department uses the designation of major DOD headquarters activities in DOD Instruction 5100.73 to identify and manage the size of organizations in order to comply with statutory limitations, not as a tool to manage the organizational efficiency of the department or its components. It further stated that shortcomings in the instruction have limited impact on the management of the department. As we noted in our report, the purpose of the instruction is to establish a system to identify and manage the number and size of major DOD headquarters activities, and the guidance does address statutory limitations. However, the instruction certainly has implications for the management of the department that extend beyond the need to comply with relevant statutory limits. For example, the instruction directs the department to take certain steps, including maintaining an approved list of major DOD headquarters activities, in order to provide a framework for implementing the DOD policy that major DOD headquarters activities shall be organized and staffed in a manner that permits the effective accomplishment of assigned responsibilities with a minimum number of personnel. Additionally, the department expressed concerns about revising the definition of major DOD headquarters activities in DOD Instruction 5100.73 because there are references to that definition in statute. However, we did not recommend that the department revise the definition. As noted by the department, section 194 of Title 10 of the United States Code sets out limitations on military and civilian personnel involved in management headquarters activities or management headquarters support activities of the defense agencies and the DOD field activities. The statute specifies that the terms “management headquarters activities” and “management headquarters support activities” are to be defined as those terms were defined in the January 7, 1985, version of DOD Directive 5100.73. 
Our recommendation is not aimed at revisions to the definition; rather, as explained in our report, the recommendation is based on the fact that the list of major DOD headquarters activities found in enclosure 4 of the instruction is outdated. As such, we disagree with the assertion that updating the guidance consistent with our recommendations would in any way threaten the "foundational basis" prescribed by Title 10 or require statutory relief. Furthermore, we note that in addition to the administrative change from directive to instruction in 2009 mentioned by the department, other revisions have been made to the guidance since 1985, including changes made in 1999 revising the way the definitions of management headquarters activities, management headquarters support activities, and other terminology are presented. With regard to specifying how contractors performing major DOD headquarters activity functions will be identified and included in headquarters reporting, DOD stated that it submitted a plan to the congressional defense committees in November 2011 for its Inventory of Contracts for Services that established both near-term and long-term actions needed to improve overall visibility and accountability of all contracted services, including those performed in support of major DOD headquarters activities. This plan and subsequent guidance issued in December 2011 describe the steps being taken to account for the level of effort of contracted support, based on the activity requiring the service. DOD also noted that aligning contract support with the requiring activity, as opposed to the contracting activity, will ensure that the department can reflect contractor full-time equivalents, based on direct labor hours collected from contractors, supporting major DOD headquarters activities. While we support DOD efforts to improve visibility and accountability of contracted services, particularly those supporting major DOD headquarters activities, as noted in our report, DOD does not anticipate complete reporting on contractor manpower data until 2016. We continue to believe that DOD should make it a priority to obtain better data on its contracted services and personnel to enable it to make more informed management decisions, ensure departmentwide goals and objectives are achieved, and have the resources to achieve desired outcomes, which could include reducing overhead. With regard to clarifying how components are to compile the information needed to respond to the major DOD headquarters activity reporting requirements in section 1109 of the fiscal year 2010 National Defense Authorization Act, DOD stated that it has incorporated this requirement into the Defense Manpower Requirements Report. The department stated that the DOD components reported aggregate civilian and military data for inclusion in the fiscal year 2012 Defense Manpower Requirements Report that will be included in the fiscal year 2013 report as well. The department also stated that a more accurate reflection of major DOD headquarters activity data is being incorporated into the annual Inherently Governmental and Commercial Activities Inventory. It further noted that the inventory guidance issued in October 2011 included the major DOD headquarters activity requirement and that the fiscal year 2012 inventory will include these data.
In its written comments, DOD stated that this revision will provide greater analytic capability for DOD function codes, manpower mix criteria, location of services, and specific unit/organization of billets designated as major DOD headquarters activities. Again, we support DOD’s efforts to include major DOD headquarters activity data in the Inherently Governmental and Commercial Activities Inventory, but note that DOD did not provide a time frame for when the fiscal year 2012 inventory would be issued. Further, while DOD noted that it will include aggregate civilian and military data in the fiscal year 2012 and fiscal year 2013 Defense Manpower Requirements Report, neither of these reports has been issued, and we are therefore unable to determine whether the data were included. Despite DOD’s concerns, we continue to believe that it is important for DOD to take actions to revise the instruction to include all major DOD headquarters activity organizations, specify how contractors will be identified and included in headquarters reporting, and clarify how components are to report this information as well as establish time frames for implementing these actions to improve tracking and reporting of headquarters resources. We are sending copies of this report to interested congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Director of Administration and Management, the Deputy Chief Management Officer, and the secretaries of the military departments. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (404) 679-1816 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. We conducted this work in response to a statutory mandate that directed us to conduct routine investigations to identify federal programs, agencies, offices, and initiatives with duplicative goals and activities within departments and governmentwide. This report evaluated the extent to which the Department of Defense (DOD) (1) examined its headquarters resources for efficiencies and (2) has complete and reliable headquarters information available for use in making efficiency decisions. To conduct this work, we selected and assessed DOD efficiency initiatives related to headquarters based on our analysis of information included with DOD’s fiscal year 2012 budget request and the Secretary of Defense’s Track Four Efficiency Initiatives Decisions memo. We used the Department of Defense Efficiency Initiatives Fiscal Year 2012 Budget Estimates justification book to select two efficiency initiatives affecting each of the military departments based on their relevancy to headquarters. Using this sample of headquarters-related efficiency initiatives, we chose to interview components responsible for implementing the selected efficiency initiatives based on the amount of savings they are responsible for achieving. We used the Secretary of Defense’s Track Four Efficiency Initiatives Decisions memo to select two combatant commands and one organization from each of the following: the Office of Secretary of Defense, the defense agencies, and the DOD field activities. 
We selected organizations rather than specific efficiency initiatives because their estimated personnel and cost savings reflected several DOD efficiency initiatives, including the defense agency, Office of Secretary of Defense, and combatant command baseline review and the service support contracts reduction. We selected the organizations based on the amount of estimated personnel cuts and savings they were responsible for achieving. The efficiency initiatives and organizations we selected are further discussed in appendix II. To assess the extent to which DOD examined its headquarters resources for efficiencies, we obtained and analyzed documentary and testimonial evidence on selected headquarters-related efficiency initiatives announced by DOD, including the analysis conducted to identify headquarters-related resources and the approach taken to develop headquarters-related efficiency initiatives. To assess the extent to which DOD has complete and reliable headquarters information available for use in making efficiency decisions, we obtained and analyzed documentary and testimonial evidence from DOD components detailing the policies and procedures, as well as roles and responsibilities, for tracking and reporting headquarters personnel and operating costs, such as DOD Instruction 5100.73, Major DOD Headquarters Activities. We also obtained and analyzed documentary and testimonial evidence on the processes and data DOD components used to identify their headquarters-related resources when developing selected headquarters-related efficiencies. In addition to conducting interviews with the components responsible for executing selected efficiency initiatives, we collected documentary and testimonial evidence from the military departments' deputy chief management offices, financial management and budget offices, and other DOD components that were involved in developing the efficiency initiatives directed by the Secretary of Defense and included as part of DOD's fiscal year 2012 budget request. We interviewed officials, and where appropriate obtained documentation, at the organizations listed below:
Office of Secretary of Defense
Office of the Director of Cost Assessment and Program Evaluation
Office of the Under Secretary of Defense for Policy
Office of the Director of Administration and Management
Office of the Under Secretary of Defense for Personnel and Readiness
Office of the Under Secretary of Defense (Comptroller)
Manpower and Personnel Division
U.S. European Command
U.S. Northern Command
Defense Finance and Accounting Service
Department of the Air Force
Office of the Under Secretary of the Air Force, Deputy Chief Management Officer
Office of the Deputy Chief of Staff for Logistics, Installations and Mission Support
Office of the Deputy Chief of Staff for Manpower, Personnel and Services, Directorate of Manpower, Organization and Resources
U.S. Air Forces in Europe
Air Combat Command
Air Education and Training Command
First Air Force (Air Forces North)
Third Air Force (Air Forces Europe)
Air Force Real Property Agency
Air Force Services Agency
Air Force Center for Engineering and the Environment
Office of Deputy Under Secretary of the Army
Office of the Assistant Secretary of the Army (Financial Management and Comptroller), Office of the Director, Army Budget
Office of the Assistant Chief of Staff for Installation Management
U.S. Army Installation Management Command, Headquarters
U.S. Army Installation Management Command, Atlantic Region
Office of the Deputy Chief of Staff for Programs, Directorate of Program Analysis and Evaluation
Office of the Deputy Chief of Staff for Personnel, Directorate of Plans
Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs
U.S. Army Europe
U.S. Army North
Office of the Deputy Under Secretary of the Navy, Deputy Chief Management Officer
Office of the Deputy Chief of Naval Operations (Manpower, Personnel, Education and Training)
Office of the Deputy Chief of Naval Operations (Integration of Capabilities and Resources)
Office of the Assistant Secretary of the Navy (Financial Management and Comptroller), Office of Budget
Headquarters, Marine Corps
U.S. Marine Forces Europe
U.S. Naval Forces Europe
U.S. Fleet Forces Command
Naval Air Systems Command
Navy Reserve Force Command
U.S. Pacific Fleet
We conducted this performance audit from September 2010 to March 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. For this review, we selected and assessed headquarters-related efficiency initiatives specific to the military departments as well as organizations affected by DOD-wide efficiency initiatives, discussed in detail below. The efficiency initiatives we reviewed did not include all the headquarters-related efficiency initiatives DOD has announced. We chose to review the efficiency initiatives based on the organizations affected as well as the estimated number of personnel and the amount of cost savings involved. As part of the Secretary of Defense's efficiency initiative, the military departments and Special Operations Command were instructed to find at least $100 billion in savings from fiscal years 2012 to 2016 that could be reinvested in force structure and modernization efforts, starting with the fiscal year 2012 budget. Some of these initiatives included reductions to headquarters personnel and operating costs, as shown below. Under this Navy initiative, U.S. 2nd Fleet was disestablished and its staff was merged into U.S. Fleet Forces Command. Prior to this merger, U.S. 2nd Fleet was responsible for training, certifying, and providing maritime forces to respond to global contingencies, while U.S. Fleet Forces Command served to provide operational and planning support to the combatant commanders and worked with U.S. Pacific Fleet to organize, man, train, maintain, and equip Navy forces. The Navy found that the missions of the two organizations had converged over time, and decided that an integrated staff could better adapt to changing missions than two separate staffs and that the merger could eliminate redundant personnel. As a result of the merger, U.S. Fleet Forces Command now assumes both its previous responsibilities as well as U.S. 2nd Fleet's former missions. The efficiency initiative eliminated one Navy flag officer at the rank of vice admiral, 160 active component positions, and 184 reserve component positions. The consolidation resulted in estimated savings of $10.5 million in fiscal year 2012, with expected cumulative savings of $100.8 million by fiscal year 2016. The consolidation began in May 2011 and was functionally completed on September 30, 2011.
Under this Navy initiative, shore military positions at both U.S. Pacific Fleet and U.S. Fleet Forces Command were eliminated and personnel associated with these positions were redirected to higher-priority missions, including filling personnel shortages of operational ships at sea. Navy officials stated that new capabilities and systems on operational ships, such as ballistic missile defense, have led to increased manpower requirements at sea. Additionally, more effective training has decreased shore manpower needs, freeing up manpower for operational ships at sea. The associated funding for the reduced shore military positions, $88.3 million, has been removed from the budget for fiscal year 2012. From fiscal years 2012 to 2016, the expected cumulative savings is $858.1 million. This is an Air Force initiative to consolidate installation support services at Air Force headquarters and field operating agencies, which are Air Force components that perform specialized activities in support of Air Force-wide missions. To achieve the estimated personnel and cost savings, the Air Force is consolidating environmental quality and civil engineering services, real property programs and services, vehicle and fuel management, operational contracting, security forces, and some family services by shifting positions from major command staffs that provide these services to field operating agencies or Air Force headquarters and eliminating others. Planning for the implementation of this initiative is still underway and implementation will be phased from fiscal years 2012 to 2016. When initially developed in July 2010 as part of the Air Force's preparations for the fiscal year 2012 budget, the initiative was estimated to save $685 million and eliminate 1,371 positions by fiscal year 2016. However, in December 2010, after further analysis was completed, the estimate was revised to eliminate 354 positions by fiscal year 2016 along with a savings of $2.4 million in fiscal year 2012 and $148.1 million by fiscal year 2016. The Air Force may now have to reduce operating costs or personnel from other functional areas to make up the $537 million difference in savings estimated as part of DOD's fiscal year 2012 budget. This is an Air Force initiative to consolidate air and space operations centers in Europe and in the continental U.S. and to inactivate numbered air forces. Numbered air forces provide operational leadership to subordinate units, such as wings, or are designated as component numbered air forces that perform operational and warfighting missions for U.S. combatant commanders. Air and space operations centers provide command and control of Air Force operations and coordinate with other components and military services. The Air Force consolidated the 617th Air and Space Operations Center, which supports U.S. Africa Command, with the 603rd Air and Space Operations Center, which supports U.S. European Command, eliminating 55 positions and one headquarters organization and resulting in a savings of $4.2 million in fiscal year 2012 and a cumulative savings of $37.8 million from fiscal years 2012 to 2016. The consolidation was completed on October 1, 2011. Air Force officials stated that the transition has gone smoothly because personnel in these organizations had practiced being integrated while executing military operations in Libya as part of Operation Odyssey Dawn. The merged organizations will now provide operational and command and control support to both combatant commands.
The Air Force planned to consolidate the 601st and 612th Air and Space Operations Centers, supporting U.S. Northern Command and U.S. Southern Command, respectively; however, this was formally halted on August 30, 2011, by Air Force officials in favor of developing an Air Force-wide solution to provide more effective operational command and control. As of December 2011, Air Force officials could not provide further details regarding this solution because decisions were still pending. The Third Air Force (Air Forces Europe) and the 17th Air Force (Air Forces Africa) are also being consolidated with the headquarters of U.S. Air Forces in Europe, thereby eliminating one headquarters organization and 183 positions for a cumulative savings of $95.1 million from fiscal years 2012 to 2016. This effort is estimated to be completed by May 2012. The 19th Air Force, which supports Air Education and Training Command, will be consolidated with the command's headquarters, thereby eliminating 18 positions and saving $0.6 million in fiscal year 2012 with cumulative savings of $10.8 million by fiscal year 2016. This initiative is to be completed by June 2012. Although Air Education and Training Command has identified 18 positions to be eliminated, this effort was initially designed to eliminate 40 positions. Air Education and Training Command has informed Air Force leadership that some of these personnel were performing safety- and compliance-related inspections and could not be eliminated; therefore, Air Force leadership is considering adjusting the number of positions that may be removed. This is an Army initiative to reduce, eliminate, and re-scope services and programs across the Army's installations, with an estimated cumulative savings of $1.1 billion ($456 million in fiscal year 2015 and $667 million in fiscal year 2016). Services and programs to be reviewed include human resources, information technology, logistics, public works, security, and other services provided on Army installations. On August 29, 2011, the Army established the Installation Management Reform Task Force—which includes representatives of the Army commands and organizations, such as the Army Installation Management Command—to assist in streamlining installation management and reducing overhead costs, among other things. Specifically, the representatives of the task force are responsible for conducting a detailed analysis of the services provided on Army installations. In September 2011, the Army began its holistic review of installation services and infrastructure costs to evaluate opportunities to develop efficiencies, among other things. Army commands, such as the Installation Management Command, were directed to seek ways to reduce shared contracted services or eliminate services and programs perceived to be of little value to reduce costs. They are also looking at the effects of reduced population (both military and civilian) on the demands for installation services. According to officials at the Office of Assistant Chief of Staff for Installation Management, the office leading this efficiency initiative, they are in the early stages of identifying the services and programs to be reduced, eliminated, or re-scoped, and the effort is scheduled to be executed in fiscal year 2015 and fiscal year 2016. As part of DOD's efficiency initiative, the Secretary of Defense directed a series of initiatives designed to reduce duplication, overhead, and excess across the department.
For example, the Secretary directed components of the Office of the Secretary of Defense, the Joint Staff, combatant commands, the defense agencies, and DOD field activities to conduct baseline reviews of how they use personnel and budgetary resources to carry out their missions in order to rebalance resources. This and other departmentwide efforts were projected to yield about $78 billion in savings through fiscal year 2016. The efficiencies for the components discussed below originate from both the baseline reviews and other departmentwide efficiency initiatives. The Office of the Under Secretary of Defense for Policy is a component of the Office of the Secretary of Defense that advises the Secretary of Defense on the formulation of national security and defense policy. To identify efficiencies, the Office of the Under Secretary of Defense for Policy conducted a study that found its ratio of administrative support to senior executives was 3 to 1, which was above the industry standard; it therefore determined that it could make reductions in administrative overhead. The Office of the Under Secretary of Defense for Policy cut 68 technical support contractors and 42 administrative support contractors for an estimated savings of $14.6 million in fiscal year 2012, and expected cumulative savings of $77.7 million from fiscal years 2012 to 2016. Officials with the Office of the Under Secretary of Defense for Policy stated that the component is on target to meet all directed initiatives. The Defense Finance and Accounting Service is a defense agency that provides finance and accounting services for DOD civilians and military members. It is enacting several efficiencies and plans to eliminate 227 contractor positions and six civilian positions. Additionally, it is planning to eliminate paper leave and earning statements that it provides to DOD personnel and reduce manual processing of transactions by increasing electronic commerce to pay for contractor mission support. The associated savings for these initiatives, $41.3 million, has been removed from the budget for fiscal year 2012. From fiscal years 2012 to 2016, the expected cumulative savings is $206.5 million. Although the associated funding of $41.3 million has been removed for fiscal year 2012, officials said that fully eliminating paper leave and earning statements and increasing e-commerce transactions depend on the demands of the agency's customers. The Washington Headquarters Services is a field activity organization that supplies administrative support services across the department, such as information technology, facilities management, and human resources. According to Washington Headquarters Services officials, they identified the efficiencies by focusing on critical services and devolving noncritical and completed missions. The Washington Headquarters Services reduced the number of organizational elements it has from 12 to 8 by merging directorates that performed similar services and functions. It combined its Information Technology Management and Office of the Secretary of Defense Networks directorates to form the Enterprise Information Technology Services directorate. The former directorates of Defense Facilities and Pentagon Renovation were combined to form Facilities Services, while three directorates that performed similar administrative functions were consolidated to form the Enterprise Management directorate.
Through these efforts, Washington Headquarters Services will eliminate a total of 52 civilian positions and generate an estimated savings of $7.2 million for fiscal year 2012 and an expected cumulative savings of $57 million from fiscal years 2012 to 2016. Officials said the associated funding has been removed for fiscal year 2012 and, as of January 2012, 50 of the 52 civilian positions have been removed. U.S. Northern Command is a unified combatant command whose mission is to conduct homeland defense, civil support, and security cooperation. U.S. Northern Command is implementing several efficiencies, to include eliminating lower-priority functions, consolidating U.S. Northern Command’s and North American Aerospace Defense Command’s staff functions, eliminating 13 additional support billets, and reducing reliance on service support contractors. According to officials, these actions have resulted in an estimated savings of $12.6 million for fiscal year 2012 and expected cumulative savings of $87.8 million from fiscal years 2012 to 2016. To achieve the efficiencies, U.S. Northern Command reviewed low- priority tasks and eliminated manpower and other associated costs such as supplies and computer support. Officials said civilian positions have been eliminated and phased contract reductions will be complete by September 2013. U.S. European Command is a unified combatant command whose mission is to conduct military operations, international military engagement, and interagency partnering to enhance U.S. and transatlantic security. U.S. European Command has implemented several efficiencies to achieve savings. The command reorganized its headquarters to promote interagency cooperation by realigning staff and reduced headquarters manpower and expenditures by 10 percent by realigning resources to higher-priority missions. These actions were estimated to eliminate 86 military and civilian positions and save $17 million in fiscal year 2012, with expected cumulative savings of $84.8 million by fiscal year 2016. According to officials, U.S. European Command has already completed the reorganization of its headquarters, and the funding for the eliminated positions has been removed from the fiscal year 2012 budget. Figures 4 through 9 contain the information on DOD headquarters organizations presented in noninteractive format. In addition to the contact named above, Patricia Lentini, Assistant Director; Erin Behrmann; Pat Bohan; Grace Coleman; Richard Geiger; Jeffrey Hubbard; Cynthia Saunders; John Van Schaik; Angela Watson; K. Nicole Willems and Weifei Zheng made key contributions to this report.
The Department of Defense’s (DOD) headquarters and support organizations have grown since 2001, including increases in spending, staff, and numbers of senior executives and the proliferation of management layers. In 2010, the Secretary of Defense directed DOD to undertake a departmentwide initiative to reduce excess overhead costs. In response to a mandate, GAO evaluated the extent to which DOD (1) examined its headquarters resources for efficiencies and (2) has complete and reliable headquarters information available for use in making efficiency decisions. For this review, GAO analyzed documents and interviewed officials regarding DOD’s headquarters resources and information. The Department of Defense (DOD) has taken some steps to examine its headquarters resources for efficiencies, but additional opportunities for cost savings may exist by further consolidating organizations and centralizing functions. For purposes of the Secretary of Defense’s efficiency initiative, DOD components were asked to focus in particular on headquarters and administrative functions, support activities, and other overhead in their portfolios. DOD’s fiscal year 2012 budget request included several efficiencies related to headquarters organizations or personnel. GAO found that these efficiencies generally fell into two categories: (1) consolidating or eliminating organizations based on geographic proximity or span of control and (2) centralizing overlapping functions and services. The DOD efficiencies that GAO reviewed to reduce headquarters resources are expected by DOD to save about $2.9 billion through fiscal year 2016, less than 2 percent of the $178 billion in savings DOD projected departmentwide. GAO’s work indicates that DOD may be able to find additional efficiencies by further examining opportunities to consolidate organizations or centralize functions at headquarters. DOD may not have identified all areas where reductions in headquarters personnel and operating costs could be achieved because the department was working quickly to identify savings in the fiscal year 2012 budget and used a top-down approach that identified several targets of opportunity to reduce costs, including headquarters organizations, but left limited time for a detailed data-driven analysis. In February 2012, DOD proposed $61 billion in additional savings over fiscal years 2013 to 2017, but provided limited information as to what portions of these savings were specific to headquarters. Without systematic efforts to reexamine its headquarters resources on a more comprehensive basis, DOD may miss opportunities to shift resources away from overhead. An underlying challenge facing DOD is that it does not have complete and reliable headquarters information available for use in making efficiency assessments and decisions. According to GAO’s internal control standards, an agency must have relevant, reliable, and timely information in order to run and control its operations. DOD Instruction 5100.73 guides the identification and reporting of headquarters information. However, GAO found that this instruction is outdated and does not identify all headquarters organizations, such as component command headquarters at U.S. Africa Command and certain Marine Corps headquarters. Also, although some of the services and functions performed by contractors could be considered as headquarters activities, the instruction does not address the tracking of contractors that perform these functions. 
DOD has delayed updating the instruction to allow time for components to adjust to the statutory changes enacted by Congress in 2009 that created new headquarters reporting requirements. According to DOD officials, ever-changing statutory reporting requirements have contributed to DOD’s failure to report to Congress about the numbers of headquarters personnel. As the department did not have reliable headquarters data, DOD compiled related information from other sources to inform its 2010 efficiency initiative. Because of the short timelines given to identify efficiencies and limitations on the sharing of information, this information was not validated before decisions were made. As a result, some of the information used to identify headquarters-related efficiencies was inaccurate and some adjustments in resource allocations will have to be made during implementation to achieve planned savings. Looking to the future, until DOD has updated its instruction to ensure that it has complete and reliable headquarters data, the department will not have the information it needs, which could affect its efforts to direct resources to its main priorities during future budget deliberations. GAO recommends that DOD continue to examine opportunities to consolidate organizations and centralize functions and services and revise DOD Instruction 5100.73 to include all headquarters organizations, specify how contractors performing headquarters functions will be identified and included in reporting, clarify how components are to compile information needed to respond to headquarters reporting requirements, and establish time frames for implementing these actions. DOD concurred with GAO’s first recommendation and partially concurred with GAO’s second recommendation.
Similar to the federal government's acknowledgment over the past decade of the need to adopt a more businesslike approach to financial, information technology, and performance-based management, the need for strategic management of human capital is gaining increased recognition. After we placed strategic human capital management on our high-risk list in 2001, the President's Management Agenda (PMA) identified human capital as one of the five key governmentwide management challenges facing the federal government. The agenda specifically sets an expectation for agencies to integrate their human capital strategies with their organizational missions, visions, core values, goals, and objectives. In October 2002, the Office of Management and Budget (OMB) and the Office of Personnel Management (OPM) approved revised standards for success in the human capital area of the PMA, reflecting language that was developed in collaboration with GAO. To assist agencies in responding to the revised PMA standards, OPM released the Human Capital Assessment and Accountability Framework. Our work and that of others has shown that high-performing organizations link their human capital management systems—from the organizational level down to individual employees—with their strategic planning and mission accomplishment. This means the function that has traditionally been called personnel or human resources needs to make a fundamental transformation, from being a strictly support function involved in managing personnel processes and ensuring compliance with rules and regulations to designing and implementing human capital approaches to attain the agency's strategic goals. In addition, we found that effective human capital professionals must have the appropriate preparation not just to provide effective support services, but also to effectively consult with line managers in tailoring human capital strategies to the unique needs of the agency. In March 2002, we released our Model of Strategic Human Capital Management to help agency leaders more effectively lead and manage their people and integrate human capital approaches into their strategic planning and decision making. The model emphasizes that successful strategic human capital management requires the integration of human capital approaches with strategies for accomplishing organizational missions and program goals. Such integration allows the agency to ensure that its core processes efficiently and effectively support mission-related outcomes. The executive branch agencies we reviewed took a range of actions as part of efforts to integrate human capital approaches with strategies for achieving organizational missions. Top agency leaders and human capital leaders were the primary initiators of the various actions taken. In addition, the agency leaders and human capital leaders jointly have employed human capital professionals and agency line managers to share the accountability for successfully integrating human capital approaches into the planning and decision making of the agencies. Top leadership in the agencies expected agency human capital leaders to significantly contribute to strategic planning and decision making, as evidenced by their establishment of human capital roles in positions that are significant in the organizational hierarchy. This acknowledges both the commitment of the agency head to strategically managing the agency's people and the contribution that human capital leaders are expected to make to organizational success.
In addition, agency heads created entities, such as human capital councils, to regularly review their agencies’ human capital strategies and to ensure a data-driven, performance-oriented approach to human capital management. These groups of senior agency officials, including both program leaders and human capital leaders, provide oversight and are accountable for the integration and alignment of the agencies’ human capital approaches. Although the so-called “seat-at-the-table” is significant, human capital leaders are ultimately valued not by place, but by the value they add to the agencies’ strategic human capital approaches in attaining organizational goals. According to a 1999 OPM report on strategic human capital management, there has often been contention between human capital leaders and agency leaders because of human capital’s role as “gatekeeper,” that is, enforcing the law, rules, and regulations. Now, with the responsibility of the human capital function evolving, agency leaders are positioning human capital leaders in roles where they have the opportunity to more directly affect agency decisions and achievement of goals. USCG officials told us that USCG’s Assistant Commandant for Human Resources (HR) is a member of the agency’s senior management team and is a full partner in the development of key USCG management decisions. They stated that because of the Assistant Commandant’s organizational status, USCG’s HR unit has participated earlier in strategic planning and decision making, thus facilitating a smoother transition and execution of ideas. The officials cited USCG’s use of scenario-based planning as an example of early involvement by USCG’s HR unit in agency strategic planning and decision making. Scenario-based planning is a technique used by USCG for managing uncertainty and risk when planning into the future. The agency develops a few plausible future scenarios and then plans how it would best respond to each scenario and what resources it would need to respond. For example, one scenario may describe conditions that imply a greater need to interdict drugs in harbors than another scenario, which may describe a world where the higher priority is to intercept possible terrorist threats from the seas as far offshore as legally possible. Differing competency requirements and operating concepts (e.g., time spent at sea) require different human capital approaches in each of these scenarios. With USCG’s migration to the Department of Homeland Security and its added security responsibilities, the agency’s plans for balancing resources among its many missions will become increasingly important. In the summer of 1998, senior USCG human capital staff members were part of a core group of USCG planners who developed scenarios and constructed the operational and support strategies to succeed in those scenarios. The group eventually created five very different worlds that might exist in the year 2020, along with the "history" of events that led to each of those, based on the combined factors of U.S. economic vitality, the global demand for maritime services, the role of the federal government, and threats to U.S. society. The purpose of the five worlds was to create the boundaries of the possible future, and allow leaders and planners to create a strategy for USCG that would work well in each independent scenario. The core group then analyzed the elements common to all five strategies and crafted a core strategy. 
The human capital strategies that emerged became part of USCG’s official strategy, and simultaneously drove the human resource organization’s business planning and resource investments for the 2001-2005 time frame. USCG attributes a number of its improved processes to the early involvement of its human capital unit with the agency’s decision-making management team. Specific examples cited by USCG include (1) a significant restructuring of military occupations and career paths to reflect emerging requirements for new competencies, (2) a revision of assignment and reassignment practices to make better use of the investments in training and development, and (3) a restructuring of civilian personnel management functions to better support line managers in meeting their operational requirements. USGS’s HR Office has also played a prominent role in agency strategic planning and decision making. In 1997, USGS’s HR Office led a group of senior USGS managers in developing the first strategic HR plan for USGS as the agency strategically planned how it would remain at the forefront of earth and biological science and technology. USGS’s strategic HR plan formed the basis for the four people goals (skills, rewards, flexibility, and leadership) included in the agency’s current strategic plan. HR staff members were active in the development of the people goals and in the creation of the current and next-generation measures by which the agency is assessing progress under these goals. They were also involved in planning and implementing the strategic human capital initiatives by which USGS hopes to achieve its goals. USGS’s strategic human resources plan describes how USGS will align its people and processes with the business strategies it has adopted to achieve its mission and also recognizes that organizational goals are seldom, if ever, realized without the effective use and support of people. One USGS business strategy is to increase the agency’s flexibility to get work done by using all options other than permanent staff members. The strategic human resources plan states that to enhance USGS’s flexibility to acquire skills to meet short-term needs and provide an influx of new ideas, the HR Office will, for example, expand the use of short-term student and faculty appointments from academia and development agreements from the private sector. SSA’s Deputy Commissioner for Human Resources reports directly to the Commissioner and is an equal partner with the agency’s other deputies. SSA’s human capital officials report that their engagement in the strategic planning activities of the agency has provided a much greater opportunity to contribute to effective outcomes. For example, the human capital organization worked in partnership with SSA’s Office of Finance, Assessment and Management regarding the competitive sourcing initiative of the President’s Management Agenda to ensure there was no undue negative impact on the agency’s human capital and workloads. In addition, SSA’s human capital organization has worked closely with SSA’s Office of Systems in the construction and implementation of the Office of Systems’ major reorganization. The human capital organization believes its efforts have ensured the appropriate mix of employees at the proper levels, and will result in an efficient systems organization. Agency leaders established entities, such as human capital councils, accountable for integrating human capital approaches with program strategies to attain successful program results. 
Composed of senior agency officials, including both program leaders and human capital leaders, these groups meet regularly to review the progress of the agency’s integration efforts and to make certain that the human capital strategies are visible, viable, and remain relevant. Additionally, the groups help the agencies monitor whether differences in human capital approaches throughout their agencies are well considered, effectively contribute to outcomes, and are equitable in their implementation. IRS, for example, created an entity to ensure a coordinated approach to agencywide human capital issues, policies, and strategies. The agency’s Human Resource Policy Council (HRPC), which meets monthly for approximately half a day, addresses cross-unit human capital issues that cannot be resolved at lower levels. It is composed of the Chief Human Resource Officer and representatives from each of IRS’s major organizations. HRPC is charged with (1) identifying and addressing crosscutting human capital issues and emerging human capital priorities, (2) ensuring that cross-divisional links are in place and operating effectively, (3) making final decisions on all cross-unit human capital issues, (4) providing strategic human capital advice and recommendations to the Commissioner and his senior staff, and (5) addressing the issue of uniformity versus flexibility across divisions for human capital policy. HRPC decided, for example, to eliminate agencywide restrictions regarding fast-track promotion of IRS managers. GSA has a similar group, its Human Capital Council. The council, created in 2002, meets quarterly and, as shown in table 2, consists of human capital leaders, senior executives and officials for the major service and staff offices, and representatives of regional administrators and deputy regional administrators. The council ensures that, among other objectives, the agency's human capital strategic plan is consistent with GSA's strategic plan. As an advocate for human capital initiatives, council members are to ensure that activities in the agency reflect the human capital strategic plan. Another objective of the group is to assist CPO in setting human capital program priorities by assuring that program goals are key determinants of the human capital approach. To support future program priorities, the council, for example, determined what the GSA leadership competencies would be and established the policy and requirements for the GSA-wide Advanced Leadership Development Program. High-performing organizations treat strategic human capital management as fundamental to effective overall management. Human capital leaders in such organizations develop human capital organizations that can fulfill enlarged roles, such as business partner, human capital expert, leader, and change agent, to meet current and future programmatic needs. For example, agency human capital leaders took actions to enlarge the vision of their organizations from being providers of largely transaction-based services to ones whose visions included integrating human capital approaches in agency plans and strategies to successfully accomplish their goals. To align the human capital resources with the organizations’ new visions, human capital leaders often found it necessary to restructure the organizations. Additionally, improving and expanding upon the efficiency of human capital systems and technology offers the opportunity to reallocate additional resources for strategic purposes. 
Human capital leaders also worked to ensure that the human capital professionals within their agencies were prepared, expected, and empowered to provide a range of consultative and technical services to their internal customers. A human capital strategic vision is crucial in providing a common direction across the organization. Agency human capital leaders we interviewed envisioned their human capital offices becoming more strategically involved with the achievement of agency goals. These leaders communicated their visions to employees and took steps to institute the organizational changes needed to achieve their visions. IRS’s former Chief Human Resource Officer believes that IRS’s human capital professionals must be champions of change and be totally committed to thinking and acting strategically with respect to the agency’s broader mission, its people, and the importance of linking the two together. According to IRS, to make change on the scale needed to enact the vision requires the involvement of management, employees, and the employee unions in virtually every aspect of the transformation. To obtain employee buy-in, IRS’s human capital organization formed work groups to empower its human capital employees to participate in the redesign effort. IRS’s human capital leaders believe the principal contribution of its new human capital organization vision and framework is that it provides a direct focus on developing new and more flexible ways of managing the workforce. For example, IRS has introduced streamlined critical pay authority, developed a category rating process, and instituted a managerial pay banding system. IRS’s officials stated that these actions were undertaken to attract world-class senior leadership and technical talent, simplify and accelerate external and internal hiring, and support organizational delayering in addition to creating a culture of performance and individual accountability. In general, they said the innovations provide top management with a strong and direct connection between human capital strategies and systems and mission results. GSA’s Chief People Officer has a vision for GSA’s CPO to become a partner in GSA’s business success. To do so, she said that CPO must (1) deliver products and services that enable its customers to focus on their core business and (2) develop its workforce to be a valued business partner. GSA’s Chief People Officer said that before the human capital organization can play a bigger role as a business partner it must ensure that its transaction-based tasks are accurately and efficiently processed and that day-to-day problems, concerns, and needs of individual employees that are related to human resources are addressed. To achieve her vision, the Chief People Officer wants to focus more time and resources on CPO becoming a business partner by ensuring that its transaction-based tasks and advisory activities are completed more efficiently. She has placed a high priority on automation and information technology as means to reduce costs and make time available for the business partner role. She has established a Chief Information Officer (CIO) position within GSA’s CPO to support GSA’s human capital functions that are increasingly being driven by technology. The Chief People Officer has shared her vision with agency leadership during presentations before GSA’s administrators. 
In addition, she has expressed her vision to program leaders during Human Capital Council meetings and has included information on her vision in electronic newsletters sent to CPO staff members. The Chief People Officer also issues periodic written updates on human capital issues that contain information on her vision and related goals. Restructuring the human capital organization is an important step that is often necessary for the transformation of the human capital function. Ideally, this restructuring should help align the organization with its revised vision and should position the human capital function to move from a reactive, process-oriented, and compliance-focused posture to the platform needed to become a proactive, results-oriented, consulting-oriented strategic partner. When we reported on agencies' initial efforts to restructure their personnel operations in 1998, we noted that the four departments reviewed generally approached the restructuring of their personnel offices with the intent of achieving staff reductions. Some human capital leaders are now focusing less on finding additional internal efficiencies and more on replacing stove-piped structures that include separate units for functions such as staffing and classification, with more flexible structures that support new human capital roles. The right organizational structure can help human capital organizations strategically align with agency objectives and improve the delivery of human capital products and services. Although some federal agencies are restructuring their human capital organizations along similar lines, the right organizational structure depends on the unique characteristics of the agency. In a 1999 report, OPM maintained that because organizations are starting from different positions, they would need to structure their human capital functions based on mission, not on a "one size fits all" solution. In a similar vein, in July 2000 a coalition of individuals representing a wide cross section of organizations with an interest in the federal human capital community anticipated that a variety of human capital organizational structures would form across the federal government. The coalition predicted that factors such as customer needs and agency budgets would shape and drive the structure of a particular agency's human capital organization. The report noted, however, that the group did expect to see movement away from traditional structures toward more flexible arrangements. In a 2001 National Academy of Public Administration (NAPA) report, one of the key findings was that human capital organizations were restructuring, and the report noted that the emerging organizational model appears to consist of three elements: a center of expertise, a shared service center, and a strategic consultant component. The center of expertise provides expert technical advice and assistance to managers and employees, while the shared service center processes traditional personnel transactions. The strategic consultant component serves as a strategic partner, change agent, and consultant to agency managers. The elements in the new structure attempt to balance the need for consolidation while still allowing human capital professionals to have a direct connection with their customers. FEMA's Human Resources Division (HRD) has recently restructured in a manner similar to this emerging organizational model.
As shown in figure 1, FEMA’s new HRD structure contains three branches: an advisory services branch, a reconfigured operations branch, and a human capital investment branch. The advisory services branch provides on-site management advisory service and support to FEMA directorates, regional offices, divisions, and branches. The operations branch handles all staffing and selection, classification systems, employee self-service operations, and records processing. The human capital investment branch is designed to take the lead in FEMA’s strategic human capital planning and policy oversight. FEMA officials told us that the agency restructured its human capital organization to meet the emerging requirements of its strategic planning initiatives and to address its inability to respond effectively to growing operational demands. Since the restructuring, HRD has initiated over 30 projects targeting human capital improvements that are aligned to the agency’s strategic plan and the President’s Management Agenda. For example, the improvement initiatives included determining uniform compensation bands based on staff competencies and a series of initiatives focused on improving human capital services throughout the agency. Additionally, in connection with its migration to the Department of Homeland Security on March 1, 2003, FEMA identified a set of 17 “matrix/virtual” teams of HRD and agency leaders to address key transitional issues. IRS also restructured its human capital management function with the same three elements found in the emerging organizational model. The IRS structure includes (1) an embedded human resources organization in each business/functional unit, (2) an agencywide shared services organization, and (3) a national headquarters strategic human resources organization called Strategic Human Resources. Under this arrangement, each operating division has its own human capital office “embedded” within its division. These embedded human capital offices report to the operating division leader and are tasked with formulating, implementing, and customizing human capital policies, procedures, and strategies to fit the business unit’s unique needs. IRS’s agencywide shared services performs an operational mission involving the delivery of common products and services to organizations, managers, and employees across the agency. Strategic Human Resources develops strategic human capital management policies, programs, and strategies in collaboration with a council that includes human capital directors from the major business units. Improved efficiencies and economies of transaction-based services provide the opportunity for agencies to reallocate resources and enable their human capital organizations to meet expanded roles as business partners and change agents. However, as we found in our 1998 report on agencies’ efforts to restructure personnel operations, agencies need to carefully plan and manage the implementation of new technology to fully achieve the desired benefits. USGS has developed OARS, which allows its human capital staff to enter job vacancies into a centralized database and develop rating and ranking criteria by selecting and weighting questions from an extensive question library organized by job series. Applicants register and apply for vacancies on-line. The system immediately rates, ranks, and scores applicants based on their answers to weighted questions, taking into account all of the regulations that govern the federal hiring process. 
The list of the best-qualified candidates is provided to the hiring manager within several days. USGS officials said that OARS has given applicants a quick and easy way to apply for jobs, increased the number of applicants per vacancy by between 40 and 500 percent, dramatically reduced the time it takes to fill vacancies, and allowed human capital professionals to refocus their efforts from processing to consulting (a minimal sketch of this kind of weighted scoring appears at the end of this discussion). USGS hopes to be able to divert an increasing number of its staff members to other strategic efforts as it continues to gain experience and efficiencies using OARS. As mentioned above, GSA has established a CIO for the personnel function located within its Chief People Office. The position was created several years ago when GSA began developing an information technology management system for federal personnel operations. Because GSA's comprehensive human resources integrated system (CHRIS) required major systems modification and was intended to serve other federal agencies, a senior-level technology position was needed to facilitate the effort. According to CPO's CIO, his role is to evaluate, develop, and install systems that support GSA's human capital technology needs. He believes that technology is vital to achieving GSA's Chief People Officer's goals. He explained that improved technology and automation efforts mean less staff time is needed for traditional human capital functions, thus allowing the CPO staff members to play bigger roles as business partners. However, because most of GSA's system is fairly new or still under development, CPO's CIO has not yet reduced resources or identified staff savings. In an earlier report, we noted instances where agencies eliminated personnel staff before new technology for automating personnel transactions was in place. This resulted in delays in implementing new personnel and payroll systems. According to CPO's CIO, resource reallocation will be accomplished as efficiencies are demonstrated. CHRIS is expected to ultimately provide self-service to employees and managers, performance management capability, integrated training solutions, and succession planning. As technology becomes more important to the entire human capital process and productivity drives future GSA staffing, the Chief People Officer believes that by having the CIO in-house, she has needed input into the technology decision making and the capital allocation process for the agency. As the President's human capital management advisor, OPM has recently been given the responsibility of leading five e-Government initiatives that are designed to use technology to improve the strategic management of the federal workforce. OPM is the managing partner for the Recruitment One-Stop, e-Clearance, Enterprise Human Resources Integration, e-Training, and e-Payroll initiatives. The e-Payroll initiative, for example, is designed to simplify and integrate payroll systems across the federal government. OPM and OMB announced on January 15, 2003, the selection of two payroll partnerships to consolidate federal payroll systems and save the federal government an estimated $1.2 billion over the next decade. The Enterprise Human Resources Integration and e-Clearance initiatives are focused on electronically integrating personnel records across the government and reducing the delays involved in security clearance processing. The full implementation of these two initiatives is scheduled for the end of fiscal year 2006.
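USGS's OARS, as described above, is at its core a weighted scoring and ranking routine: human capital staff attach weights to questions drawn from a question library, and applicants are scored and ranked by their answers. The following is a minimal sketch of that idea only; the question names, weights, applicant data, and cutoff are hypothetical and are not drawn from USGS's actual system, which also applies federal hiring rules (such as veterans' preference) that are omitted here.

```python
# Hypothetical question weights chosen by a human capital specialist
# for one vacancy (names and values are illustrative only).
weights = {"years_experience": 3.0, "gis_certification": 2.0, "field_work": 1.5}

# Hypothetical applicant answers to the vacancy's rating questions.
applicants = [
    {"name": "Applicant A", "years_experience": 4, "gis_certification": 1, "field_work": 0},
    {"name": "Applicant B", "years_experience": 2, "gis_certification": 1, "field_work": 1},
    {"name": "Applicant C", "years_experience": 5, "gis_certification": 0, "field_work": 1},
]

def score(applicant):
    """Sum of weighted answers; higher scores rank higher."""
    return sum(weight * applicant.get(question, 0)
               for question, weight in weights.items())

# Rate, rank, and keep a short list of the best-qualified applicants.
ranked = sorted(applicants, key=score, reverse=True)
best_qualified = ranked[:2]  # cutoff chosen arbitrarily for illustration

for applicant in ranked:
    print(f"{applicant['name']}: {score(applicant):.1f}")
```

In practice, much of the value of a system like OARS lies in encoding the hiring regulations around such a scoring step, which this sketch deliberately leaves out.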
OPM envisions that the use of technology will help streamline and improve procedures for moving federal employees through the employment life cycle by removing redundancies, reducing response times, eliminating paperwork, and improving coordination among federal agencies. The actions that human capital leaders, and federal agencies in general, are taking to improve their integration of human capital approaches with their missions are reflected, in part, by a definite shift in the roles of human capital professionals throughout the federal government. The occupation is in transition from valuing narrowly focused specialists to requiring generalists, who have all the skills necessary to play an active role in helping to determine the overall strategic direction of the organization. As agencies further integrate strategic human capital approaches into their strategic planning and decision making, investment in the development of new competencies for human capital professionals is receiving more attention. OPM published a study in 1999 establishing a statistical profile of the human capital profession within the federal government. The report described the federal human capital community as a cadre of experts separated into seven distinct occupational series. One series represented the human capital generalist, who typically has a breadth of knowledge about personnel issues. The other six categories consisted of specialists, such as classifiers and staffing specialists, who possess in-depth knowledge in specific human capital areas. The report noted that from 1969 through 1998 there had been a small but noticeable shift in the human capital profession away from specialist positions toward generalist positions. In 1998, generalists made up a slight majority of human capital professionals at 53 percent, while specialist positions had declined to 47 percent. Consistent with the changing roles and expectations for human capital professionals, this noticeable shift toward human capital generalists has accelerated since 1998. As of June 2002, generalists made up 73 percent of human capital professionals, while specialists had declined to 27 percent. Figure 2 shows the dramatic change in the percentage of human capital generalists since 1996. A combination of factors appears to have contributed to this shift. According to OPM, it was necessitated, to a large degree, by the significant downsizing in federal office staffing levels during the 1990s, which precluded continuing a specialized approach. In addition, OPM concluded that human capital management as an occupation had been undergoing a significant redefinition. For example, automation began to greatly affect how human capital products and services were delivered. Many agencies began using the Internet or their own intranets to educate managers and employees about human capital programs and options. OPM also noted in a 1999 report that many agencies were beginning to outsource some of their human capital services. With more efficient ways to deliver products and services, human capital professionals could focus on their emerging roles as advisors and consultants, which require more generalist practitioners able to work across multiple human capital functions.
The advantage to the agencies of this shift was that existing staff members could be deployed more flexibly and new staff members could be recruited for broader human capital competencies that had less to do with specialized procedures than with general human capital knowledge, concepts, and principles. In fact, OPM concluded in 2000, based on a body of research and evidence, that the human capital occupation was truly becoming generalized in nature both inside and outside the federal government. Although OPM did not find that human capital roles had shifted in every agency, it determined that agencies needed their human capital staff members to leave behind their roles of technical specialists focused on regulatory compliance and to take on more consultative roles. This entailed working with managers, employees, and their representatives to ensure that human capital programs and practices were properly aligned to help the organization meet its strategic objectives, while adhering to merit system principles and other legal obligations. To reflect these changes in the federal human capital community, in December 2000, OPM issued a new consolidated classification standard for the Administrative Work in the Human Resources Management Group. OPM expects the agencies to apply the new job family position classification standard within a reasonable amount of time of its release, as determined by the agency. According to an OPM official, agencies are making progress in applying the new standard. The pressures on human capital professionals to assume new roles present a significant learning and development challenge for human capital staff members. For human capital professionals to begin acting in their new capacities, human capital leaders must ensure that they develop the competencies and gain the experience to effectively take on the expected roles. Consistent with the changes reflected by OPM's new classification standards, several of the agencies we reviewed have developed human capital competency models designed to develop staff members who can contribute at the strategic and business partner levels as well as design and manage delivery systems that achieve value-added services at lower costs. GSA recognized the vision of its Chief People Officer and the changing role of its human resources organization and assembled a team of experienced human resources staff members to develop the new core competencies needed by the CPO staff. This group developed GSA's new human capital core competencies needed to support its CPO business model and identified five roles that GSA believes its human capital community must play successfully to achieve the Chief People Officer's vision and to meet the expectations of GSA's customers. The five roles are consultant, leader, technologist, transactional expert, and expert. Specific competencies support each role. In addition, GSA uses its competency model as a recruiting tool for human resources professionals. The model lists desirable attributes for job candidates as well as a few suggested interview questions for determining whether applicants possess these attributes. Table 3 lists GSA's HR roles and the primary competencies GSA states are essential for success in each of the roles. USGS's HR Competency Model also identifies and describes a set of human capital competencies that are essential to effective performance in the roles and responsibilities of the agency's human capital office as outlined in its business model.
USGS’s competency model identifies core or universal competencies that USGS human capital staff members need and specific competencies that human capital managers, strategic consultants, operating specialists and generalists, and assistants need. For each competency, the model describes how that competency is applied. According to USGS, the competency model is currently being used as a tool for self-directed human capital development, and portions of the model have been incorporated in the new automated skills assessment system. In the future, USGS plans to use the model as a basis for recruiting and interviewing candidates, making decisions about developmental assignments, and developing the competencies of every human capital staff member to think and relate strategically to the science mission of USGS. Table 4 lists USGS’s competency categories. As the role of the human capital organization evolves to include serving the individual employee and helping to achieve the organization’s strategic objectives, the accountability for human capital management is increasingly being shared by top management, line managers, and human capital professionals. Successful organizations, according to our Model of Strategic Human Capital Management, include human capital professionals acting together with agency leaders and line managers in developing strategic and program plans to accomplish agency goals. Through this joint action, agency and human capital leaders and their staffs share accountability for successfully integrating strategic human capital approaches into the planning and decision making of the agency. Agency and human capital leaders, human capital professionals, and line managers share responsibility for achieving agency programmatic and human capital goals, and they ultimately share accountability for effective, legally compliant human capital management. According to OPM, agencies have delegated more key human capital authorities to line managers. In a recent report, we highlighted the importance of delegating authority and holding line managers accountable for the effective use of human capital flexibilities, important tools that assist agencies in managing their workforces. Agencies are also following a more collaborative approach between managers and human capital professionals for those human capital authorities retained by the human capital staff members. Additionally, more agency and human capital leaders are linking human capital policies and practices to organizational outcomes and expecting more collaboration between line managers and human capital professionals. IRS, for example, has dispersed human capital professionals throughout the agency’s divisions to help apply strategic thinking to each operating division’s unique interest. Human capital staff members are responsible for advising operating division managers on how to best apply human capital strategy to improve results. The human capital professional may have responsibility, for example, for assisting managers in anticipating changes in the labor market and recommending strategies for sustaining a well-qualified and productive workforce. The operating division managers are responsible for rating the performance of the human capital professionals working with them. 
According to IRS officials, some of the benefits of the shared accountability of human capital professionals working with operating division managers have included customized support for recruitment plans and hiring products and a facilitated merit promotion process. FEMA line managers and human capital staff members have developed a system where the human capital staff works with line managers to quickly identify available employees for deployment as soon as disasters are declared. FEMA employs approximately 2,600 full-time employees and as many as 4,000 temporary and reserve employees who are deployed during federal disasters. In response to this need, the human capital office has implemented the Automated Disaster Deployment System. This automated system allows the staff to track employee credentialing (including knowledge and experience levels and performance ratings), availability, past and present assignment locations, dates of employment, and other vital employee data. According to FEMA officials, the system enables human capital staff and line managers to share accountability for identifying employee training and promotion needs, matching employee expertise with specific disaster site victim needs, and creating a selection routine that rotates available employees, thereby avoiding employee burnout (an illustrative sketch of such a rotation appears at the end of this discussion). The process has reduced the time necessary to complete the staffing review and selection from days to hours. Additionally, centralized deployment provides a dedicated staff that acts upon all deployment requests within 3 hours. Line managers have always played a key role in human capital management. They interact with, teach, evaluate, reward, develop, and promote employees from the day they join an organization. According to OPM, many agencies are now pursuing the strategy of delegating key human capital authorities to managers, thereby making the delegated authorities within the agency shared responsibilities of the manager and the human capital staff. SSA officials, for example, described how responsibility for recruiting new employees is now shared between human capital professionals and line managers. SSA involves agency line managers in the preliminary recruiting determinations and has human capital professionals take over in the final, technical stages. Some agencies have developed tools and services to help line managers assume more human capital authority. Within our selected agencies, SSA, for example, has a link on its Office of Personnel Web site that contains "Information for Managers." The Web site presents a wide range of information, from addressing poor performance to life resources counseling. GSA's CPO has issued a supervisory desk guide to provide supervisors with basic information on human capital topics. The desk reference guide provides an overview of personnel and administrative practices and procedures that supervisors should know. According to the guide, it is not meant to make supervisors into personnel experts or provide the answers to all personnel-related or administrative questions. The guide is meant to give supervisors basic information on topics such as internal and external recruitment, pay flexibilities, and workers' compensation that will enable them to handle most situations and to provide references and contacts for more information.
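The rotation element of FEMA's Automated Disaster Deployment System described above lends itself to a simple illustration: from a roster of employees with known credentials and availability, select those qualified for the disaster at hand, favoring people who have gone longest without a deployment. The sketch below is only a hypothetical rendering of that idea; the field names, data, and selection rule are assumptions, not FEMA's actual system.

```python
from datetime import date

# Hypothetical roster records in the spirit of FEMA's Automated Disaster
# Deployment System: credentials, availability, and last deployment date.
roster = [
    {"name": "Reservist A", "skills": {"logistics"}, "available": True,
     "last_deployed": date(2002, 5, 1)},
    {"name": "Reservist B", "skills": {"logistics", "public_affairs"},
     "available": True, "last_deployed": date(2002, 11, 15)},
    {"name": "Reservist C", "skills": {"logistics"}, "available": False,
     "last_deployed": date(2001, 8, 20)},
]

def select_for_deployment(roster, needed_skill, slots):
    """Rotate assignments: choose available, qualified employees who have
    gone longest without a deployment, so the same people are not sent
    to every disaster."""
    eligible = [e for e in roster
                if e["available"] and needed_skill in e["skills"]]
    eligible.sort(key=lambda e: e["last_deployed"])  # longest-rested first
    return eligible[:slots]

for employee in select_for_deployment(roster, "logistics", slots=1):
    print(employee["name"])  # Reservist A: qualified, available, deployed least recently
```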
As agencies integrate their human capital strategies with their organizational missions, visions, core values, goals, and objectives, they are increasingly recognizing how human capital activities contribute to achieving missions and goals. Congress has recognized this through recent legislation creating a chief human capital officer position in major agencies. Effective human capital integration efforts require the cooperation of management and employees throughout the organization. All members of an organization must understand the rationale for making organizational and cultural changes because everyone has a stake in helping to implement the initiatives as part of the agency's efforts to meet current and future challenges. In this report, we have identified actions agencies have taken to improve their integration of strategic human capital approaches with their strategies for accomplishing organizational missions and goals. Agency leaders included human capital leaders in key agency strategic planning and decision making. Human capital leaders transformed the agencies' human capital organizations to better enable them to add value to the strategic activities of the agencies. Working together, agency leaders and human capital leaders have engaged human capital professionals and agency line managers in sharing accountability for successfully integrating strategic human capital considerations into agency planning and decision making. As agencies take action to enhance their ability to meet organizational goals by linking their human capital activities with their strategic planning and decision making, agencies can consider the initiatives we identified. However, each federal agency will have to consider the applicability of specific actions to be taken within the context of its own mission, needs, and culture. We provided a draft of this report to the Director of OPM and to cognizant officials from the individual agencies we visited. OPM and five of the six agencies provided comments on the draft report. All generally agreed with the information presented. Two of the agencies and OPM provided written technical comments to clarify specific points regarding the information presented. Where appropriate, we have made changes to reflect those technical comments. In other cases, agencies provided additional examples of actions they had taken to integrate human capital approaches to attain mission results. OPM noted that the results of the agency initiatives have not been evaluated, which is an important next step for agencies to take but was not within the scope of this report. USCG noted that it had no comments on the report. We will send copies of this report to appropriate congressional committees, the federal agencies and offices discussed in this report, and the Directors of OPM and OMB. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me or William Doherty at (202) 512-6806 or at [email protected] or [email protected]. The major contributors to this report were Clifton G. Douglas, Jr. and Judith Kordahl. Mark Braza, Matthew Tropiano, and Laura Turman also made key contributions. Our objective in this self-initiated review was to identify examples of key actions that agencies have taken to integrate their human capital approaches with agency strategies for achieving mission objectives.
To address our objective, we identified and focused on six federal agencies that were integrating their human capital approaches. We analyzed agency documents, such as planning and organizational restructuring documents, and previous studies on strategic human capital management. In addition, we conducted semistructured interviews with agency officials, human resources directors, and line managers from our selected agencies that were involved in designing or implementing their agencies' human capital integration actions. We elicited their experiences and conclusions about the agency actions they believed were most important to the successful integration of their human capital functions. After reviewing and analyzing their responses, we developed a framework to classify and report on the types of actions identified. We did not attempt to independently verify the performance results that agencies attributed to their actions. To select the agencies we reviewed, we first held discussions with human capital experts from American University, the Center for Policy Implementation, George Washington University, and the National Academy of Public Administration. We asked these experts to identify federal agencies that they believed had taken actions to improve the integration of their strategic human capital management functions. We also reviewed our High-Risk Series, the Federal Managers' Survey, and other documents for examples of federal agency integration efforts not identified by the human capital experts. In addition, we considered the size, mission, and type of workforce of the pool of identified agencies to get a variation of federal agency experiences. We selected six agencies—the Federal Emergency Management Agency, the General Services Administration, the Internal Revenue Service, the Social Security Administration, the U.S. Coast Guard, and the U.S. Geological Survey—to identify examples of human capital integration actions. Our selection process was not designed to provide examples that could be considered representative of all the actions at the agencies reviewed or of the federal government in general. By profiling an agency for a particular action, we do not mean to imply complete success for the action or lack of success for others. We conducted our work from February 2002 through January 2003 in accordance with generally accepted government auditing standards.
Successful strategic human capital management requires the integration of human capital approaches with strategies for accomplishing organizational missions and program goals. Such integration allows the agency to ensure that its core processes efficiently and effectively support mission-related outcomes. Based on the recommendations of various human capital experts, GAO identified six executive branch agencies that had taken key actions to integrate their human capital approaches with their strategic planning and decision making. The agencies were the Federal Emergency Management Agency, the General Services Administration, the Internal Revenue Service, the Social Security Administration, the U.S. Coast Guard, and the U.S. Geological Survey. These key actions may prove helpful to other agencies as they seek to ensure that their human capital approaches are aligned with their program goals. The executive branch agencies GAO reviewed have taken key actions to integrate their human capital approaches with their strategies for accomplishing organizational missions and to shift the focus of their human capital office from primarily compliance activities to consulting activities. Agency leaders included human capital leaders in key agency strategic planning and decision making and, as a result, the agencies engaged the human capital organization as a strategic partner in achieving desired outcomes relating to the agency's mission. Human capital leaders took actions to transform the agencies' human capital organizations by establishing clear human capital strategic visions, restructuring their organizations, and improving the use of technology to free organizational resources. Human capital leaders also promoted a transition to a larger strategic role for human capital professionals with their focus being more on consulting rather than compliance activities. The human capital profession is in transition from valuing narrowly focused specialists to requiring generalists, who have all the skills necessary to play an active role in helping to determine the overall strategic direction of the organization. Jointly, agency leaders and human capital leaders are having human capital professionals and agency line managers share the accountability for successfully integrating strategic human capital considerations into agency strategic planning and decision making.
Because large numbers of Americans lack knowledge about basic personal economics and financial planning, U.S. policymakers and others have been focusing on financial literacy, i.e., the ability to make informed judgments and to take effective actions regarding the current and future use and management of money. While informed consumers can choose appropriate financial investments, products, and services, those who exercise poor money management and financial decision making can lower their family's standard of living and interfere with crucial long-term goals. One vehicle for promoting the financial literacy of Americans is the congressionally created Financial Literacy and Education Commission. Created in 2003, the Commission is charged with (1) developing a national strategy to promote financial literacy and education for all Americans; (2) coordinating financial education efforts among federal agencies and among the federal government, state and local governments, non-profit organizations, and private enterprises; and (3) identifying areas of overlap and duplication among federal financial literacy activities. To minimize financial burdens on servicemembers, DOD has requested and Congress has increased cash compensation for active duty military personnel over the last 5 years. For example, the average increases in military basic pay have exceeded the average increases in private-sector wages for each of the past 5 years. Also, DOD has a plan to eliminate the out-of-pocket expenses that servicemembers pay when living in private-sector housing, reducing them from 19 percent of housing costs in fiscal year 2000 to zero in fiscal year 2005. Furthermore, in April 2003, Congress increased the family separation allowance from $100 to $250 per month and hostile fire/imminent danger pay from $150 to $225 per month for eligible deployed servicemembers. The family separation allowance is designed to provide compensation for servicemembers with dependents for the added expenses incurred because of involuntary separations such as deployments in support of contingency operations like Operation Iraqi Freedom. Such expenses include extra childcare costs, automobile maintenance, and home repairs that the deployed servicemember would normally handle while home. Hostile fire/imminent danger pay provides special pay for "duty subject to hostile fire or imminent danger" and is designed to compensate servicemembers for physical danger. Iraq, Afghanistan, Kuwait, Saudi Arabia, and many other nearby countries have been declared imminent danger zones. In addition to these special pays, some or all income that active duty servicemembers earn in a combat zone is tax free (a rough arithmetic sketch of how these deployment-related pays add up follows this discussion). Since at least the 1980s, the military services have offered PFM programs to help servicemembers address their financial conditions. Among other things, the PFM programs provide financial literacy training to servicemembers, particularly to junior enlisted personnel during their first months in the military. The group-provided financial literacy training is supplemented with other types of financial management assistance, often on a one-on-one basis. For example, servicemembers might obtain one-on-one counseling from staff in their unit or legal assistance attorneys at the installation.
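As a rough illustration of how the deployment-related pays and allowances described above combine for a deployed servicemember, the sketch below totals a hypothetical month of family separation allowance, hostile fire/imminent danger pay, and the cash effect of the combat zone tax exclusion. Only the $250 and $225 monthly amounts come from the discussion above; the basic pay figure and tax rate are assumptions, since actual amounts vary by grade, dependents, and individual tax situation.

```python
# Rough, hypothetical illustration of monthly deployment-related pays.
# Only the $250 and $225 amounts come from the text; basic pay and the
# effective tax rate are assumed values for illustration.

monthly_basic_pay = 1700.00        # assumed junior enlisted basic pay
family_separation_allowance = 250  # per month, members with dependents
hostile_fire_pay = 225             # per month in designated danger zones
assumed_tax_rate = 0.15            # hypothetical effective rate on basic pay

extra_allowances = family_separation_allowance + hostile_fire_pay
# Combat zone tax exclusion: basic pay earned in a combat zone can be
# tax free, so tax that would otherwise be withheld stays with the member.
tax_savings = monthly_basic_pay * assumed_tax_rate

print(f"Added allowances per month: ${extra_allowances:,.2f}")
print(f"Approximate tax savings:    ${tax_savings:,.2f}")
print(f"Total monthly difference:   ${extra_allowances + tax_savings:,.2f}")
```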
In May 2003, the Office of the Under Secretary of Defense for Personnel and Readiness, DOD's policy office for the PFM programs, established its Financial Readiness Campaign, with objectives that include increasing personal readiness by, among other things, (1) increasing financial awareness and abilities and (2) increasing savings and reducing dependence on credit. The Campaign attempts to accomplish these objectives largely by providing on-installation PFM program providers with access to national-level programs, products, and support through links from DOD's Web site (www.dodpfm.org) to other Web sites, tools, and contacts. Figure 1 illustrates some of the major types of financial management training and assistance available to servicemembers (see app. III for additional details). For instance, most active duty military installations have an on-site manager who implements the service's PFM programs. Among other things, PFM program managers and others teach classes and offer counseling on financial issues, ranging from basic budgeting and checkbook management to purchasing a car. In addition, the PFM program managers might work closely with the services' relief/aid societies. The relief/aid societies offer grants or no-interest loans for emergency situations. Figure 1 also shows that servicemembers may choose to use non-DOD resources if, for example, they do not want the command to be aware of their financial conditions or they need products or support not offered through DOD, the services, or the installation. DOD-wide survey data suggest that the financial conditions of deployed and non-deployed personnel are similar, but problems were found with the administration of a special pay to deployed personnel, as well as with the ability of deployed servicemembers to communicate with creditors. Servicemembers who were deployed for at least 30 days reported similar levels of financial health or problems as those who had not deployed when they responded to a 2003 DOD-wide survey. However, some deployed servicemembers are not obtaining their family separation allowance on a monthly basis while they are deployed and separated from their families. In addition, problems communicating with creditors—caused by limited Internet access, few telephones and high fees, and delays in receiving ground mail—can affect deployed servicemembers' ability to resolve financial issues. Data from DOD suggest that the financial conditions for deployed and non-deployed servicemembers and their families are similar. Figure 2 shows estimates of servicemembers' financial conditions based on their responses to a 2003 DOD-wide survey. For each of the five response options, the findings for servicemembers who were on a deployment for at least 30 days were very similar to those of servicemembers who had not deployed during that time. An additional analysis of the responses for only junior enlisted personnel showed similar responses for the two groups. For example, 3 percent of the deployed group and 2 percent of the non-deployed group indicated that they were in "over their heads" financially; and 13 percent of the deployed group and 15 percent of the non-deployed group responded that they found it "tough to make ends meet but keeping your head above water" financially. These responses are consistent with the findings that we obtained in a survey of all PFM program managers and during our 13 site visits.
In the survey of PFM program managers, about 21 percent indicated that they believed servicemembers are better off financially after a deployment; about 54 percent indicated that the servicemembers are about the same financially after a deployment; and about 25 percent believed the servicemembers are worse off financially after a deployment. Also, 90 percent of the 232 recently deployed servicemembers surveyed in our focus groups said that their financial situations either improved or remained about the same after a deployment. The special pays and allowances that some servicemembers receive when deployed, particularly to dangerous locations, may be one reason for the similar findings for the deployed and non-deployed groups. The hypothetical situations shown in table 1 demonstrate that deployment-related special pays and allowances can increase servicemembers' total cash compensation by hundreds of dollars per month. Moreover, as we noted previously in the Background section of this report, some or all income that servicemembers earn while serving in a combat zone is tax free. The 2003 DOD survey also asked servicemembers whether they had experienced various types of negative financial events. The differences in percentages were small between the deployed and non-deployed groups. As figure 3 shows, the largest of the three differences was 4 percentage points and pertained to falling behind in paying bills. Based on DOD data for January 2005, almost 6,000 of 71,000 deployed servicemembers who have dependents did not obtain their family separation allowance in a timely manner. The family separation allowance of $250 per month is designed to compensate servicemembers for extra expenses that result when they are involuntarily separated from their families. Servicemembers in our focus groups told us that the family separation allowance helps their families with added costs encountered during their absence such as childcare costs, automobile maintenance, and home repairs. Delays in obtaining family separation allowances could cause undue hardship for some families faced with such extra expenses. Table 2 shows the amount of family separation allowance received in January 2005 by servicemembers who were deployed and receiving hostile fire pay. No Marines received more than the prescribed $250 monthly allowance for January, but approximately 10 percent of the Army and Navy servicemembers and nearly 5 percent of the Air Force personnel who were entitled to the $250 monthly allowance received more than that prescribed amount. This indicates that servicemembers in three of the services had not received the $250 allowance on a monthly basis and were given catch-up, lump-sum payments. In total, almost 6,000 servicemembers received more than the prescribed $250 monthly allowance, with 11 servicemembers (1.5 percent) receiving a $3,000 catch-up, lump-sum payment—the equivalent of 12 months of family separation pay. We have previously reported similar findings for the administration of family separation allowance to Army Reserve soldiers and recommended that the Secretary of the Army, in conjunction with the DOD Comptroller, clarify and simplify procedures and forms for implementing the family separation allowance entitlement policy. The services have different procedures that servicemembers must complete to obtain the family separation allowance, and some of these procedures are confusing and are not always followed.
For example, an Army regulation states that soldiers must file a DD Form 1561 (Statement to Substantiate Payment of Family Separation Allowance) to substantiate eligibility to receive the allowance, along with a copy of the travel voucher to indicate the period of entitlement—which implies that the allowance is received after deployment, because the substantiating travel voucher is generally provided only upon completion of travel. The Army's pay manual, however, states that only a DD Form 1561 is required to receive family separation allowance. Officials at the Defense Finance and Accounting Service and the Army Finance Office stated that, although they considered themselves to be following this regulation, they were requiring the DD Form 1561 prior to departure so soldiers could receive family separation allowance during deployment, which is contrary to the Army regulation. In contrast, Defense Finance and Accounting Service procedures for Air Force servicemembers state that servicemembers may substantiate eligibility to receive family separation allowance prior to departure, using the travel order and the DD Form 1561. By using the travel order, Air Force servicemembers can receive family separation allowance during deployment. However, elsewhere the Defense Finance and Accounting Service procedures note that most Air Force members are paid family separation allowance upon returning from deployment. In April 2003, Air Force officials attempted to clear up any confusion over how Air Force personnel should initiate payments of family separation allowance by sending a message to a Defense Finance and Accounting Service official stating that family separation allowance paperwork should be filed before servicemembers depart for deployment. Despite this subsequent change, Air Force servicemembers in our June 2004 focus group noted that they had not received the family separation allowance during their deployments. An August 2004 message from the Defense Finance and Accounting Service reminded Air Force finance officials of this policy change. DOD officials suggested many factors other than policy-implementation differences to explain why some eligible servicemembers are not receiving their family separation allowance on a monthly basis. Officials at the Defense Finance and Accounting Service and at service finance offices suggested that servicemembers might not obtain the allowance monthly because they are not aware of the benefit, they do not file the required eligibility form, they file incorrect documentation, or errors or delays occur when the unit enters the information into the pay system. Others noted that servicemembers may elect to receive the allowance as a one-time lump-sum payment. Servicemembers may experience financial difficulties as a result of communication constraints while deployed. In our March 2004 testimony, we documented some of the problems associated with mail delivery to deployed troops. With regard to deployed servicemembers' financial management, our focus group participants, surveyed PFM program managers, and interviewed installation officials noted that delays in receiving correspondence from creditors have resulted in late payments and possibly longer-term problems for servicemembers. The longer-term problems might include negative information about the late payments being entered in one's financial credit report, which could make it more difficult or expensive for servicemembers to obtain credit in the future.
Similarly, limited access to telephones or the Internet can have negative financial effects such as (1) delaying or preventing contact with a creditor when a financial issue arises, (2) making it impossible to electronically transfer money from a financial institution to a creditor, and (3) incurring overdraft expenses because the spouse could not be informed in a timely manner about a cash advance that the servicemember requested. Individuals in our focus groups suggested that access to the Internet and telephones may not be the same across pay grades and services. For example, some servicemembers noted that deployed junior enlisted personnel sometimes had less access to the Internet than did senior deployed personnel, making it difficult for the former to keep up with their bills. In addition, some Army servicemembers told us that they (1) could not call stateside toll-free numbers because the numbers were inaccessible from overseas or (2) incurred substantial costs—sometimes $1 per minute—to call stateside creditors. In contrast, Air Force servicemembers in Germany said that the cost of calls to stateside creditors from Iraq or Afghanistan was not an issue for them because the Air Force had provided telephone calling cards that could be used to make such calls free of charge. Failure to avoid or promptly correct financial problems can result in negative consequences for servicemembers. These consequences include increased debt, bad credit histories, and poor performance of duties when servicemembers are distracted by financial problems. In addition, servicemembers who cannot stay on top of their finances while deployed may require assistance from officials in their chain of command to address financial problems, which takes those officials away from their normal military duties. This can translate into adverse effects on a unit's readiness and morale. DOD lacks the results-oriented, departmentwide data needed to assess the effectiveness of its PFM programs and provide the necessary oversight. The principles of the Government Performance and Results Act of 1993 offer federal agencies a methodology to establish a results-oriented framework that includes strategic plans for program activities identifying, among other things, program goals and performance measures and providing for reporting on the degree to which goals are met. These principles would assist DOD in shifting the focus of accountability for its PFM program from outputs, such as the number of training classes provided, to outcomes, such as the impact of training on servicemembers' financial behavior. The November 2004 DOD instruction that provides guidance to the services on servicemembers' financial management does not address program evaluation or the reports that services should supply to DOD for its oversight role. However, an earlier draft of the instruction included these requirements. In our 2003 report, we noted that the earlier draft instruction emphasized evaluating the programs and cited metrics such as the number of delinquent government credit cards, servicemembers with wages garnished, and administrative actions for financial indebtedness and irresponsibility taken under the Uniform Code of Military Justice. When asked what caused the evaluation and oversight reporting requirements to be dropped from the finalized instruction, DOD officials said that they were eliminated because of objections voiced by the services. The DOD officials told us that the services did not want the additional reporting requirements.
DOD's 2002 Social Compact noted that the impact of efforts to improve financial literacy cannot be determined without effective evaluation. The Social Compact also stated that a systematic approach to measuring PFM programs is needed to identify best practices and improved program performance. Currently, the only DOD-wide evaluative data available for assessing the PFM programs and servicemembers' financial conditions are obtained from a general-purpose annual survey that focuses on the financial conditions of servicemembers as well as a range of other unrelated issues. The data are limited because DOD policy officials for the PFM programs can add only a few finance-related items to this general-purpose survey. Additionally, a response rate of 35 percent on the March 2003 active duty survey leads to questions about the generalizability of the findings. Furthermore, DOD has no means for confirming the self-reported information for survey items that ask about objective events such as filing for bankruptcy. Without a policy requiring common DOD-wide evaluation and reporting relationships between DOD and the services, DOD will continue to have limited oversight to make improvements in the PFM programs and limited ability to achieve a standardized evaluation system. In addition, Congress will not have the visibility or oversight it needs to address issues related to DOD's financial management training and assistance to servicemembers. Currently, service-specific efforts to assess the PFM programs are largely in their early stages. The services told us that they are developing outcome measures for evaluating their PFM programs, but none was operational at the time of our review. In spring 2005, the Navy plans to develop and refine Navy-wide metrics such as the number of sailors exhibiting good and poor financial behaviors, e.g., participating in the government's retirement plan, filing for bankruptcy, and bouncing checks. Similarly, in the third quarter of fiscal year 2005, Army officials said they expect to implement outcome measures for assessing programs such as Financial Readiness, Family Advocacy, and Relocation Readiness. The Marine Corps and Air Force did not provide details for their plans to develop results-oriented data or indicate when evaluation systems would be operational. Additionally, our visits to 13 installations in the United States and Germany revealed much variability with regard to the use of performance metrics. The installations that provided us with their metrics often used output measures such as the number of people trained, rather than results-oriented outcome measures. Some junior enlisted servicemembers are not receiving the required PFM training. While each of the services implements PFM training differently, all of the services have policies requiring that PFM training be provided to junior enlisted servicemembers. At the time of our review, the services' policies varied on where and when the initial training should occur. For example, the Army, Marine Corps, and Air Force regulations required the training at the servicemembers' first duty station; however, the Navy guidance required such training prior to the servicemembers' first duty station. Despite having these policies, some servicemembers have not received the required training, but the extent to which the training is not received is unknown because servicewide totals are not always collected. Table 3 shows how each service monitors PFM training.
The Marine Corps, for example, only tracks PFM training at the unit level and does not tabulate these data for a servicewide total. As shown in the table, the Army was the only service that collected installation-level PFM data and could provide a rough servicewide estimate of PFM training completed by junior enlisted servicemembers. Overall, the Army estimates that about 82 percent of its junior enlisted soldiers completed PFM training in fiscal year 2003, leaving 18 percent who did not receive training. PFM program staff at five of the six Army installations we visited told us that required PFM training was not being provided to all first-term soldiers. Some of the senior Army officers at these installations acknowledged the need to provide the PFM training to junior enlisted servicemembers but also noted that current deployment schedules limited the time available to prepare soldiers for their warfighting mission. The officers said they believed that improving servicemembers' ability to perform duties related to their mission (e.g., firing a weapon) was more important than improving their personal financial literacy. In addition to how the services monitor servicemembers' completion of PFM training, table 3 also shows that the services' requirements for PFM training for junior enlisted personnel differ on three other characteristics: where the requirements are documented, the length of training, and when the training is administered. The Navy is the only service that specifies in servicewide regulations the number of hours of PFM training that junior enlisted servicemembers must complete. The oversight office for the Army identified the number of hours of required PFM training for first-term soldiers in a 1998 memorandum to the Army Chief of Staff. The Air Force and Marine Corps do not specify the number of hours in servicewide regulations or other documents. The Navy's required length of PFM training for junior enlisted servicemembers is 4 hours longer than the Army requirement. The Air Force and Marine Corps have no minimum requirement pertaining to the length of the PFM training provided on their installations. The services use different schedules for identifying when PFM training is to be administered. PFM managers noted that these schedules take into account service-specific constraints, such as the length of time available for PFM training at servicemembers' first duty station. Top-level DOD officials have stated repeatedly that financial issues have a direct effect on servicemembers' mission readiness and that the lack of basic consumer skills and training in finances sets the stage for financial difficulties. For example, we reported in 2003 that a 2002 Navy report to Congress had identified $250 million in productivity and salary losses due to poor personal financial management by servicemembers. Therefore, units whose servicemembers do not receive required PFM training risk jeopardizing their ability to meet mission requirements. Some services are taking steps to improve their monitoring of PFM training. During the second quarter of 2005, Army officials said they hope to implement the Army's Client Tracking System, which will allow the service as well as current and future installations to track the financial counseling and training that servicemembers receive. The Marine Corps is updating its order on personal services and developing a system to track financial management training.
While such steps may improve the monitoring of PFM training completion—an important output—they still do not address the larger issues of training outcomes such as whether or not PFM training helps servicemembers to manage their finances better. Although DOD-wide data show that the financial conditions for deployed and non-deployed servicemembers and their families are similar, some deployed servicemembers experience delays in obtaining their monthly family separation allowance. Not receiving this compensation each month to help defray extra household costs incurred when the servicemembers are deployed can result in financial hardship for the servicemembers’ family. Without changes to the administration of the family separation allowance, DOD risks placing a further financial strain on servicemembers. In addition, problems communicating with creditors during deployment can cause financial difficulties for servicemembers. Limited Internet access, delays in ground mail, and the high cost of calling from overseas often prevent servicemembers from promptly contacting creditors when financial issues arise. Delays in responding to creditors can result in serious consequences, including bad credit ratings for the servicemembers and adverse effects on unit readiness and morale. While DOD states in its Social Compact that a standardized evaluation system to measure the effectiveness of the PFM programs is a desired goal, the department does not have an oversight framework that includes the performance measures and reporting requirements needed to fully measure results from its programs. In addition, the absence of evaluation and reporting requirements in DOD’s newly issued instruction on personnel financial management suggests that DOD will continue to have limited visibility and oversight over the PFM programs and little ability to require standardized assessments of the PFM programs. These deficiencies, in turn, will limit Congress’ ability to address issues related to DOD’s PFM programs. While DOD and service officials have acknowledged that the lack of PFM training sets the stage for servicemembers having financial difficulties later, high deployment levels limit the time available for some servicemembers to take the PFM training. The absence of servicewide systems for monitoring the completion of this required training could result in some servicemembers never being provided such training if they are unable to take it at the prescribed time. Moreover, the lack of a monitoring system also will hamper efforts to improve PFM training since it will be impossible to establish a measurable relationship between whether or not someone completed training and how well they subsequently managed their finances. To address issues related to servicemembers’ financial management, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to take the following four actions: Take the necessary steps, in conjunction with the Defense Finance and Accounting Service and the services, to ensure servicemembers receive family separation allowances on a monthly basis during deployments. 
These steps might include those recommended in our prior review of Army Reserve pay, such as clarifying and simplifying the procedures and forms used to implement family separation allowance entitlements or having DOD and the operational components of the services work together to ensure that the family separation allowance entitlement eligibility form is received by the Defense Finance and Accounting Service to start the allowance when the servicemember is entitled to it.

Identify and implement, with the services, steps that can be taken to allow deployed servicemembers better communications with creditors. These steps may include increasing Internet access and providing toll-free telephone access for deployed servicemembers when they need to address personal financial issues.

Develop and implement, in conjunction with the services, a DOD-wide oversight framework with a results-oriented evaluation plan for the PFM programs and formalize DOD's oversight role by including evaluation and reporting requirements in the PFM instruction.

Require the services to develop and implement a tactical plan with time-based milestones to show how the appropriate service policy office will monitor financial management training and thereby ensure that junior enlisted servicemembers receive the required training.

On March 17, 2005, we provided a draft of this report to DOD for review and comment. As of the time this report went to final printing, DOD had not provided comments as requested. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time we will provide copies of this report to interested congressional committees and the Secretary of Defense. We will also make copies available to others upon request. This report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-5559 ([email protected]) or Jack E. Edwards at (202) 512-8246 ([email protected]). Other staff members who made key contributions to this report are listed in appendix IV.

In addressing the objectives of our engagement, we limited our scope to active duty servicemembers because we have previously issued a number of reports on the compensation, benefits, and pay-related problems of reservists. Emphasis was placed on servicemembers who had returned from a deployment within the last year because these individuals were most likely to have recent personal knowledge of deployment-related financial issues, as well as familiarity with financial issues of servicemembers serving on installations in the United States. During the course of our work, we visited 13 installations with high deployment levels, as identified by service officials (see table 4). During these site visits to installations in the United States and Germany, special emphasis was given to ascertaining the financial conditions of junior enlisted servicemembers because DOD and service officials have reported that this subgroup is more likely to encounter financial problems.
To address the extent to which deployment has a financial impact on active duty servicemembers and their families, we reviewed and analyzed laws, policies, and directives governing military pay, such as the Servicemembers Civil Relief Act and DOD's Financial Management Regulation 7000.14R, Volume 7A, as well as documents related to the tax treatment of military pay, including the Internal Revenue Service Armed Forces' Tax Guide: For Use in Preparing 2003 Returns. We also reviewed and analyzed GAO reports on military compensation and deployment and reports from other agencies, including DOD, the Congressional Research Service, and the Congressional Budget Office. We contacted the Federal Trade Commission to ascertain what data were available through Military Sentinel on servicemembers' financial conditions and complaints. We conducted focus groups and surveyed servicemembers and spouses and held individual interviews with PFM program managers, non-commissioned officers, and legal assistance attorneys at installations we visited to obtain their perspectives on the impact of deployment on servicemembers. We also compared and contrasted the results of our survey of servicemembers and spouses with data obtained through DOD-wide active duty surveys from 2003 to check face validity and to identify trends and other indicators of financial impact. We assessed the reliability of survey data that DOD uses to obtain information on the financial conditions of servicemembers and their families. The March 2003 survey had a response rate of 35 percent. DOD has conducted and reported on research to assess the impact of this response rate on overall estimates. DOD found that, among other characteristics, junior enlisted personnel (E1 to E4), servicemembers who do not have a college degree, and members in services other than the Air Force were more likely to be non-respondents. We have no reason to believe that potential non-response bias not otherwise accounted for by DOD's research is substantial for the variables we studied in this report. Therefore, we concluded that the data were sufficiently reliable to address our objectives. Additional perspectives regarding the financial impact of deployment were obtained in interviews with DOD and service policy officials. Still other perspectives were obtained from installation officials using the structured interviews and an e-mail survey to all PFM program managers. This information was supplemented with information obtained from three group discussions with a total of 50 personnel affiliated with the PFM programs while they attended a November 2004 conference. We also reviewed family separation allowance data from the Defense Finance and Accounting Service for servicemembers who were deployed and receiving imminent danger pay in January 2005. To facilitate the data-gathering process for all three questions, we developed and pre-tested four types of data collection instruments. The content of the instruments was identified through review of policies, reports, and other materials, and from interviews with DOD and service officials. Structured questionnaires and focus group protocols were used to increase the likelihood that the questions were asked and procedures were conducted in a standardized manner, regardless of which GAO analyst conducted the interviews and focus groups during the 13 site visits.
While the interviews and focus groups provided valuable qualitative data to illustrate important issues, the findings were not generalizable to the population of all active duty servicemembers because of the small non-random samples of personnel who participated in the data collection sessions. Separate structured interview protocols were created for seven types of officials: installation commanders, PFM program managers, senior non-commissioned officers (E8 to E9), legal assistance attorneys, chaplains, command financial specialists, and officials representing service relief/aid societies. While some of the questions were the same or very similar for some issues, the content of the structured interviews was tailored to the type of official interviewed. A single focus group protocol, with seven central questions and follow-up questions, was used to solicit information from each of the four types of homogeneous groups: junior enlisted servicemembers (E1 to E4), non-commissioned officers (E5 to E9), company-grade officers (O1 to O3), and spouses of servicemembers who had recently returned from deployments. An anonymous survey was administered at the beginning of each focus group to obtain specific, sensitive information (e.g., financial difficulties experienced by the servicemembers and their families) that focus group participants might not feel comfortable discussing with other servicemembers present. Administering the survey before the focus group questions were asked allowed us to quantify participants' perspectives and situations, without the servicemembers being influenced by the subsequent discussions. An e-mail survey was administered to the DOD-wide population of 225 PFM program managers identified by service officials. The response rate for the survey was 74 percent. Because we surveyed the population of PFM program managers and obtained a sufficiently high response rate, the findings from this survey are generalizable to the population of all PFM managers. To assess the adequacy of DOD's oversight framework for evaluating military programs that assist both deployed and non-deployed servicemembers in managing their personal finances, we reviewed DOD's, the services', and selected installations' PFM program policies, along with DOD's strategic and tactical plans for implementing the PFM programs. In addition, we reviewed DOD's 2002 report on Personal and Family Financial Management Programs submitted to the House of Representatives Armed Services Committee. The Government Performance and Results Act of 1993 and Standards for Internal Control in the Federal Government provided model criteria for determining the adequacy of the oversight framework. We gathered perspectives about the outcome measures to evaluate the PFM programs from DOD and service-level officials, along with responses from the previously mentioned discussion groups at the November 2004 conference and the DOD-wide survey of PFM managers. We reviewed and analyzed data related to the effectiveness of the PFM programs from the DOD-wide active duty survey conducted in 2003. We also reviewed accreditation reports for installation PFM programs, where available, and other materials documenting the use or effectiveness of PFM programs. Finally, we attended a GAO-sponsored forum in November 2004, in which a select group of individuals with expertise in financial literacy and education developed recommendations on the role of the federal government in improving financial literacy among consumers.
To assess the extent to which DOD and the services provide PFM training to junior enlisted servicemembers, we examined the regulations and other materials that document PFM training requirements, such as the number of hours of training provided and when the training should occur. We reviewed DOD's, the services', and selected installations' PFM training materials, and procedures for monitoring completion of the training. We also reviewed reports issued by GAO, DOD, and other organizations that addressed the PFM programs or the content and delivery of similar programs designed to either increase financial literacy or address financial problems. Additionally, we interviewed service headquarters officials, as well as installation PFM officials, about required training for junior enlisted servicemembers and how it is administered and monitored. The e-mail survey that was administered by GAO to the DOD-wide population of 225 PFM program managers is not subject to sampling error since it was sent to the universe of PFM program managers. With a response rate for the survey of 74 percent and no clear differences between respondents and non-respondents, the findings from this survey are generalizable to the population of all PFM managers. Our PFM survey had differential response rates that were as low as 65 percent for the Air Force and as high as 89 percent for the Navy. The questionnaire provided to focus group participants was intended to gather supplemental information only; its results are not generalizable to DOD but apply only to those who participated in our focus groups. Because DOD surveyed a sample of servicemembers in its 2003 active duty survey, its results are estimates and are subject to sampling errors. However, the practical difficulties in conducting surveys of this type may introduce other types of errors, commonly known as non-sampling errors. Non-sampling errors can include problems with the list from which the sample was selected, non-response in obtaining data from sample members, and inadequacies in obtaining correct data from respondents. These errors are in addition to the sampling errors. In this survey, the response rate was 35 percent. The estimates obtained from the respondents will differ from the population value to the extent that values for non-respondents are different, in the aggregate, from values for respondents. We conducted in-depth pre-testing of the PFM program manager survey, as well as the questionnaire disseminated to focus group participants, to minimize measurement error. However, the practical difficulties in conducting surveys of this type may also introduce non-sampling errors. For example, measurement errors can be introduced if (1) respondents have difficulty interpreting a particular question, (2) respondents have access to different amounts of information in answering a question, or (3) those entering raw survey data make key-entry errors. We took extensive steps to minimize such errors in developing the questionnaire, collecting the data, and editing and analyzing the information. For example, we edited all surveys for consistency before sending them for key-entry. All questionnaire responses were double key-entered into our database (that is, the entries were 100 percent verified), and a random sample of the questionnaires was further verified for completeness and accuracy. In addition, we performed computer analyses to identify inconsistencies and other indicators of errors.
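The statement above about respondents and non-respondents can be expressed with a standard survey-methodology identity; the formulation below is a textbook expression, not a formula taken from DOD's or GAO's documentation. Let bar-y_r denote the mean of a survey item among respondents, bar-y_m the mean among non-respondents, bar-Y the mean for the full population of interest, and W_m the proportion of that population that does not respond. Then, in LaTeX notation:

\[
\operatorname{Bias}(\bar{y}_r) \;=\; \bar{y}_r - \bar{Y} \;=\; W_m\,(\bar{y}_r - \bar{y}_m).
\]

With a 35 percent response rate, W_m is roughly 0.65, so a 10-percentage-point difference between respondents and non-respondents on an item would shift the reported estimate by about 6.5 percentage points; this is why DOD's analysis of which groups were more likely to be non-respondents matters when interpreting the survey estimates.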
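As a concrete illustration of the double key-entry verification described above, the short sketch below compares two independent key-entry passes of the same questionnaires field by field and flags every mismatch for review against the paper form. It is a minimal, hypothetical example written in Python; the file names, the record layout, and the identifier column are assumptions made for illustration and do not describe GAO's actual data system.

import csv

def load_entries(path):
    # Read one key-entry pass into a dictionary keyed by questionnaire ID.
    # The identifier column name ("survey_id") is a hypothetical placeholder.
    with open(path, newline="") as f:
        return {row["survey_id"]: row for row in csv.DictReader(f)}

def compare_passes(first_pass, second_pass):
    # Return (survey_id, field, first_value, second_value) for every disagreement.
    mismatches = []
    for survey_id, first_row in first_pass.items():
        second_row = second_pass.get(survey_id)
        if second_row is None:
            mismatches.append((survey_id, "<record missing in second pass>", None, None))
            continue
        for field, first_value in first_row.items():
            second_value = second_row.get(field, "")
            if first_value.strip() != second_value.strip():
                mismatches.append((survey_id, field, first_value, second_value))
    return mismatches

if __name__ == "__main__":
    # The two CSV files below stand in for the two independent key-entry passes.
    pass_one = load_entries("pfm_survey_entry_pass1.csv")
    pass_two = load_entries("pfm_survey_entry_pass2.csv")
    for survey_id, field, value_one, value_two in compare_passes(pass_one, pass_two):
        print(f"Check survey {survey_id}, field {field}: pass 1 = {value_one!r}, pass 2 = {value_two!r}")

Each flagged record would then be checked against the original paper questionnaire, consistent with the 100 percent verification and the additional random-sample check described above.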
DOD also pre-tested its questionnaire to minimize measurement error and performed analysis to assess non-response error. We performed our work from March 2004 through February 2005 in accordance with generally accepted government auditing standards. We held focus group sessions at the 13 military installations we visited during the course of this engagement to obtain servicemembers’ perspectives on a broad range of topics, including the impact of deployment on servicemembers’ finances and the types of lenders military families use, along with the PFM training and assistance provided to servicemembers by DOD and service programs (see app. I for a list of installations visited). Servicemembers who participated in the focus groups were divided into three types of groups: junior enlisted personnel (E1 to E4), mid-grade and senior enlisted personnel (E5 to E9), and junior officers (O1 to O3). Although we requested to meet with servicemembers who had returned from a deployment within the last 12 months, some servicemembers who had not yet deployed also participated in the focus groups. At some installations, we also held separate focus groups with spouses of servicemembers. Typically, focus groups consisted of 6 to 12 participants. We developed a standard protocol, with seven central questions and several follow-up questions, to assist the GAO moderator in leading the focus group discussions. The protocol was pre-tested during our first installation visit and was used at the remaining 12 installations. During each focus group session, the GAO moderator posed questions to participants who, in turn, provided their perspectives on the topics presented. We essentially used the same questions for each focus group, with some slight variations to questions posed to the spouse groups. We sorted the 2,090 summary statements resulting from the 60 focus groups into categories of themes through a systematic content analysis. First, our staff reviewed the responses and agreed on response categories. Then, two staff members independently placed responses into the appropriate response categories. A third staff member resolved any discrepancies. Below, we have identified the seven questions and sample responses/statements associated with each question. The themes and the number of installations for which a statement about a theme was cited are provided in italics. Also, two examples of the statements categorized in the theme are provided. Only those themes cited at a minimum of three installations are presented. The number of installations—rather than the number of statements—is provided because (1) the focus of this engagement was on DOD-wide issues and (2) a lengthy discussion in a single focus group may have generated numerous comments. 1. How has deployment affected military families financially in your unit? 1.a. Other reason deployment affects families financially (N=13) Example: Financial problems stem from relationship problems. Many Marines file for divorce when they return from a deployment. Example: Another sailor said they have to buy a lot of supplies, such as stocks of deodorant and other toiletries, to take on the deployment. The government does not pay for those supplies. 1.b. Better financially – increased income (N=13) Example: A soldier stated that his family was barely making ends meet when he left for a deployment. However, when he returned, his wife had paid off all of the bills and saved some of the money. 
He and his wife look forward to deployments as a way to catch up on expenses and savings. Example: Some cited receiving additional hazardous/combat duty pay and attendant tax exemptions during deployment as reasons for the financial benefits. In addition, some servicemembers mentioned that they no longer had to pay rent and incur related household expenses such as food and other household goods while deployed. The additional money allowed families to pay off debts and outstanding bills. 1.c. Worse financially – increased needs (e.g., childcare and transportation) (N=12) Example: Deployment worsens some servicemembers’ finances because childcare expenses increased. In many instances, to avoid having childcare expenses, one parent will work during the day and one during the night. When the servicemember deploys, the remaining spouse must find suitable daycare for the children. This is an added expense the deployment forces on the family. Example: During a deployment there are more expenses because the spouse has to pay for things that the servicemember would usually do personally, like house and car repairs. 1.d. Worse financially – other (N=11) Example: The military encourages soldiers to obtain a power of attorney before they deploy, but the power of attorney gives the spouse access to all of the soldier’s finances. In many cases, the spouse has used this power to spend all of the soldier’s money. One soldier returned from his deployment to find that he only had $80 left in his bank account. Example: One unmarried soldier said he was 5 months behind in paying his bills because he’s single and did not have anyone to help him while he was deployed. 1.e. No change financially because of deployment (N=11) Example: Overall, servicemembers are not really making more money when they are deployed. The additional pay and allowances make up for the increased spending that a family must do when the servicemember is not at home. Example: Another servicemember stated that she was a single parent and had to send her child back to the west coast with her parents. She stated she came out about even financially because the extra money she made was spent on the additional expenses to care for her son. 1.f. Effect issue – servicemember has dependents (N=11) Example: Single parents face an entirely different set of issues during a deployment. For example, in many cases, the member will be the only parent for a child; therefore, when that member is deployed long-term childcare must be arranged. In most situations, the member will arrange for an immediate or extended family member to assume the childcare responsibilities. Example: Some Navy servicemembers said that the status of personal finances during a deployment will vary based on the marital status of the sailor. For example, sailors with dependents will collect more entitlements than those who are single. 1.g. Worse financially – increased wants (N=11) Example: Some soldiers were buying expensive cars with their deployment pays. However, when the servicemembers returned from deployment to their regular pay they were not able to afford their deployment standards of living because the increase in income and tax free status no longer applied. Example: The spouse may be depressed during the deployment and spend the money the soldier is being paid. In these cases, they have no one around telling them to save it or to pay the bills. They shop to fight the depression and to make themselves feel better. 1.h. 
Better financially – other (N=10) Example: In some cases, the family’s finances actually improve because the spouse takes control of the bills during the deployment. Example: Another participant stated that she and her husband are more financially responsible now compared to when they were younger. Thus, they are able to benefit more from the monetary benefits of deployment. 1.i. Effect issue – personal ability to manage money (N=9) Example: Poor post-deployment spending habits (e.g., buying a new expensive car) of some single servicemembers caused them to lose extra income earned during deployment. This left them with more debt than before they left for the deployment. Example: In many cases, it is when the soldier returns from the deployment that families will get into financial troubles. During the deployment, there is a significant increase in pay and an increase in spending. After the deployment, the servicemember’s pay returns to normal and the family may have trouble dealing with the loss of income, which can encourage increased debt. 1.j. Effect issue – servicemember does not have dependents (N=7) Example: Single servicemembers seemed to fare better financially because they do not incur the same expenses as married couples, such as childcare and transportation costs. The single member is more likely to be living with roommates and when deployed, he/she only has a small amount to pay for rent. The married servicemember, on the other hand, still has a mortgage to pay back home, along with the additional expenses previously mentioned. Example: Single servicemembers are better off financially because they only have to take care of themselves financially. 1.k. Effect issue – where deployed (N=6) Example: The effect on finances depends on the location to which a servicemember is deployed. The pay and allowances that a soldier receives vary from location to location. In some places, soldiers can make a lot of money; in others, they will not. Example: The financial impact of deployment depends on where an officer was deployed. In South Korea, servicemembers pay taxes and do not receive extra pay, as did those who served in combat zones. In addition, individuals deployed to South Korea lost their Basic Allowance for Housing, even though they needed it while deployed. The officer needed to live off base because of a lack of housing on base there. This meant paying for two households, one on deployment and one for the spouse and children at home. 1.l. Worse financially – loss of income (N=5) Example: Some spouses mentioned that they know of some soldiers that had to give up their second jobs when they left on the deployment and the loss of this income had a big impact on the family’s finances. Example: While at their home station, sailors collect commuted rations, also referred to as comrats. Commuted rations are a pay allowance given to sailors to cover the cost of meals incurred off base when they are not serving on and eating aboard the ship. When a sailor goes out to sea, the commuted rations payments are stopped and sea pay is started. Also, a sailor is entitled to Career Sea special pay, or sea pay, at a monthly rate of up to $750. The actual amount of sea pay varies based on the sailor’s rank and number of years served and can range from $70 to $750 a month. However, younger sailors do not have enough time accrued on their sea pay clock to make up for the loss of commuted rations pay. Therefore, some families will actually lose money during the deployment. 1.m. 
Better financially – decreased expenses (N=5) Example: At some deployment locations, there is nowhere to spend the extra income. There are no bars, no daily expenses like gasoline, and no phone bills. Yet the Marines are being paid the additional entitlements and pay. Example: One participant said she thought her family’s finances were in better shape during her husband’s deployment because he was not able to spend the extra money he earned and the family was able to save more money while he was deployed. 2. Could you tell me about servicemembers you know who have gone through any financial difficulties such as declaring bankruptcy, falling behind on bills, or having a car or appliance repossessed? 2.a. Overspending/bad money management (N=13) Example: There were servicemembers who ran into severe financial problems after they returned from deployment due to overspending and overextending themselves financially while they were deployed. Example: Another participant said that he knew of a few junior enlisted servicemembers who spent all their money on expensive cars and other things, once they returned from deployment. They did not save any of the extra money they received. 2.b. Other experiences with financial difficulties (N=13) Example: One airman experienced a situation in which a creditor would not accept the automatic money transfer that was set up before the deployment. Example: One soldier’s ex-wife took him to court while he was deployed in an attempt to obtain additional child support money. Because of the additional entitlements and pay that the soldier was collecting, the court increased the payments to match. The soldier was unable to return home or communicate to prevent the action or mediate in the situation. 2.c. Defense Finance and Accounting Service errors (N=11) Example: One of the airmen had a series of late payments during a deployment because Defense Finance and Accounting Service did not process an allotment correctly and the money was not getting sent to the correct place. Example: Almost all of the airmen knew someone who did not have their pay entitlements stopped after returning from the deployment. In most instances, Defense Finance and Accounting Service was continuing to pay the entitlement for several months; unfortunately, once the problem was resolved, Defense Finance and Accounting Service took back the amount owed in one lump sum. This left the airmen with paychecks amounting to zero dollars. 2.d. Communication problems (lack of Internet/e-mail/mail/phone) (N=10) Example: A servicemember stated that a major issue with deployment was not being able to pay bills on time because the infrastructure down range (combat zone) was not immediately set up to deliver/send mail. Example: During deployments, the junior enlisted personnel do not have as much access to the Internet as the senior Marines. This can have a negative impact on their ability to access their checking and other financial accounts, thereby impacting their ability to manage their finances. 2.e. Difficulty maintaining checkbook/finances (N=10) Example: Many servicemembers have the mentality that because they earn the money it is theirs to manage. When the soldier is at home, he or she controls the finances; and when the soldier leaves, the spouse does not know how to handle the bills, finances, or budget. Example: In many situations, single sailors may not have someone back home to take care of their bills or manage their finances. 2.f. 
Car repossessed (N=9) Example: Some soldiers spent their money quickly after they returned from the deployment and bought expensive cars. In a few instances, these cars were repossessed because the soldiers could not make the monthly payments. Example: A soldier stated that some servicemembers’ allotments were not processed, which resulted in their cars being repossessed. This also left the servicemembers with a bad credit rating. 2.g. Did not experience financial difficulties during deployment (N=6) Example: A participant stated he knew of very few soldiers who were negatively affected financially because of deployment. Example: Those who fared well with their finances had relationships with helpful people/spouses who were able to manage their finances for the servicemembers while they were deployed. 2.h. Fell behind in bills (N=6) Example: A servicemember said that he and his spouse had fallen behind on paying their bills. Example: A soldier said that a servicemember’s phone was disconnected because his spouse went to another state to visit relatives for 2 months and the phone bill was not paid. 2.i. Bankruptcy (N=5) Example: Participants stated that they had heard of very few servicemembers who had to file for bankruptcy as a result of deployment. Example: One of the officers was aware of a sergeant who had to file bankruptcy upon returning from deployment. During the deployment, the sergeant’s spouse spent all of the extra money and took out “a ton” of additional debt. 2.j. Problems with government credit card (N=4) Example: The government travel card causes more problems than other cards. Sailors are traveling back to back with several deployments and take out back to back debts. The Travel Processing Center may not process the travel claims in 10 days like they are supposed to, so people are running up debt on the government travel card that they cannot pay off. Example: Sometimes servicemembers have had to pay (their government travel card bill) with their own money while waiting for funds to be provided/reimbursed by the government. This takes money out of their household and can affect their credit rating. It can take up to 2 months to get their money from the Defense Finance and Accounting Service. 3. During your deployment, how did servicemembers in your unit handle situations when there were financial problems at home? 3.a. Used in-theatre resources (chain of command, e-mail, Internet) (N=10) Example: Soldiers had to go through their chain of command to take care of some of their financial situations and the issues were resolved with the assistance of the chain of command. Example: Most of the other participants said they had a non- commissioned officer log them onto the Internet to check on their bills, and this helped them. 3.b. Used resources at home (family support center, family readiness officer) (N=8) Example: There are many people on base that help spouses during the deployment. The key volunteers group that meets once or twice a week is a good resource for the families to use if they need assistance during the deployment. Example: On Air Force bases, there is an abundance of assistance for servicemembers with financial problems. Information is provided through: First Term Airman Center, Personal Financial Counseling, Air Force Aid Society, Air Force Assistance Fund, First Sergeants, Finance, and the Judge Advocate General. These are some of the resources available to servicemembers for finance-related issues. 3.c. 
Other financial problems on homefront (N=5) Example: Sometimes a single servicemember will leave advance rent checks for the landlord of the apartment and the landlord will deposit all of the checks at once, which results in overdrafts for the servicemember. Example: There are many instances of spouses back home that spend all of the additional income that the Marine is making during the deployment. When the Marine returns, he or she will find all of their money gone and nothing to show for it. 3.d. Waited until they got home (N=5) Example: Some participants said they just waited to handle the problems until after they returned home if they do not have anyone to help them and the situation had not been brought to the command’s attention. They did not want the command involved in their finances. Example: In instances where the servicemember’s spouse spends all of the money, the member normally is not able to do anything until he or she returns from the deployment. 4. What kind of financial assistance does your service or the military need to take care of financial problems when people are deployed? 4.a. Pre-deployment briefs (more information or briefs before deployment notice received) (N=11) Example: More financial awareness training prior to the deployment would have helped alleviate many problems that individuals experienced. The current 2-minute brief is not enough. Example: Even though the base legal office offers a will and power of attorney class every Tuesday, some Marines are unable to attend. The information in the classes needs to be incorporated into the pre- deployment briefings. 4.b. Other kinds of financial assistance needed (N=9) Example: Small groups, such as married servicemembers with children or single servicemembers, should be given specific attention or focus when information on finances is distributed because the different groups have different needs when it comes to finances. Example: The First Term Airmen Center should give out warnings to new airmen about which lenders around base are good to work with and which ones are not so good. 4.c. Sustained training (provided throughout career) (N=7) Example: Financial training should occur upfront and be proactive— not be reactive, like it is now. Currently, classes are required only if the soldier has written bad checks. Example: More overall financial education is needed. One soldier was enlisted for 5 years before he got any formal financial management training, and that was only because he got in trouble. Education is the key in improving financial management. 4.d. Early training (boot camp, Advanced Individual Training) (N=6) Example: The military needs to provide more financial training in basic/boot camp to include in-depth discussions of allotments, deductions, and leave and earnings statements. One soldier said he did not know what a leave and earnings statement was until he came to his unit. Example: Financial training courses should be incorporated into basic training or technical school. By conducting this training early, DOD may have an impact on initial purchase decisions made by servicemembers. 5. What kinds of experiences have your fellow servicemembers or subordinates had with predatory lenders? 5.a.Other issues regarding experiences with predatory lenders (N=13) Example: Business representatives will tell young Marines that they can buy an item for a certain amount each month. They keep the Marine focused on the low monthly payments and not on the interest rate or the term of the loan. 
Example: Some Marines feel that a business would not take advantage of them because they are in the military. This leads them to be more trusting of the local businesses than they should be, which in turn, leads the businesses to take advantage of them. 5.b. Predatory lender used – car dealers (N=11) Example: Most of the participants stated that the car dealerships around the base were the worst predatory lenders because they charge high interest rates and often provide cars that are “lemons.” They said that most of the sales people at the dealerships are former military who know how to talk to servicemembers to obtain the members’ trust. The servicemember does not expect this. Example: One captain had a Marine in his unit who signed a contract with a car dealer for a loan with 26 percent interest rate. The captain took the Marine to the Marine Credit Union and got him a new loan with 9.5 percent interest rate. 5.c. Predatory lender used – payday lenders (N=10) Example: A master sergeant got caught in the check-cashing cycle. He would write a check at one payday lender in order to cover a check written at another lender during a previous week. Example: One participant told us that when he was a younger Marine he got caught up with a payday lender. The problem did not resolve itself until he deployed and was not able to go to the lender anymore. 5.d. Reason for using predatory lender – get fast cash and no hassle (N=10) Example: People use payday lenders because they are quick and easy. All the soldiers have to do is to provide their leave and earnings statement and they get the money. Example: Most of the participants say they know people who have used a payday lender, and those soldiers use them because they have bad credit and can get quick cash. 5.e. Predatory lender targeting – close proximity and clustering around bases (N=9) Example: It is almost impossible to be unaware of lenders and dealerships because many are clustered in close proximity to the installation. They also distribute flyers and use pervasive advertising in local and installation papers. Example: The stores and car lots near the installation use signs that say “E1 and up approved” or “all military approved” to get the attention of the military servicemembers. 5.f. Command role when contacted by creditors (N=8) Example: The non-commissioned officers offer to go with the junior enlisted to places like car dealers; but the young soldiers do not take them up on these offers. Example: One participant said that debt collectors do call his house and the command. He noted that one lender called him nine times in one day and his Chief Petty Officer eventually asked the lender to stop harassing his sailor. 5.g. Predatory lender targeting – advertising in installation/local newspaper (N=7) Example: Soldiers are being targeted by predatory lenders in a variety of ways; for example, flyers are left on parked cars at the barracks, advertising is done at installation functions, and words such as “military” are used on every piece of advertising to make the servicemember believe that the company is part of or supported by the military. The servicemember would normally trust lenders associated with the military. Example: Most predatory lenders have signs that say “military approved” or have commercials that say the same thing or “E1 and above approved.” 5.h. 
Reason for using predatory lender – urgent need (N=6) Example: Many soldiers use payday lenders because they are in a bind for money and they know these lenders can provide quick cash. Example: Soldiers will use a payday lender because they need money for a child, the kids, the house payment, etc. In many cases, it does not matter why they need it; they just need it. So, they go where they can get cash the fastest and the easiest way possible. 5.i. Predatory lender used – furniture/rent-to-own (N=6) Example: One of the participants stated that he had obtained a loan to purchase a new washer and dryer. The loan had a 55 percent interest rate and the appliances cost a lot more than they should have. Example: Rent-to-own businesses are widely used by soldiers. One soldier paid $3,000 for an $800 washer and dryer set. 5.j. No problem with predatory lenders (N=5) Example: There have not been any problems with predatory lenders lately. The state of Florida has been using legislation to shut them down. Example: The participants said that they had never encountered an officer that had to use payday lenders or predatory lenders. Most of the officers’ problems come when they have a bitter divorce. 5.k. Reason for using predatory lender – other reasons (N=5) Example: One soldier stated that his credit was so bad that he had no other option but to use high interest rate lenders. He stated that, “I have bad credit and I will always get bad credit.” Example: One participant said he has several friends that use payday lenders because they are E1s or E2s and don’t make much money. 5.l. Predatory lender targeting – employing former military members (N=4) Example: The people running and working for the predatory businesses are usually former military servicemembers. They will use their knowledge of the system to take advantage of Marines. Example: Many times the predatory lenders are veterans, former Marines, or retirees. The participant said that by using these types of people, it gives the younger Marines a false sense of trust and then the lenders will take advantage of the servicemember or “stab them in the back.” 5.m. Reason for using predatory lender – command will not know financial conditions (N=3) Example: When a soldier needs money, a payday loan can be used without notifying the chain of command. Any of the Army forms of assistance require a soldier to obtain approval from “a dozen people” before they can get any money. Example: The most significant reason that people use payday lenders is privacy. The spouses stated that if you try to obtain assistance through the Air Force, you must use the chain of command to obtain approval. By doing so, everyone in the unit will know your business. 6. What types of financial services have fellow servicemembers and/or subordinates in your unit used? 6.a. Service relief/aid societies (N=13) Example: Servicemembers are often reluctant to approach Army Emergency Relief Society because they have to complete too much paperwork. Some have concerns that their superiors will find out that they used these services and superiors may think this is a sign of weakness or failure on the part of the servicemember. Example: One soldier stated that he used the Army Emergency Relief Society because he did not have good credit and needed $1,400 as a security deposit. He said they gave him a loan and that he is paying them back at $60 per month. 6.b. Other types of services used/aware of (N=13) Example: Assistance is available for Marines with financial problems. 
For example, there is a Key Volunteers Network made up of enlisted and officers’ wives. Example: One of the sailors was having financial problems and did not want the command to know, so he sought help from the Federal Credit Union. The credit union was able to help with the $50,000 he had accumulated in debt. They contacted the lenders for him and told them not to contact anyone in the command about the problem. The debt was re-organized and repayment began. All of this was accomplished without the help of the Navy. 6.c. Community service center/family support center’s personal financial managers (N=13) Example: Some servicemembers who have problems have received help from Army Community Services. Army Community Services does not provide money or loans but does give some household items such as pots and pans and these items do provide some help to those in financial trouble. Example: When supervisors recognize a subordinate is having financial problems, most of them will refer the subordinate to the family support center for counseling, budget planning, and basic personal finance skills like balancing a checkbook. 6.d. DOD Financial Readiness Campaign/services’ Internet resources (N=11) Example: None of the participants had heard of the Financial Readiness Campaign. Example: Only one of the 11 participants was aware of the Financial Readiness Campaign. The servicemember that did know about it said that the information was difficult to sort through and may not be helpful to those without a basic knowledge of finances. 6.e. Servicemembers Civil Relief Act (N=9) Example: One airman said that he used the Servicemembers Civil Relief Act to reduce his total indebtedness during his deployment. In fact, after returning from the deployment, the credit companies kept the interest rates at 6 percent or less. Example: One of the participants talked about how he used the Servicemembers Civil Relief Act to get out of a lease prior to deployment. 6.f. No services used or not aware that any service was used (N=7) Example: One participant said that there are financial services available but because they are not very well advertised, many servicemembers do not know about them. Example: The spouse stated she was not aware of any available assistance programs because information about programs does not get communicated well at the installation. 6.g. Legal office (N=6) Example: There is a legal office that can review purchase contracts while the sailor is at home and a legal assistance attorney onboard ship who can provide assistance. Example: Sometimes the family at home cannot take care of financial issues, even if they have power of attorney. The best solution is to obtain help from the on base legal office. 6.h. Command financial specialists (N=5) Example: Soldiers have used the command financial specialist within their units to receive counseling, training, and information. Example: Most of the participants said that they had a command financial specialist in their unit but did not use these individuals, primarily because of a lack of trust. They said that if a servicemember talked about financial problems with these people, it would end up through the chain of command. If someone were to see a servicemember in the command financial specialist’s office, then they would know/assume the servicemember had a financial problem. 7. Is there additional assistance that could be provided to servicemembers or subordinates by the chain of command or DOD to improve the financial condition of military families? 7.a. 
Additional financial management training at installation and throughout career (N=13) Example: Some of the participants said the briefings provided to soldiers during base “in processing” are too quick. They normally last about 10 minutes and that is not enough time to discuss financial matters. Example: There should be financial management training points throughout a sailor’s career. For example, basic training, Advanced Individual Training, reenlistment, and then annual recurring training. 7.b. Other additional assistance (N=12) Example: A soldier stated that the offices that provide finance information are closed when the servicemembers get off work. Their hours should be longer because the soldiers’ unit will not allow them time off to go to the finance centers just to browse and acquire general financial information. Example: The military credit unions should be combined into one institution. No more Marine, Navy, or Army Federal Credit Unions, just one large credit union. This would lead to more lending power and better interest rates. 7.c. More money (N=10) Example: All military members should get pay raises. The pay increase should be significant and not just a few dollars every paycheck. People are dying every day for their country, so they should get paid well. Example: Servicemembers, particularly in the junior enlisted ranks, should be given more pay. 7.d. Improve timeliness/accuracy of Defense Finance and Accounting Service (N=7) Example: Make the finance office provide more timely reimbursement for vouchers. One soldier just got back from Iraq and said that currently, it takes the Defense Finance and Accounting Service about 6 months to pay the voucher. Example: The deployment actually messes up the servicemember’s paychecks. When starting the deployment, the addition of certain pay and allowances and the subtraction of other allowances are never done quickly and efficiently. Defense Finance and Accounting Service is always either overpaying or underpaying the Marine. When they overpay, they take the money back in one shot, not over a period of time. 7.e. Armed Forces Disciplinary Control Board/off-limits list (N=7) Example: When the Armed Forces Disciplinary Control Board does put a business on the off-limits list, the word is not put out and it is never enforced. Example: The Navy needs to blacklist places that practice predatory lending. One participant, who is a legal officer in her unit, does provide a list of places to avoid to her sailors when they check in even though she is not allowed to do this. She does not understand why the Navy is allowed to tell sailors not to go to a porn shop, but is not supposed to tell them not to go to predatory lenders. The Navy needs some type of list of businesses that have done questionable things. It does not necessarily have to be an “off-limits” list. 7.f. Care packages (N=6) Example: It is common for spouses to send care packages to soldiers during a deployment. The expense of shipping these packages is significant. In addition, they generally include items for friends of soldiers who do not have spouses or families sending items. Example: Care packages can be expensive for the family, especially when they have to send equipment that is not supplied by the military. 7.g. Improve Internet access during deployment (N=5) Example: Navy should have better Internet access on the ships. They could provide Internet access in the library. Right now the junior enlisted have to ask officers to log them on. 
Example: The Navy needs to increase the number of computers on ships and access to the Internet. It is not beneficial to have Internet-based resources if no one can access the Internet during a deployment. Furthermore, when the sailors are at home station, the work computers are used for work and not for personal use. Therefore, the sailors still cannot access information on the Financial Readiness Campaign.

Several resources exist to assist servicemembers with financial issues. These include military-sponsored PFM training, DOD's Financial Readiness Campaign, individual service resources, such as command financial specialists and personal financial managers, and resources outside of DOD, such as those provided through on- and off-installation banks and credit unions. All four military services require PFM training for servicemembers, and the timing and location of the training vary by service. The Army begins this training at initial military training, or basic training, where soldiers receive 2 hours of PFM training. Training continues at Advanced Individual Training schools, where soldiers receive an additional 2 hours of training, and at the soldiers' first duty station, where they are to receive an additional 8 hours of PFM training. In contrast, Navy personnel receive 16 hours of PFM training during Advanced Individual Training. The Marine Corps and the Air Force, on the other hand, begin training servicemembers on financial issues at their first duty stations. Events, such as deployment or a permanent change of station, can trigger additional financial management training for servicemembers. The length of this additional training and the topics covered can vary by installation and command. Also, unit leadership may refer servicemembers for financial management training or counseling if the unit command is made aware of an individual's financial problems. For example, the Army requires refresher financial training for personnel who have abused check-cashing privileges. DOD's Financial Readiness Campaign, which was launched in May 2003, supplements PFM programs offered by the individual services. The Under Secretary of Defense for Personnel and Readiness stated that the department initiated the campaign to improve the financial management available to servicemembers and their families and to stimulate a culture that values financial health and savings. The campaign allows installation-level providers of PFM programs to access national programs and services developed by federal agencies and non-profit organizations. The primary components of the campaign are the Web-based resources and partnerships with federal agencies and non-profit organizations. The primary tool of the Financial Readiness Campaign is a Web site designed to assist PFM program managers in developing installation-level campaigns to meet the financial management needs of their local military community. This Web site, which is also available to the public, contains important documents for the campaign as well as links to partners' Web sites. For example, the DOD Web site contains the original memorandum announcing the start of the campaign, overall campaign objectives, as well as the names of, agreements with, and links to the campaign's 27 partner organizations. DOD's May 2004 assessment of the campaign noted, however, that installation-level PFM staffs have made minimal use of the campaign's Web site.
DOD campaign officials stated that it was early in the implementation of campaign efforts and that they had been brainstorming ideas to repackage information given to PFM program managers, as well as to servicemembers and their families. For example, officials are considering distributing financial information to servicemembers and military families at off-installation locations, as well as implementing "financial fairs" and "road shows" at military communities to increase awareness and encourage financial education. DOD has partnered with 27 organizations that have pledged to support DOD in implementing its Financial Readiness Campaign. For example, the Association of Military Banks of America is a not-for-profit association of banks that operate (1) on military installations, (2) off military installations but serve military customers, and (3) within military banking facilities designated by the U.S. Treasury. That association is supporting the Financial Readiness Campaign by encouraging member banks to provide, participate in, and assist DOD with financial training events. Another partner, the InCharge Institute of America, is producing a quarterly periodical called Military Money. The periodical is aimed at promoting financial awareness among the spouses of servicemembers. Each military service has several resources available at the installation level to assist servicemembers with financial issues. These include command financial specialists, the PFM program managers and staff, legal services, and service relief/aid societies. Command financial specialists are senior enlisted personnel (usually E6 and above) who are trained by PFM program managers to assist servicemembers at the unit level by providing financial education and counseling. These non-commissioned officers may perform the role of the command financial specialist as a collateral duty in some units or as a full-time duty in others. The Navy, Marine Corps, and Army use command financial specialists to provide unit assistance to servicemembers in financial difficulties; the Air Force does not use command financial specialists within the unit, but has the squadron First Sergeant provide first-level counseling. Individual servicemembers who require counseling beyond the capability of the command financial specialists or First Sergeant in the Air Force can see the installation's PFM program manager or PFM staff. The PFM program manager is a professional staff member designated and trained to organize and execute financial planning and counseling programs for the military community. PFM program managers and staff offer individual financial counseling as well as group classes on financial issues. Army, Navy, and Marine Corps regulations state that each installation should have a manager for PFM issues. The Air Force no longer designates one staff member as the PFM program manager, but it uses "work life consultants" in its family support centers to provide PFM training and counseling. DOD's November 2004 PFM instruction places certain requirements on staff who provide PFM training and counseling. For example, it states that one staff member within a family support center shall be designated and trained to organize and execute financial planning and counseling programs for the military community. In addition, that staff member must receive continuing education on PFM annually and maintain professional certification. Individual installation legal offices also offer financial services to servicemembers.
For example, the legal assistance attorneys may review purchase contracts for large items such as homes and cars. In addition, the legal assistance attorneys offer classes on various financial issues, including powers of attorney, wills, and divorces. Each service has a relief or aid society designed to provide financial assistance to servicemembers. The Army Emergency Relief, Navy-Marine Corps Relief Society, and the Air Force Aid Society are all private, non-profit organizations. These societies provide counseling and education as well as financial relief through grants or no-interest loans to eligible servicemembers experiencing emergencies. Emergencies include the need for funds to attend the funeral of a family member, repair a primary vehicle, or buy food. For example, in 2003, the Navy-Marine Corps Relief Society provided $26.6 million in interest-free loans and $4.8 million in grants to servicemembers who needed the funds for emergencies. Servicemembers may also use financial resources outside of DOD that are available to the general public. These can include banks or credit unions for competitive rates on home or automobile loans, commercial Web sites for interest rate quotes on other consumer loans, consumer counseling for debt restructuring, and financial planners for advice on issues such as retirement planning.

In addition to the individual named above, Leslie C. Bharadwaja; Alissa H. Czyz; Marion A. Gatling; Gregg J. Justice, III; David A. Mayfield; Brian D. Pegram; Terry L. Richardson; Minette D. Richardson; and Allen D. Westheimer made key contributions to this report.

Military Personnel: DOD Tools for Curbing the Use and Effects of Predatory Lending Not Fully Utilized. GAO-05-349. Washington, D.C.: April 26, 2005. Credit Reporting Literacy: Consumers Understood the Basics but Could Benefit from Targeted Educational Efforts. GAO-05-223. Washington, D.C.: March 16, 2005. DOD Systems Modernization: Management of Integrated Military Human Capital Program Needs Additional Improvements. GAO-05-189. Washington, D.C.: February 11, 2005. Highlights of a GAO Forum: The Federal Government's Role in Improving Financial Literacy. GAO-05-93SP. Washington, D.C.: November 15, 2004. Military Personnel: DOD Needs More Data Before It Can Determine if Costly Changes to the Reserve Retirement System Are Warranted. GAO-04-1005. Washington, D.C.: September 15, 2004. Military Pay: Army Reserve Soldiers Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-911. Washington, D.C.: August 20, 2004. Military Pay: Army Reserve Soldiers Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-990T. Washington, D.C.: July 20, 2004. Military Personnel: Survivor Benefits for Servicemembers and Federal, State, and City Government Employees. GAO-04-814. Washington, D.C.: July 15, 2004. Military Personnel: DOD Has Not Implemented the High Deployment Allowance That Could Compensate Servicemembers Deployed Frequently for Short Periods. GAO-04-805. Washington, D.C.: June 25, 2004. Military Personnel: Active Duty Compensation and Its Tax Treatment. GAO-04-721R. Washington, D.C.: May 7, 2004. Military Personnel: Observations Related to Reserve Compensation, Selective Reenlistment Bonuses, and Mail Delivery to Deployed Troops. GAO-04-582T. Washington, D.C.: March 24, 2004. Military Personnel: Bankruptcy Filings among Active Duty Service Members. GAO-04-465R. Washington, D.C.: February 27, 2004.
Military Pay: Army National Guard Personnel Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-413T. Washington, D.C.: January 28, 2004. Military Personnel: DOD Needs More Effective Controls to Better Assess the Progress of the Selective Reenlistment Bonus Program. GAO-04-86. Washington, D.C.: November 13, 2003. Military Pay: Army National Guard Personnel Mobilized to Active Duty Experienced Significant Pay Problems. GAO-04-89. Washington, D.C.: November 13, 2003. Military Personnel: DFAS Has Not Met All Information Technology Requirements for Its New Pay System. GAO-04-149R. Washington, D.C.: October 20, 2003. Military Personnel: DOD Needs More Data to Address Financial and Health Care Issues Affecting Reservists. GAO-03-1004. Washington, D.C.: September 10, 2003. Military Personnel: DOD Needs to Assess Certain Factors in Determining Whether Hazardous Duty Pay Is Warranted for Duty in the Polar Regions. GAO-03-554. Washington, D.C.: April 29, 2003. Military Personnel: Management and Oversight of Selective Reenlistment Bonus Program Needs Improvement. GAO-03-149. Washington, D.C.: November 25, 2002. Military Personnel: Active Duty Benefits Reflect Changing Demographics, but Opportunities Exist to Improve. GAO-02-935. Washington, D.C.: September 18, 2002.
Congress and the Department of Defense (DOD) are concerned about the financial conditions of servicemembers and their families, particularly in light of recent deployments to Iraq and Afghanistan. Serious financial issues can negatively affect unit readiness. According to DOD, servicemembers with severe financial problems risk losing security clearances, incurring administrative or criminal penalties, or, in some cases, being discharged. Despite increases in compensation and DOD programs on personal financial management (PFM), studies show that servicemembers, particularly junior enlisted personnel, continue to report financial difficulties. GAO assessed (1) the extent to which deployment affects the financial condition of active duty servicemembers and their families, (2) whether DOD has an oversight framework for evaluating military programs designed to assist deployed and non-deployed servicemembers in managing their finances, and (3) the extent to which junior enlisted servicemembers receive required PFM training. The financial conditions of deployed and non-deployed servicemembers and their families are similar, but deployed servicemembers and their families may face additional financial problems related to pay. In both a 2003 DOD-wide survey and non-generalizable focus groups that GAO conducted on 13 military installations in the United States and Germany, servicemembers who were deployed reported financial conditions similar to those of servicemembers who were not deployed. Some of GAO's focus group participants also noted that they--like Army Reservists in GAO's 2004 report, Military Pay: Army Reserve Soldiers Mobilized to Active Duty Experienced Significant Pay Problems--had not received their $250 family separation allowance each month during their deployment. Pay record data showed that almost 6,000 deployed servicemembers had received more than the prescribed $250 in January 2005, and 11 of them received a $3,000 catch-up, lump sum payment--the equivalent of 12 months of the allowance. This pay problem was due, in part, to service procedures being confusing and not always followed. Families who do not receive this allowance each month may experience financial strain caused by additional expenses such as extra childcare. DOD lacks an oversight framework--with results-oriented performance measures and reporting requirements--for evaluating the effectiveness of PFM programs across the services. DOD's 2002 human capital strategic plan stated that a standardized evaluation system for PFM programs is a desired goal; however, DOD does not currently have such a system. In 2003, GAO reported that DOD had included evaluative reporting measures in a draft of its PFM instruction to the services. However, the final PFM instruction issued by DOD in 2004 did not address outcome measures or contain a requirement that the services report program results to DOD because the services objected to these additional reporting requirements. Without a policy requiring evaluation and a reporting relationship between DOD and the services, DOD and Congress do not have the visibility or oversight needed to address issues related to the PFM programs. Some junior enlisted servicemembers are not receiving PFM training that is required in service regulations. While each of the services implements PFM training differently, all of the services have policies requiring that PFM training be provided to junior enlisted servicemembers. 
Moreover, the extent to which the PFM training is not received is unknown because most of the services do not track the completion of PFM training at the service level. Only the Army collected installation-level data and could provide a service-wide estimate of PFM training completed by junior enlisted servicemembers. Senior Army officers said PFM training had not been a priority given the need to prepare for current operations. Top-level DOD officials have repeatedly stated that financial issues directly affect servicemembers' mission readiness and should be addressed. Therefore, units whose servicemembers do not receive required PFM training risk jeopardizing their ability to meet mission requirements.
Mr. Chairman and Members of the Subcommittee: I am pleased to be here today to discuss the results of our review of the Credit Research Center (the Center) report on personal bankruptcy debtors’ ability to pay their debts and to share with you our observations on the February 1998 Ernst & Young report that also examines debtors’ ability to pay. Both reports represent a useful first step in addressing a major public policy issue—whether some proportion of those debtors who file for personal bankruptcy under chapter 7 of the bankruptcy code have sufficient income, after expenses, to pay a “substantial” portion of their outstanding debts. On February 9, 1998, we reported the results of our more extensive review of the Center report and selected data to the Chairman and Ranking Minority Member of the Subcommittee on Administrative Oversight and the Courts, Senate Committee on the Judiciary. Debtors who file for personal bankruptcy usually file under chapter 7 or chapter 13 of the bankruptcy code. Generally, debtors who file under chapter 7 of the bankruptcy code seek a discharge of all their eligible dischargeable debts. Debtors who file under chapter 13 submit a repayment plan, which must be confirmed by the bankruptcy court, for paying all or a portion of their debts over a 3-year period unless for cause the court approves a period not to exceed 5 years. One report concluded, however, that no one explanation is likely to capture the variety of reasons that families fail and file for bankruptcy. Nor is there agreement on (1) the number of debtors who seek relief through the bankruptcy process who have the ability to pay at least some of their debts and (2) the amount of debt such debtors could repay. One reason for the lack of agreement is that there is little reliable data on which to assess such important questions as the extent to which debtors have an ability to pay their eligible dischargeable debts; the amount and types of debts that debtors have voluntarily repaid under chapters 7 and 13; the characteristics of chapter 13 repayment plans that were and were not successfully completed; and the reasons for the variations among bankruptcy districts in such measures as the percentage of chapter 13 repayment plans that were successfully completed. Several bills have been introduced in Congress that would implement some form of “needs-based” bankruptcy. These include S.1301, H.R. 2500, and H.R. 3150. All of these bills include provisions for determining when a debtor could be required to file under chapter 13, rather than chapter 7. Currently, the debtor generally determines whether to file under chapter 7 or chapter 13. Each bill would generally establish a “needs-based” test, whose specific provisions vary among the bills, that would require a debtor to file under chapter 13 if the debtor’s net income after allowable expenses would be sufficient to pay about 20 percent of the debtor’s unsecured nonpriority debt over a 5-year period. If the debtor were determined to be unable to pay at least 20 percent of his or her unsecured nonpriority debt over 5 years, the debtor could file under chapter 7 and have his or her eligible debts discharged. Another bill, H.R. 3146, focuses largely on changes to the existing “substantial abuse” provisions under section 707(b) of the bankruptcy code as the means of identifying debtors who should be required to file under chapter 13 rather than chapter 7. 
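The needs-based tests in these bills reduce to a simple arithmetic comparison of projected net income against a fraction of unsecured nonpriority debt. The short Python sketch below illustrates that comparison only; the flat 60-month horizon, the assumption that income and expenses stay constant, and the dollar figures shown are simplifications for illustration, not provisions of any particular bill.

    def chapter_13_required(monthly_income, monthly_expenses,
                            unsecured_nonpriority_debt,
                            months=60, threshold=0.20):
        # Illustrative needs-based test: could net income over the repayment
        # period cover about 20 percent of unsecured nonpriority debt?
        # Assumes income and expenses stay constant for the full period.
        net_available = max(monthly_income - monthly_expenses, 0) * months
        return net_available >= threshold * unsecured_nonpriority_debt

    # Example: $2,400 income, $2,200 expenses, $30,000 unsecured nonpriority debt.
    # Net available over 60 months is $12,000; 20 percent of the debt is $6,000,
    # so this hypothetical debtor would be directed to chapter 13.
    print(chapter_13_required(2400, 2200, 30000))  # True

Under a test of this form, small changes in estimated monthly expenses can move a debtor across the threshold, which is one reason the accuracy of the underlying schedules matters so much in the discussion that follows.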
The Center and Ernst & Young reports attempted to estimate (1) how many debtors who filed for chapter 7 may have had sufficient income, after expenses, to repay “a substantial portion” of their debts and (2) what proportion of their debts could potentially be repaid. The Center report was based on data from 3,798 personal bankruptcy petitions filed principally in May and June 1996 in 13 of the more than 180 bankruptcy court locations. The petitions included 2,441 chapter 7 and 1,357 chapter 13 petitions. On the basis of the Center report’s assumptions and the formula used to determine income available for repayment of nonpriority, nonhousing debt, the report estimated that 5 percent of the chapter 7 debtors in the 13 locations combined could, after expenses, repay all of their nonpriority, nonhousing debt over 5 years; 10 percent could repay at least 78 percent; and 25 percent could repay at least 30 percent. The Center report also estimated that about 11 percent of chapter 13 debtors and about 56 percent of chapter 7 debtors were expected to have no income available to repay nonhousing debts. Ernst & Young’s report was based on a sample of 5,722 chapter 7 petitions in four cities—Los Angeles, Chicago, Boston, and Nashville—that were filed mainly in 1992 and 1993. Ernst & Young concluded that, under the needs-based provisions of H.R. 3150, from 8 to 14 percent (average 12 percent) of the chapter 7 filers in these four cities would have been required to file under chapter 13 rather than chapter 7, and could have repaid 63 to 85 percent (average 74 percent) of their unsecured nonpriority debts over a 5-year repayment period. The report concluded that its findings corroborated the Center report’s findings that “a sizeable minority of chapter 7 debtors could make a significant contribution toward repayment of their non-housing debt over a 5-year period.” We discussed our observations about the report with the Ernst & Young study author. It is important to note that the findings of both the Center report and Ernst & Young report rest on fundamental assumptions that have not been validated. Both studies share two fundamental assumptions: (1) that the information found on debtors’ initial schedules of estimated income, estimated expenses, and debts was accurate; and (2) that this information could be used to satisfactorily forecast debtors’ income and expenses for a 5-year period. These assumptions have been the subject of considerable debate, and the researchers did not test their validity. With regard to the first assumption, the accuracy of the data in bankruptcy petitioners’ initial schedules of estimated income, estimated expenses, and debts is unknown. Both reports assumed that the data in these schedules are accurate. However, both reports also stated that to the extent the data in the schedules were not accurate, the data would probably understate the income debtors have available for debt repayment. This reflected the researchers’ shared belief that debtors have an incentive in the bankruptcy process to understate income, overstate expenses, and thereby understate their net income available for debt repayment. However, there have been no studies to validate this belief. It is plausible that, to the extent there are errors in the schedules, debtors could report information that would have the effect of either overstating or understating their capacity to repay their debts, with a net unknown bias in the aggregate data reported by all debtors. 
One cause of such errors could be that the schedules are not easily interpreted by debtors who proceed without legal assistance. In Los Angeles, a location whose data contributed significantly to the findings of both reports, Center data showed that about one-third of debtors reported they had not used a lawyer. With regard to the second assumption, there is no empirical basis for assuming that debtors’ income and expenses, as stated in their initial schedules, would remain stable for the 5-year period following the filing of their bankruptcy petitions. Together, the assumptions that income and expenses remain stable and that all repayment plans would be successfully completed could result in a somewhat optimistic estimate of debt repayment. Neither report allowed for situations in which the debtor’s income decreases or expenses increase during the 5-year period. Past experience suggests that not all future chapter 13 debtors will successfully complete their repayment plans. To the extent this occurs, it would reduce the amount of debt that future debtors repay under required chapter 13 repayment plans. A 1994 report by the Administrative Office of the U.S. Courts found that only about 36 percent of the 953,180 chapter 13 cases terminated during a 10-year period ending September 30, 1993, had been successfully completed. The remaining 64 percent were either dismissed or converted to chapter 7 liquidation, in which all eligible debts were discharged. The reasons for this low completion rate are unknown, but the rate illustrates the potential discrepancy between the amount that debtors could repay under the data and assumptions used in the two reports and what has actually occurred over a 10-year period. Another assumption made in both reports is that 100 percent of debtors’ income available for debt repayment will be used to repay debt for a 5-year period. This assumption does not reflect actual bankruptcy practice. Chapter 13 repayment plans require greater administrative oversight than chapter 7 cases, including periodic review of the debtors’ progress in implementing the plan and review of debtors’ or creditors’ requests to alter the plan, and thus cost more. In fiscal year 1996, for example, creditors received about 86 percent of chapter 13 debtor payments. The remaining 14 percent of chapter 13 debtor payments were used to pay administrative costs, such as statutory trustee fees and debtor attorneys’ fees. Neither study addressed the additional costs for judges and administrative support requirements that would be borne by the government should more debtors file under chapter 13. In addition, neither report was based on a sample of petitions designed to be representative of the nation as a whole or of each location for the year from which the samples were drawn. Therefore, the data on which the reports were based may not reflect all bankruptcy filings nationally or in each of the 15 locations for the years from which the petitions were drawn. One difference between the two reports involves the calculation of debtor expenses. The Center’s estimates of debtor repayment capacity are based on the data reported in debtors’ initial schedules of estimated income, estimated expenses, and debts; the Center report calculated debtor expenses using the data reported on debtors’ estimated income and estimated expense schedules. The Ernst & Young report, whose purpose was to estimate the effect of implementing the provisions of H.R. 3150, adjusted debtors’ expenses using the provisions of that bill. Following these provisions, Ernst & Young used the expenses debtors reported on their schedules of estimated expenses for alimony payments, mortgage debt payments, charitable expenses, child care, and medical expenses. For all other expenses, including transportation and rent, Ernst & Young used Internal Revenue Service (IRS) standard expense allowances, based on both family size and geographic location. The impact of these adjustments on debtors’ reported expenses was not discussed in the report. 
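In computational terms, the Ernst & Young adjustment amounts to keeping a handful of expense categories as the debtor reported them and substituting standard allowances for the rest. The following Python sketch illustrates that substitution under stated assumptions; the category names and the allowance table are hypothetical placeholders, not the actual H.R. 3150 categories or IRS tables, which are more detailed.

    # Categories H.R. 3150 would take from the debtor's own schedules (illustrative names).
    REPORTED_AS_IS = {"alimony", "mortgage", "charitable", "child_care", "medical"}

    def adjusted_monthly_expenses(reported, irs_standard):
        # Keep scheduled amounts for the protected categories; substitute the
        # standard allowance (keyed by category in this sketch) for everything else.
        total = 0.0
        for category, amount in reported.items():
            if category in REPORTED_AS_IS:
                total += amount
            else:
                total += irs_standard.get(category, amount)
        return total

    reported = {"mortgage": 900, "child_care": 300, "transportation": 650, "rent": 0}
    irs_standard = {"transportation": 400, "rent": 0}   # hypothetical allowances
    print(adjusted_monthly_expenses(reported, irs_standard))  # 1600.0

Because a substituted allowance can be lower or higher than the amount a debtor actually reported, the direction of the adjustment, and therefore its effect on estimated repayment capacity, is not obvious in advance.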
However, to the extent these adjustments lowered debtors’ expenses, they would have increased the report’s estimates of debtors’ repayment capacity when compared to the methodology used in the Center report. To the extent the adjustments increased debtors’ reported expenses, they would have decreased the report’s estimates of debtor repayment capacity. Also, to the extent that these adjustments reduced debtors’ reported expenses, the adjustments would have corrected, at least in part, for what the report assumed was debtors’ probable overstatement of expenses on their schedules of estimated expenses. A second difference between the reports involves the calculation of mortgage debt and family size. To the extent that actual family size was larger than the averages used in applying the IRS expense allowances, the report understated allowable expenses, and thus overstated debtors’ ability to pay. Conversely, to the extent that actual family size was smaller than these averages, the report overstated allowable expenses, and thus understated the debtors’ ability to pay. A third difference between the reports involves assumptions about repayment of secured, nonhousing debt. The Center report assumed that debtors would continue payments on their mortgage debt and pay their unsecured priority debt. Unlike the Center report, the Ernst & Young report appears to have assumed that debtors will repay, over a 5-year period, all of their secured nonhousing debt and all of their unsecured priority debt. The purpose of this assumption was to estimate the amount of unsecured nonpriority debt that debtors could potentially repay after paying their secured nonhousing debt and unsecured priority debt. On March 10, 1998, we received an Ernst & Young report that used a national sample of chapter 7 petitions from calendar year 1997 to estimate debtors’ ability to pay. Although we have not had an opportunity to examine this report in detail, the report appears to have addressed many of the sampling issues we raised regarding the Center report and the February 1998 Ernst & Young report. However, the March 1998 Ernst & Young report shares the fundamental unvalidated assumptions of the Center report and the February 1998 Ernst & Young report. These assumptions include (1) the data reported on debtors’ schedules of estimated income, estimated expenses, and debts are accurate; (2) the data in these schedules can be used to satisfactorily forecast debtors’ income and expenses for a 5-year period; (3) 100 percent of debtors’ net income after expenses, as determined in the report, will be used for debt repayment over a 5-year repayment period; and (4) all debtors will satisfactorily complete their 5-year repayment plans. To the extent these assumptions do not hold, the number of chapter 7 debtors with the ability to repay a substantial portion of their debts could be more or less than the estimates in these two studies. Similarly, the amount of debt these debtors could potentially repay could also be more or less than the reports estimated. Finally, although the March 1998 Ernst & Young report is based on what is apparently a nationally representative sample of chapter 7 petitions, to the extent that the report is based on the same basic data (petitioners’ financial schedules) and assumptions as the Center report and the February 1998 Ernst & Young report, it shares the same limitations as these two earlier reports. This concludes my prepared statement, Mr. Chairman. I would be pleased to answer any questions you or other members of the Subcommittee may have. 
GAO discussed the results of its review of the Credit Research Center report on personal bankruptcy debtors' ability to pay their debts and observations on the February 1998 Ernst & Young report that also examines debtors' ability to pay. GAO noted that: (1) both studies share two fundamental assumptions that: (a) the information found in debtors' initial schedules of estimated income, estimated expenses, and debts is accurate; and (b) this information could be used to satisfactorily forecast debtors' income and expenses for a 5-year period; (2) these assumptions have been the subject of considerable debate, and the researchers did not test their validity; (3) with regard to the first assumption, the accuracy of the data in bankruptcy petitioners' initial schedules of estimated income, estimated expenses, and debt is unknown; (4) however, both reports also stated that to the extent the data in the schedules were not accurate, the data would probably understate the income debtors have available for debt repayment; (5) with regard to the second assumption, there is also no empirical basis for assuming that debtors' income and expenses, as stated in their initial schedules, would remain stable for a 5-year period following the filing of their bankruptcy petitions; (6) these two assumptions--that debtors' income and expenses remain stable and that all repayment plans would be successfully completed--could result in a somewhat optimistic estimate of debt repayment; (7) neither report allowed for situations in which the debtor's income decreases or expenses increase during the 5-year period; (8) one difference between the two reports involves the calculation of debtor expenses; (9) a second difference between the two reports involves the calculation of mortgage debt and family size; (10) a third difference between the reports involves assumptions about repayment of secured, nonhousing debt; (11) on March 10, 1998, GAO received an Ernst & Young report that used a national sample of chapter 7 petitions from calendar year 1997 to estimate debtors' ability to pay; (12) the report appears to have addressed many of the sampling issues GAO raised regarding the Center report and February 1998 Ernst & Young report; and (13) however, the March 1998 Ernst & Young report shares the fundamental unvalidated assumptions of the Center report and the February 1998 Ernst & Young report.
Commercial airlines normally carry commercial insurance to cover losses caused by such things as mechanical failure, weather, and pilot error. In addition, they carry war-risk insurance to cover losses resulting from war, terrorism, or other hostile acts. Commercial war-risk insurance, however, can be canceled or restricted in the event of a major war, its geographical coverage can be restricted, and its rates can be raised without limit. Therefore, to provide the insurance necessary to enable air commerce to continue in the event of war, the Aviation Insurance Program was established in 1951. The program authorized FAA to provide war-risk insurance for those commercial aircraft operations deemed essential to the foreign policy of the United States when such insurance is not available commercially or is available only on unreasonable terms. In 1977, the Congress authorized the program to provide aviation insurance against any risk, not just war risk, under the above conditions. To date FAA has issued only war-risk insurance. The fundamental premise underlying the program, according to FAA, is that the government should not provide insurance on a regular or routine basis; rather, the government should be the insurer of last resort. Consequently, FAA is not statutorily required to issue insurance to air carriers. Rather, FAA may issue aviation insurance only when certain conditions are met: (1) The President must determine that the continuation of specified air services, whether American or foreign flag, is necessary to carry out the foreign policy of the United States and (2) the Administrator of the FAA must find that insurance for the particular operation cannot be obtained on reasonable terms from the commercial insurance market. FAA issues two types of aviation insurance: nonpremium and premium. FAA issues nonpremium insurance for airlines performing contract services for federal agencies that have indemnification agreements with the Department of Transportation (DOT). Under the indemnification agreements, the federal agencies that contract for aircraft reimburse FAA for the insurance claims it pays to the airlines. This insurance is provided at no cost to the airlines, except for a one-time registration fee of $200 per aircraft. At present, only DOD and the State Department have such indemnification agreements with DOT. Nonpremium insurance accounts for about 99 percent of the aviation insurance issued by FAA. Since 1975, about 5,400 flights have been covered. For example, in 1990 and 1991, during Operation Desert Storm/Shield, FAA issued nonpremium insurance for over 5,000 flights of commercial airlines that provided airlift services as part of the Civil Reserve Air Fleet (CRAF). The commercial insurers had canceled war-risk coverage for those airlines that had clauses in their policies excluding CRAF activities. In addition to the CRAF program, the commercial air carriers insured under the program have flown many other important airlift missions for the United States, such as 111 flights to Tuzla, Bosnia, in 1996. For other regularly scheduled commercial or charter service, FAA issues premium insurance. With premium insurance, airlines pay premiums commensurate with the risks involved, and FAA assumes the financial liability for claims. As a condition for obtaining premium insurance, the aircraft must be operating in foreign air commerce, or between two or more points both of which are outside of the United States. In total, FAA has provided this insurance for 67 flights since 1975. 
For example, FAA provided premium insurance in 1991 for flights operated by Tower Air to evacuate U.S. citizens from Tel Aviv. Both forms of FAA’s insurance cover loss of or damage to the aircraft (hull insurance), along with coverage for bodily injury or death, property damage, and baggage and personal effects (liability coverage). The maximum amount of hull and liability coverage that FAA provides under its policies is limited to the amounts insured by an airline’s commercial policy. The program is self-financed through the Aviation Insurance Revolving Fund (the Fund). Moneys deposited into the Fund to pay claims are generated from insurance premiums, the one-time registration fee charged for nonpremium insurance, and interest on investments in U.S. Treasury securities. From fiscal year 1959 through March 1997, the Fund accumulated approximately $65 million in revenues and paid out net claims totaling only about $151,000. Appendix I summarizes the major attributes of the program. In 1994, we reported that the Fund’s balance was insufficient to pay many potential claims and that delays in the payment of claims could cause a financial hardship for affected airlines. Since then, however, the National Defense Authorization Act for Fiscal Year 1997 (P.L. 104-201) has addressed these problems for DOD-sponsored flights. When we reported on this issue in 1994, about 20 percent of the aircraft registered for nonpremium insurance had hull values—the value of the aircraft itself—that exceeded the Fund’s balance of $56 million. According to FAA’s most currently available information, about 15 percent of the aircraft registered for nonpremium insurance have hull values that exceed the Fund’s March 31, 1997, balance of about $65.2 million. In other words, the loss of any one of those aircraft would liquidate the entire balance and leave the liability portion of any claim unpaid. FAA estimates that the average contingent liability per incident for each registered aircraft is about $350 million. Clearly, the Fund’s balance is inadequate to settle claims of this magnitude. We also reported in 1994 on a related problem with the timeliness with which the government could reimburse an airline for a major loss. Because the FAA would have needed to seek supplemental funding to pay any claims that exceeded the Fund’s balance, airline officials had expressed concern that untimely reimbursements could cause severe financial hardships and possible bankruptcy. The National Defense Authorization Act addressed these concerns in two ways. First, the act directed that the Secretary of Defense promptly indemnify the Secretary of Transportation, within 30 days, for any loss covered by defense-related aviation insurance. Second, the act authorized the Secretary of Defense to use any available operations and maintenance funds for that indemnification. The appropriations made to the Defense Department’s operations and maintenance accounts for fiscal year 1997 totaled approximately $91 billion. The unobligated balance remaining at the end of fiscal year 1997 is estimated to be $0.9 billion. Thus, sufficient funds appear to be available to reimburse the airlines for defense-related aviation hull losses, and there is a legislative requirement to do so in a timely manner. According to the FAA, industry, and airline officials with whom we spoke, these provisions generally resolve much of the uncertainty that they had earlier expressed about the Fund’s insufficient balance. We have two remaining concerns about the program. 
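The undercapitalization point above is, at bottom, a comparison of each registered aircraft's potential claim against the Fund's balance. The short Python sketch below makes that comparison explicit; the hull values shown are hypothetical placeholders, and the balance and average-liability figures are simply the approximate amounts cited above.

    FUND_BALANCE = 65_200_000                 # approximate Fund balance, March 31, 1997
    AVG_LIABILITY_PER_INCIDENT = 350_000_000  # FAA's estimated average contingent liability

    def share_exceeding_fund(hull_values, balance=FUND_BALANCE):
        # Fraction of registered aircraft whose hull value alone would exhaust
        # the Fund, leaving any liability portion of a claim unpaid.
        if not hull_values:
            return 0.0
        return sum(1 for value in hull_values if value > balance) / len(hull_values)

    # Hypothetical hull values for four registered aircraft, for illustration only.
    print(share_exceeding_fund([30e6, 45e6, 70e6, 100e6]))  # 0.5

Because even a single hull claim near the top of this range, let alone the associated liability exposure, would exceed the balance, the conclusion does not change materially with the exact mix of registered aircraft.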
The first of these concerns is making sure that the program has sufficient funds available to pay potential insurance claims for non-Defense-related flights in a timely manner. The second involves clarifying whether an explicit presidential determination of the foreign policy interests of the United States is needed before FAA can issue insurance. For the relatively rare flights for which FAA may extend nonpremium insurance at the request of the State Department (one flight since the program’s inception) and for the flights for which FAA provides premium insurance (67 flights since 1975), the Fund may still be undercapitalized in the event of a catastrophic loss. The insured State Department flight occurred in January 1991, when U.S. personnel were flown from Oman to Frankfurt because of the increasing unrest in Somalia. FAA also has extended premium insurance relatively infrequently. Most recently, premium insurance was issued for 37 flights to or from the Middle East between August 1990 and March 1991, which included evacuating U.S. citizens from Tel Aviv and ferrying cargo to Dhahran. While FAA has paid no claims for premium insurance flights in the history of the program, if there should be a catastrophe, the Fund may not have sufficient money to pay the claim in a timely manner. Not counting the liability associated with the loss of a flight, a claim for the loss of a single aircraft—which can cost $100 million—could liquidate the Fund’s entire balance and still leave a substantial portion of the claim unpaid for an indeterminate period of time. In 1994, FAA proposed alternative financing sources to make additional funds available for the reimbursement of major claims. Those alternatives included obtaining a permanent indefinite appropriation from the Congress and the authority to borrow funds from the U.S. Treasury to pay claims that exceed the Fund’s balance. FAA proposed using the permanent appropriation to pay claims under premium insurance, and the borrowing authority to pay claims under nonpremium insurance while awaiting a supplemental appropriation from the Congress or reimbursement from the indemnifying agency. However, the Office of Management and Budget did not approve the proposal, and the administration therefore did not forward the proposal to the Congress. Thus, the Fund remains potentially undercapitalized. FAA is proposing to raise the one-time fee that the airlines pay to register each aircraft for nonpremium insurance. FAA published a notice of proposed rulemaking in the Federal Register on April 17, 1997, that would raise the registration fee from $200 to $550; the increase is based on the changes in the consumer price index since the fee was set in 1975. However, such an increase would have a limited impact on the Fund’s balance in comparison with the potential costs resulting from a major loss of a non-DOD flight. In our 1994 report, we recommended that the program’s authorizing legislation be clarified because there were ambiguities in the legislation and in FAA’s implementing regulations about the need for FAA to obtain a presidential determination that a flight is in the foreign policy interests of the United States before issuing nonpremium insurance. No clarification has been made in the legislation or in the current FAA regulations, and we believe that ambiguities still exist. FAA does not see this situation as a problem. 
FAA considers presidential approval of the indemnity agreement between DOT and other government agencies to constitute the President’s having determined that the flights covered by these agreements are in the foreign policy interests of the United States. This position is based on FAA’s Acting Chief Counsel’s 1984 review of the legislation and its accompanying legislative history. He concluded that the requirement for a presidential determination applied only to premium insurance and that the President’s signature on an interagency indemnification agreement was all that was required to issue nonpremium insurance. FAA published a proposed rulemaking in the Federal Register on April 17, 1997, that would revise its regulations to point out specifically that the presidential approval required for the issuance of nonpremium insurance is demonstrated by the standing presidential approval of the indemnification agreements with other government agencies. We disagree with FAA’s position. We believe that while FAA’s current practice has the advantage of being easier to administer, it lacks sufficient foundation in the authorizing legislation and current implementing regulations. We believe that the act, as currently written, requires that a presidential determination be made as a condition for issuing both nonpremium and premium insurance. In our 1994 report, we recommended that the Congress consider legislative changes that would address the Fund’s capitalization and the ambiguities about presidential determination. During this reauthorization process, we continue to believe that the Congress should consider providing a mechanism by which DOT can obtain access to financial resources so that it can pay claims that exceed the Fund’s balance within the normal time frames for commercial insurance for those few flights not sponsored by DOD. The source of funds could include (1) a permanent indefinite appropriation to cover the potential losses incurred during premium-insured flights and (2) the authority to borrow sufficient funds from the U.S. Treasury to pay the losses incurred during nonpremium flights made for qualifying government agencies other than DOD. DOT would repay the Treasury after it was reimbursed by the indemnifying agency. According to an analyst in the Congressional Budget Office, such changes would have no perceptible effect on the federal budget. We also continue to believe that the Congress should clarify the issue of whether or not a presidential determination is required before FAA can issue nonpremium insurance. This concludes our prepared statement. I would be happy to respond to any questions that you or members of the Subcommittee might have. Military Airlift: Observations on the Civil Reserve Air Fleet Program (GAO/NSIAD-96-125), March 29, 1996. Aviation Insurance: Federal Insurance Program Needs Improvements to Ensure Success (GAO/RCED-94-151), July 15, 1994. Military Airlift: Changes Underway to Ensure Continued Success of Civil Reserve Air Fleet (GAO/NSIAD-93-12), December 31, 1992. 
Pursuant to a congressional request, GAO discussed the reauthorization of the Federal Aviation Administration's (FAA) Aviation Insurance Program, focusing on changes made to the program since GAO last reported on it in 1994. GAO noted that: (1) in its 1994 report, GAO found that the program did not have sufficient funds available to pay potential insurance claims in the unlikely event of a catastrophic loss; (2) progress has been made in addressing this matter; specifically, the National Defense Authorization Act for Fiscal Year 1997 made funds available to indemnify the program for losses incurred under Department of Defense (DOD)-sponsored flights, which account for the majority of flights insured; (3) while GAO's major concern has been addressed, two other concerns that it raised in the 1994 report remain unresolved; (4) gaps remain in the program's ability to pay claims for non-DOD flights; (5) although these flights account for a relatively small percentage of the flights that have been insured by the program, a single major loss could liquidate the program's available funds and leave a substantial portion of the claim unpaid; (6) FAA would need to seek supplemental funding to pay the claim, but the delay could cause financial hardship for the affected airline; and (7) GAO believes that some uncertainty about the program continues to be caused by ambiguity in the statutory language and FAA's current implementing regulations about whether the President must make a determination that a flight is in the foreign policy interests of the United States before FAA can issue insurance.
The goal of the Army’s MCS program is to develop and field a computer system that provides automated critical battlefield assistance to maneuver commanders and their battle staff at the corps-to-battalion level. MCS is intended to enable the command staff to collect, store, process, display, and disseminate critical data to produce and communicate battle plans, orders, and enemy and friendly situational reports. It is a key component of the Army Tactical Command and Control System, which is also intended to enhance the coordination and control of combat forces through automated management of five key battlefield areas, including maneuver control. Given its role to communicate battle plans, orders, and enemy and friendly situation reports, MCS is also a key component of the Army’s ongoing efforts to digitize (automate) its battlefield operations. In 1980, the Army fielded the first MCS system—with limited command, control, and communications capabilities—to VII Corps in Europe. In 1982, the Army awarded a 5-year contract to continue MCS development, and by 1986 MCS software had evolved to version 9, also fielded in Europe. In 1987, the Army performed post-deployment tests on version 9 in Germany. The results of those tests led the Army Materiel Systems Analysis Activity to conclude that MCS did not exhibit adequate readiness for field use and recommend that further fielding not occur until the system’s problems were resolved. However, the Army awarded a second 5-year contract that resulted in version 10, which was fielded by April 1989 and remains in the field today. In November 1989, the Army Materiel Systems Analysis Activity reported that MCS had met only 30 percent of its required operational capabilities and again recommended that the system not be released for field use. In May 1990, operational testers again questioned the system’s functional ability and effectiveness because it could not produce timely, accurate, and useful information in a battle environment. While earlier versions of MCS were being fielded and withdrawn, the development of software continued. In 1988, the Army awarded a contract for the development of version 11. By February 1993, the Army stopped development of version 11 software due to multiple program slips, serious design flaws, and cost growth concerns. The program was then reorganized with a plan approved by the Office of the Secretary of Defense in April 1993. Under the reorganized program, a group of contractors and government software experts have been working to develop the next version of MCS software—version 12.01—utilizing software segments that could be salvaged from the failed version 11 effort. In addition to software, the MCS system consists of computers procured under the Army’s Common Hardware and Software (CHS) effort, which was undertaken to reverse the proliferation of program-unique computers and software. The Army planned to acquire 288 of the CHS computers in fiscal years 1997 and 1998 to support the MCS training base, and has already acquired 81. Those computers were used in a training base assessment to support a decision to acquire the remaining 207 computers. Since its reorganization in 1993, MCS program experience indicates continuing problems in the system’s development. 
Specifically, (1) the MCS initial operational test and evaluation of version 12.01 has slipped twice, (2) interim developmental level tests and a customer test done to support a decision to award a contract to develop follow-on software show that significant problems continue, and (3) development of follow-on version 12.1 was begun despite the results of the customer test and prior program history. After the 1993 program reorganization, version 12.01 was scheduled to undergo initial operational testing and evaluation in November 1995. The test slipped to November 1996 and is now scheduled for March 1998. Program officials stated that the test date slipped initially because the CHS computers to be used were not yet available. During August and September 1996, version 12.01 underwent a system confidence demonstration to determine whether it was ready for the November 1996 initial operational test and evaluation. Because the software was not ready, further work and two additional system confidence demonstrations followed. Both demonstrations indicated that the system was not ready for operational testing. Additionally, the software still had an open priority one software deficiency and priority three and four deficiencies that would have negatively affected the conduct of the operational test. Both the Army’s Operational Test and Evaluation Command and the Department of Defense’s (DOD) Director of Operational Test and Evaluation (DOT&E) had stated that there could be no open priority one or two software deficiencies before the operational test. They had also stated that there could not be any open priority three and four deficiencies that, in combination, were likely to have a detrimental effect on the system’s performance. DOT&E staff told us that there were a number of open priority three and four software deficiencies that they believed would have had a detrimental effect. When MCS program officials realized that these deficiencies would not be resolved in time for the initial operational test, they downgraded the test to a limited user test 3 weeks before it was to occur, using $8.5 million appropriated for the MCS operational test in fiscal years 1996 and 1997. That test was conducted in November 1996. While the test report has not been finalized, a draft version states that MCS—in the tested configuration—is not operationally effective or suitable. Throughout the development of version 12.01, interim software builds have undergone numerous performance tests to determine the current state of software development, and build 4 was subjected to a customer test. The results of those tests identified continuing problems as successive builds proceeded. For example, a December 1995 performance test report on build 3.0 stated that, if the problems found during the test were not quickly corrected in build 3.1, then the risk to the program might be unmanageable. The follow-on April 1996 performance test report of build 3.1 stated that significant problems in system stability prevented proper testing of several requirements. The report further stated that messaging between battlefield functional areas was extremely difficult and problematic and that the system had other stability problems. A September 1996 performance test report stated that of 568 previously open deficiency reports from builds 5.1 through 5.2c, 165, almost 29 percent, still remained open. 
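The entrance criteria described above lend themselves to a simple check against the open deficiency reports. The Python sketch below is illustrative only; the record format is hypothetical, and the judgment about whether open priority three and four deficiencies are collectively detrimental is shown here as a simple count threshold, whereas in practice it was a qualitative call by the test community.

    def ready_for_operational_test(open_deficiencies, p3_p4_limit=20):
        # open_deficiencies: list of (report_id, priority) tuples, priority 1-4.
        # Criterion 1: no open priority one or two deficiencies.
        if any(priority <= 2 for _, priority in open_deficiencies):
            return False
        # Criterion 2 (simplified): the combined priority three and four
        # deficiencies must not be likely to degrade system performance.
        p3_p4 = sum(1 for _, priority in open_deficiencies if priority in (3, 4))
        return p3_p4 <= p3_p4_limit

    # Illustrative snapshot: one priority one report is open, so the entrance
    # criteria are not met regardless of the lower-priority counts.
    print(ready_for_operational_test([("DR-101", 1), ("DR-205", 3), ("DR-206", 4)]))  # False

A tally of this kind also makes the build-to-build trend visible, for example the 165 of 568 deficiency reports, about 29 percent, still open at the September 1996 performance test.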
That September 1996 report, the last published on an MCS performance test, reflected the state of the MCS software shortly before the downgraded limited user test, in which MCS failed to demonstrate either operational effectiveness or suitability. More recent performance tests of later builds have been done; however, separate reports on those test events have not been issued. Rather, the program office plans to prepare an integrated test report in October or November 1997. DOT&E had observed that, because MCS version 12.01 was “developed by a confederation of contractors who have built this current version of MCS on the salvaged ’good’ portions of the abruptly terminated development of MCS Version 11, it needs to stand the rigor of an Independent Operational Test and Evaluation . . . before a MCS Block IV contract is awarded.” To help determine the level of risk in proceeding under the Army’s development strategy, DOT&E stated in a June 1995 memorandum that an operational test of version 12.01 should be conducted to measure the software’s maturity before the award of a contract for the development of follow-on versions. As a result, an operational assessment—called the MCS customer test—was conducted on version 12.01 in April 1996 to support the award of a $63.1 million contract for the development of MCS Block IV software—MCS versions 12.1, 12.2, and 12.3. No pass/fail criteria were set for the customer test. However, DOT&E directed that four operational issues be tested. Those issues related to (1) the capacity of the system to store and process required types and amounts of data, including the ability of the staff users to frequently update the information database; (2) the capabilities of the MCS network to process and distribute current and accurate data using the existing communications systems; (3) the impact of computer server outages on continuity of operations; and (4) the system administration and control capabilities to initialize the system, become fully operational, and sustain operations. In its report on the customer test, the Army’s Test and Experimentation Command stated that, at the time of the test, MCS was evolving from a prototype system to one ready for initial operational test and evaluation and, as such, possessed known limitations that were described to the system users during training. The Command reported that the test’s major limitations included (1) software that did not contain the full functional capability planned for the initial operational test and evaluation; (2) a need to reboot the system after crashes caused by the use of the computer’s alternate function key; (3) two changes in software versions during training; and (4) the fact that 65 percent of the system manager functions had not been implemented or trained. Table 1 provides more detail on the customer test results. In addition to these findings, the MCS test officer stated the following: “System performance degraded over time causing message backlogs, loss of data, and numerous system reboots. Over a period of 12 operational hours the [system] slowed down and created message backlogs of up to 4 hours. To remain functional, the entire network of [MCS] systems must be shut down and reinitialized in proper sequence.” “The staff users had great difficulty using ... applications.” “The software pertaining to system management functions was immature, incomplete and lacked documentation. 
This capability is critical to the effective use and operation of the [MCS] system.” Even though the customer test did not involve pass/fail criteria, based on our review of the test report and the test officer’s comments, we believe that only the third operational issue—impact of computer server outages on continuity of operations—was met. Despite the results of the customer test and the program’s prior history, the Under Secretary of Defense for Acquisition and Technology approved the Army’s plan to award a concurrent contract for MCS Block IV software development—MCS versions 12.1, 12.2 and 12.3. In September 1996, the Army awarded a contract for the development of MCS software versions 12.1, 12.2, and 12.3 to a different contractor than the developers of MCS version 12.01. At that time, version 12.01 was still scheduled to undergo its initial operational testing in November 1996. The start of the follow-on development could have been timed to occur after version 12.01 had completed that operational testing. At most, this action would have delayed the contract award 2 months, assuming that the initial operational test had occurred in November 1996 as scheduled. However, the contract was awarded before the initial operational test, and the planned 5 month concurrency in the development of versions 12.01 and 12.1 became 18 months when the operational test slipped to March 1998. The current program schedule indicates that (1) version 12.1 is expected to undergo its operational assessment/test about 1 year after the fielding of version 12.01 is started and (2) version 12.1 fielding is to be done 5 months after initial operational capability of version 12.01 is achieved. If the scheduled version 12.01 operational test and evaluation slips again and the version 12.1 contractor is able to maintain its development schedule, version 12.1 could become available before version 12.01. By May 1997, the Army requested DOD approval of a revised acquisition program baseline that changes the planned follow-on operational test and evaluation of versions 12.1, 12.2, and 12.3 to operational assessments/operational tests. Program officials said that, although the name of the tests had changed, the planned scope of the tests had not. However, the officials said that the name change complies with guidance from DOT&E, which lists multiple levels of operational test and evaluation (from an abbreviated assessment to full operational test) and outlines a risk assessment methodology to be used to determine the level of testing to be performed. The officials further stated that the use of the generic term operational test/operational assessment permits possible changes to the level of testing for version 12.1 and follow-on software increments based on the risk assessment process. The contractors competing for the MCS Block IV (MCS versions 12.1, 12.2, and 12.3) development were given access to the government’s 12.01 code and allowed to reuse as much of it as they chose. The Block IV developer is not required to reuse any of version 12.01. Rather, the Block IV contract requires the development of software to provide specific functions. 
Given that (1) version 12.01 software has not passed or even undergone an initial operational test and evaluation and (2) the MCS Block IV contractor building version 12.1 is not the contractor that is building version 12.01 and is only required to develop version 12.1 to provide specified functions, we believe that the version 12.1 development effort should not be viewed as building upon a proven baseline. Instead, it should be viewed as a new effort. The Army’s current development plan for version 12.1 and beyond, as shown in figure 1, continues an approach of building a follow-on version of software on an incomplete and unstable baseline—the uncompleted preceding version of software. Additionally, according to an official in DOD’s Office of the Director of Test, Systems Engineering, and Evaluation, the Army’s development process allows requirements that are planned for one software version, which cannot be accomplished in that version’s development as planned, to be deferred to a later version’s development. As a result, this process makes judging program risk and total cost very difficult. The MCS program has previously demonstrated the problem of deferring requirements. For example, during MCS version 11 development, we reported that the Army had deferred seven MCS functions that were to have been developed by June 1992 and included in the software version to undergo operational testing. Even though the version 11 operational test had slipped twice, from May 1992 to September 1992 and then to May 1993, the Army continued to defer those functions, and the operational test was planned for less than the complete software package originally scheduled to be tested. In commenting on a draft of this report, DOD said that it had made progress not reflected in that draft. Specifically, DOD noted that there were no priority one or two, and only 22 priority three, software deficiencies open as of September 11, 1997, as compared with 10 priority one, 47 priority two, and 67 priority three deficiencies open on August 16, 1996. While we agree that these results indicate that some known problems have been fixed, they provide no indication of the number or severity of still unknown problems. For example, MCS version 12.01 development showed enough progress before the scheduled November 1996 initial operational test and evaluation to prompt a commitment of resources and personnel. However, that test was later downgraded to a limited user test because of software immaturity. Successful completion of an initial operational test and evaluation should provide a more definitive indication of the MCS program’s progress. Before the slip of the MCS initial operational test and evaluation from November 1996 to March 1998, the Army planned to acquire 288 computers—150 in fiscal year 1997 and 138 in fiscal year 1998—for the MCS training base. These computers were to be acquired after a full-rate production decision at a total cost of about $34.8 million—$19.1 million in fiscal year 1997 and $15.7 million in fiscal year 1998. After the initial operational test and evaluation slipped, DOD approved the Army’s acquisition of a low-rate initial production of 81 computers in fiscal year 1997 for a training base operational assessment. The purpose of the assessment, which was performed from February to May 1997, was to judge the merits of allowing the Army to procure the remaining computers prior to successful completion of the slipped operational test. 
On the basis of the results of that assessment, the Acting Under Secretary of Defense for Acquisition and Technology authorized the Army in July 1997 to proceed with its acquisition plans. The Acting Under Secretary noted that the DOT&E had reviewed the assessment and agreed that version 12.01 was adequate for use in the training base. The Acting Under Secretary also authorized the Army to move the training base computer funds from the MCS budget to the Army’s automated data processing equipment program budget line. This action was necessary because, according to both Army and DOD officials, it was determined that the computers to be acquired do not meet the legislated reasons in 10 U.S.C. 2400 for low-rate initial production. That legislation allows the early acquisition of systems to (1) establish an initial production base, (2) permit an orderly increase in the production rate for the system that is sufficient to lead to full-rate production upon successful completion of operational test and evaluation, and (3) provide production-representative items for operational test and evaluation. Even though the Army now plans to acquire the computers under a different budget line, the intended use of the computers remains unchanged. MCS program officials said that the computers are needed in the MCS training base before operational testing to adequately support future fielding of MCS and the larger Army Battle Command System, of which the Army Tactical Command and Control System and MCS are key components. This rationale is the same one the Acting Under Secretary cited in his July 1997 memorandum. In that memorandum, he stated that the “requirement to train Army-wide on commercial equipment is a recognized requirement not only for MCS but for a host of other digital . . . systems.” The Acting Under Secretary further noted that the funds to be moved were for equipment needed to support integrated training of multiple systems throughout the Army and concluded that “training on a digital system, even if it is not the system that is ultimately fielded, is important to the Army in order to assist in making the cultural change from current maneuver control practice to a digitized approach.” MCS program officials stated that the MCS course curriculum needs to be developed and that equipping the training base before the completion of operational testing avoids a 2-year lag between the completion of operational testing and the graduation of trained students. The officials also commented that the computers could be used elsewhere, since they would be compatible with other Army programs. The legislated requirement that major systems, such as MCS, undergo initial operational test and evaluation before full-rate production serves to limit or avoid premature acquisitions. The Army has had previous experience acquiring ineffective MCS equipment, which is indicative of the need for adequate testing before systems are fielded. In July 1990, the Army began withdrawing over $100 million of militarized MCS hardware from the field due to both hardware and software deficiencies. Additionally, the Army subsequently decided not to deploy other MCS equipment it had procured for light divisions at a cost of about $29 million because the equipment was too bulky and heavy. The MCS program’s troubled development and acquisition history has continued since the program’s 1993 reorganization. 
However, the Army awarded a new contract to develop future software versions and plans to procure computers without fully resolving the problems of earlier versions. This strategy neither minimizes the possibility of future development problems nor ensures that the Army will ultimately field a capable system. Also, since MCS software version 12.1 is being developed concurrently by a different contractor to functional specifications, it would be prudent to subject the version 12.1 software to the level of operational testing required to support a full-rate production decision, as planned for version 12.01. Accordingly, we believe a more appropriate strategy would require that future software versions be developed using only fully tested baselines and that each version be judged against specific pre-established criteria. We recommend that you direct the Secretary of the Army to set specific required capabilities for each software version beyond version 12.01, test those versions against specific pass/fail criteria for those capabilities, and award further development contracts only once problems highlighted in that testing are resolved; perform a full operational test and evaluation of MCS software version 12.1 to ensure that it provides the full capabilities of version 12.01; and procure additional MCS computers only after an initial operational test and evaluation and a full-rate production decision have been completed. In commenting on a draft of this report, DOD agreed with our recommendation that specific required capabilities for each MCS software version beyond version 12.01 are needed, that those versions should be tested against specific pass/fail criteria for those capabilities, and that the Army should not award further development contracts until problems highlighted in prior tests are resolved. DOD noted that the Army has already set specific required capabilities for those software versions and will test those versions against specific pass/fail criteria to ensure system maturity and determine that the system remains operationally effective and suitable. DOD further stated that it will not support the award of further development contracts until the Army has successfully resolved any problems identified during the testing of related, preceding versions. DOD partially agreed with our recommendation that the Army be directed to perform a full operational test and evaluation of MCS software version 12.1 to ensure that it provides the full capabilities of version 12.01. DOD stated that the Army will comply with DOD regulation 5000.2R and will follow guidance from the Director of Operational Test and Evaluation that lists multiple levels of operational test and evaluation (from an abbreviated assessment to a full operational test) and outlines a risk assessment methodology to be used to determine the level of testing to be performed. DOD did not, however, indicate whether it would require the Army to conduct a full operational test. We continue to believe that the version 12.1 development effort should not be viewed as building upon a proven baseline. Instead, version 12.1 development should be viewed as a new effort. As a result, we still believe that the prudent action is to require that version 12.1 be subjected to the same level of operational test and evaluation as version 12.01, the level required to support a full-rate production decision. 
DOD agreed with our recommendation that it direct the Army not to procure more MCS computers until the completion of an initial operational test and evaluation and a full-rate production decision. It stated, however, that no further direction is needed because it had already provided direction to the Army on this issue. Specifically, the Department stated that it has directed the Army to extract the training base computers from the MCS program and not to procure or field more MCS hardware to operational units until successfully completing an initial operational test and evaluation. Our recommendation, however, is not limited to the hardware for operational units, but also encompasses the computers the Army plans to buy for the training base. Given the program's prior history and the fact that the training base computers are not needed to satisfy any of the legislated reasons for low-rate initial production, we continue to believe that the Army should not be allowed to buy those computers until MCS has successfully completed its initial operational test and evaluation—the original plan prior to the MCS initial operational test and evaluation's multiple schedule slips. DOD's comments are reprinted in their entirety in appendix I, along with our evaluation. In addition to those comments, we have revised our report where appropriate to reflect the technical changes that DOD provided in a separate letter. To determine whether the current MCS software development strategy is appropriate to overcome prior problems and to determine whether the Army should procure 207 new computers for the expansion of the MCS training base, we interviewed responsible officials and analyzed pertinent documents in the following DOD offices, all in Washington, D.C.: Director of Operational Test and Evaluation; Director of Test, Systems Engineering, and Evaluation; Assistant Secretary of Defense for Command, Control, Communications, and Intelligence; Under Secretary of Defense (Comptroller); and Defense Procurement. In addition, we interviewed responsible officials and analyzed test reports from the office of the Army's Project Manager, Operations Tactical Data Systems, Fort Monmouth, New Jersey; and the Army's Operational Test and Evaluation Command, Alexandria, Virginia. To meet our second objective, we also interviewed responsible officials and analyzed pertinent documents from the Army's Combined Arms Center, Fort Leavenworth, Kansas. We conducted our review from March to September 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairmen and Ranking Minority Members, Senate and House Committees on Appropriations, Senate Committee on Armed Services, and House Committee on National Security; the Director, Office of Management and Budget; and the Secretary of the Army. We will also make copies available to others on request. As you know, the head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of this report. A written statement must also be submitted to the Senate and House Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of the report. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. 
Major contributors to this report were Charles F. Rey, Bruce H. Thomas, and Gregory K. Harmon. The following are GAO's comments on the Department of Defense's (DOD) letter dated October 2, 1997. 1. In partially agreeing with this recommendation, DOD states that the Army will comply with DOD regulation 5000.2R and will follow guidance from the Director of Operational Test and Evaluation—guidance that lists multiple levels of operational test and evaluation (from an abbreviated assessment to a full operational test) and outlines a risk assessment methodology to be used to determine the level of testing to be performed. DOD does not, however, indicate whether it agrees or disagrees with our recommendation or state whether it will implement the recommendation. As we stated in the body of this report, given that a different contractor is building version 12.1 under a requirement to provide specific functionality, we believe that this development effort should not be viewed as building upon a proven baseline. Instead, version 12.1 development should be considered a new effort. As a result, we continue to believe that it is prudent to require that version 12.1 be subjected to the level of operational test and evaluation required to support a full-rate production decision. 2. DOD's direction to the Army only partially implements our recommendation. Our recommendation is not limited to the hardware for operational units, but also encompasses the computers the Army plans to buy for the training base. We continue to believe that the Army should not be allowed to buy the planned training base computers until MCS has successfully completed its initial operational test and evaluation—the original plan prior to the MCS initial operational test and evaluation's schedule slips. The training base computers are not required to satisfy any of the three purposes the law indicates for low-rate initial production—to (1) establish an initial production base, (2) permit an orderly increase in the production rate for the system sufficient to lead to full-rate production upon successful completion of operational test and evaluation, and (3) provide production-representative items for operational test and evaluation. Since the training base computers are not needed to satisfy one of the above legislated conditions, we continue to believe that the Army should refrain from buying any additional MCS computers prior to a full-rate production decision.
GAO reviewed the Army's developmental and acquisition plans for the Maneuver Control System (MCS), focusing on whether: (1) the current MCS software development strategy is appropriate to overcome prior development problems; and (2) 207 new computers for MCS-related training should be procured as planned. GAO noted that: (1) since its 1993 reorganization, the MCS has continued to experience development problems; (2) the initial operational test and evaluation of version 12.01 software has slipped 28 months, from November 1995 to March 1998, and interim tests have shown that significant software problems continue; (3) despite these problems, the Army awarded a contract in September 1996 for the concurrent development of the next software versions--12.1, 12.2, and 12.3--which are being developed by a new contractor and may involve substantially different software; (4) if the Army's current development strategy for the MCS is not strengthened, development problems may continue to occur; (5) currently, the Army's strategy allows: (a) less than full operational testing of version 12.1; and (b) development of follow-on versions 12.2 and 12.3 to start about 18 months before the operational testing of each version's predecessor; (6) despite the fact that the MCS has yet to undergo an initial operational test and evaluation or be approved for production, the Army plans to acquire 207 computers in fiscal years 1997 and 1998 to increase the number of computers available for system training; (7) program officials stated that they need to acquire the computers before operational testing to provide not only MCS specific training but also training for the larger Army Battle Command System, of which the Army Tactical Command and Control System and the MCS are major components; and (8) the 207 computers, however, are not needed to satisfy any of the three legislated reasons for low-rate initial production before an initial operational test and evaluation.
In our 2009 testimony, we reported that the Forest Service, working with the Department of the Interior, had taken steps to help manage perhaps the agency's most daunting challenge—protecting lives, private property, and federal resources from the threat of wildland fire—but that it continued to lack key strategies needed to use its wildland fire funds effectively. Over the past decade, our nation's wildland fire problem has worsened dramatically. Since 2000, wildland fires have burned, on average, more than twice as many acres annually as they did during the 1990s, and the Forest Service's wildland fire-related appropriations have also grown substantially, averaging approximately $2.3 billion over the past 5 years, up from about $722 million in fiscal year 1999. As we have previously reported, a number of factors have contributed to worsening fire seasons and increased firefighting expenditures, including an accumulation of flammable vegetation due to past land management practices; drought and other stresses, in part related to climate change; and increased human development in or near wildlands. The Forest Service shares federal responsibility for wildland fire management with four Interior agencies—the Bureau of Indian Affairs, Bureau of Land Management, Fish and Wildlife Service, and National Park Service. In our 2009 testimony we noted four primary areas we believed the Forest Service, in conjunction with Interior, needed to address to better respond to the nation's wildland fire problems. The agencies have taken steps to improve these areas, but work remains to be done in each. As a result, we continue to believe that these areas remain major management challenges for the Forest Service: Developing a cohesive strategy that identifies options and associated funding to reduce potentially hazardous vegetation and address wildland fire problems. In a series of reports dating to 1999, we have recommended that the Forest Service and Interior agencies develop a cohesive wildland fire strategy identifying potential long-term options for reducing fuels and responding to fires, as well as the funding requirements associated with the various options. We reported that, by laying out various potential approaches, their estimated costs, and the accompanying trade-offs, such a strategy would help Congress and the agencies make informed decisions about effective and affordable long-term approaches to addressing the nation's wildland fire problems. Congress echoed our call for a cohesive strategy in the Federal Land Assistance, Management, and Enhancement Act of 2009, which requires the agencies to produce a cohesive strategy consistent with our recommendations. In response, the agencies have prepared “Phase I” of the cohesive strategy, which, according to a Forest Service official, provides a general description of the agencies' approach to the wildland fire problem and establishes a framework for collecting and analyzing the information needed to assess the problem and make decisions about how to address it. The Phase I document has not yet been made final or formally submitted to Congress, even though the act requires the strategy to be submitted within 1 year of the act's 2009 passage. Once the document has been made final, according to this official, the agencies expect to begin drafting Phase II of the strategy, which will involve actual collection and analysis of data and assessment of different options. Establishing clear goals and a strategy to help contain wildland fire costs. 
The agencies have taken steps intended to help contain wildland fire costs, but they have not yet clearly defined their cost-containment goals or developed a strategy for achieving those goals—steps we first recommended in 2007. Without such fundamental steps, we continue to believe that the agencies cannot be assured that they are taking the most important steps first, nor can they be certain of whether or to what extent the steps they are taking will help contain costs. Agency officials identified several agency documents that they stated clearly define goals and objectives and that make up their strategy to contain costs. However, these documents lack the clarity and specificity needed by officials in the field to help manage and contain wildland fire costs. We therefore continue to believe that the agencies will be challenged in managing their cost-containment efforts and improving their ability to contain wildland fire costs. Continuing to improve processes for allocating fuel reduction funds and selecting fuel reduction projects. The Forest Service has continued to improve its processes for allocating funds to reduce fuels and select fuel reduction projects but has yet to fully implement the steps we recommended in 2007. These improvements, which we reported on in 2009 and which the agency has continued to build upon, include (1) the use of a computer model to assist in making allocation decisions, rather than relying primarily on historical funding patterns and professional judgment, and (2) the consideration, when making allocation decisions, of information on wildland fire risk and the effectiveness of fuel treatments. Even with these improvements, we believe the Forest Service will continue to face challenges in more effectively using its limited fuel reduction dollars unless it takes the additional steps that we have previously recommended. The agency, for example, still lacks a measure of the effectiveness of fuel reduction treatments and therefore lacks information needed to ensure that fuel reduction funds are directed to the areas where they can best minimize risk to communities and natural and cultural resources. And while Forest Service officials told us that they, in conjunction with Interior, had begun a comprehensive effort to evaluate the effectiveness of different types of fuel treatments, including the longevity of those treatments and their effects on ecosystems and natural resources, this endeavor is likely to be a long-term effort and to require considerable research investment. Taking steps to improve the use of an interagency budgeting and planning tool. Since 2008, we have been concerned about the Forest Service's and Interior's development of a planning tool known as fire program analysis, or FPA. FPA is designed to allow the agencies to analyze potential combinations of firefighting assets, and potential strategies for reducing fuels and fighting fires, to identify the most cost-effective among them. By identifying cost-effective combinations of assets and strategies within the agencies, FPA was also designed to help the agencies develop their wildland fire budget requests and allocate resources across the country. FPA's development continues to be characterized by delays and revisions, however, and the agencies are several years behind their initially projected timeline for using it to help develop their budget requests. 
The agencies collected nationwide data on available assets and strategies in fiscal years 2009 and 2010, but in neither case did the agencies have sufficient confidence in the quality of the data to use them to help develop their budget requests. FPA program officials told us that they are currently analyzing data collected early in fiscal year 2011 to determine the extent to which the data can be used to help develop the agencies’ fiscal year 2013 budget requests. The officials also told us they expect an independent external peer review of the science underlying FPA—a step we recommended in our 2008 report—to begin in May 2011. The agencies continue to take steps to improve FPA, but it is not clear how effective these steps will be in correcting the problems we have identified, and therefore we believe that the agencies will continue to face challenges in this area. Our 2009 testimony noted shortcomings in the completeness and accuracy of Forest Service data on activities and costs. Although we have not comprehensively reviewed the quality of all Forest Service data, we have encountered shortcomings during several recent reviews that reinforce our concerns. For example, during our review of appeals and litigation of Forest Service decisions related to fuel reduction projects, we sought to use the agency’s Planning, Appeals, and Litigation System, which was designed to track planning, appeals, and litigation information for all Forest Service decisions. During our review, however, we determined that the system did not contain all the information we believed was pertinent to decisions that had been appealed or litigated and that the information the system did contain was not always complete or accurate. As a result, we conducted our own survey of Forest Service field unit employees. Likewise, during our recent testimony on hardrock mining, we noted that the Forest Service had difficulty determining the number of abandoned hardrock mines on its land, and we were concerned about the accuracy of the data that the agency maintained. Further, we recently reported that the Forest Service does not track all costs associated with activities under its land exchange program—another area of concern in our 2009 testimony. One area that is expected to see improvements in the future is the completeness and accuracy of cost data, because in 2012 Agriculture is scheduled to replace its current Foundation Financial Information System with a new Financial Management Modernization Initiative system that includes managerial cost-accounting capabilities. Managerial cost accounting, rather than measuring only the cost of “inputs” such as labor and materials, integrates financial and nonfinancial data, such as the number of hours worked or number of acres treated, to measure the cost of outputs and the activities that produce them. Such an approach allows managers to routinely analyze cost information and use it in making decisions about agency operations and supports a focus on managing costs, rather than simply managing budgets. Such information is crucial for the Forest Service, as for all federal agencies, to make difficult funding decisions in this era of limited budgets and competing program priorities. According to Agriculture’s 2010 Performance and Accountability Report, the Forest Service has assessed its managerial cost accounting needs, and the cost-accounting module in the new system should allow the Forest Service to collect more-relevant managerial cost-accounting information. 
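To illustrate the kind of analysis managerial cost accounting supports, the short Python sketch below combines hypothetical financial data (labor and materials costs) with hypothetical nonfinancial data (acres treated) to compute a cost per acre of output. The activity names and dollar figures are illustrative assumptions, not Forest Service data, and the sketch does not represent the design of the Financial Management Modernization Initiative system.

# Illustrative sketch of managerial cost accounting: integrating financial data
# (dollars spent) with nonfinancial data (acres treated) to measure the cost of
# an output rather than only the cost of inputs. All figures are hypothetical.

activities = [
    # (activity, labor cost in dollars, materials cost in dollars, acres treated)
    ("Mechanical thinning", 450_000, 120_000, 1_500),
    ("Prescribed burning", 200_000, 40_000, 3_000),
]

for name, labor, materials, acres in activities:
    total_cost = labor + materials       # financial inputs
    cost_per_acre = total_cost / acres   # unit cost of the output
    print(f"{name}: ${total_cost:,} for {acres:,} acres (${cost_per_acre:,.0f} per acre)")

A manager with this kind of unit-cost information can compare activities and track costs against outputs over time, rather than tracking spending against budget lines alone.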
In 2009, we testified that the Forest Service had made sufficient progress resolving problems we identified with its financial management for us to remove the agency from our high-risk list in 2005 but that concerns about financial accountability remained. While we have not reexamined these issues in detail since that time, recent reports from Agriculture, including from the Office of the Inspector General, continue to identify concerns in this area. For example, in 2010 Agriculture’s Office of Inspector General reported six significant deficiencies—including poor coordination of efforts to address financial reporting requirements and weaknesses in internal controls for revenue-related transactions—although it did not find any of the deficiencies to be material weaknesses. Echoing these concerns about internal control weaknesses, Agriculture reported in its 2010 Performance and Accountability Report that the Forest Service needed to improve controls over its expenditures for wildland fire management and identified the wildland fire suppression program as susceptible to significant improper payments. The Forest Service likewise has not fully resolved the performance accountability concerns that we raised in our 2009 testimony. As we noted at that time, the agency’s long-standing performance accountability problems included an inability to link planning, budgeting, and results reporting. This concern was also raised by a 2010 Inspector General report, which stated that the major goals cited in the agency’s strategic plan did not match the categories in its Foundation Financial Information System. In other words, the Forest Service could not meaningfully compare its cost information with its performance measures. In addition to the management challenges we discussed in our 2009 testimony, several of our recent reviews have identified additional challenges facing the Forest Service—challenges that highlight the need for more effective program oversight and better strategic planning. In light of potential funding constraints resulting from our nation’s long-term fiscal condition, it is essential that the Forest Service be able to maximize the impact of its limited budget resources by exercising effective program oversight and appropriate strategic planning. Some recent concerns we have noted in this area include the following: Oversight of the land exchange process. As part of its land management responsibilities, the Forest Service acquires and disposes of lands through land exchanges—trading federal lands for lands owned by willing private entities, individuals, or state or local governments. In the past, we and others identified problems in the Forest Service’s land exchange program and made recommendations to correct them. However, in our 2009 report on the Forest Service’s land exchange program, we found that, although the agency had taken action to address most of the problems we had previously identified, it needed to take additional action to better oversee and manage the land exchange process so as to ensure that land exchanges serve the public interest and return fair value to taxpayers. In that report we made recommendations for the agency to, among other things, strengthen its oversight of the land exchange process, develop a national land tenure strategy, track costs, make certain training mandatory, and develop a formal system to track staff training. 
The Forest Service generally agreed with our recommendations, but as of October 2010, the agency had yet to develop a national land tenure strategy, track land exchange costs, require specific training for staff working on land exchanges, or fully implement a system to track attendance at training. Workforce planning. In recent reports, we and Agriculture's Inspector General have raised concerns about the Forest Service's ability to maintain an effective workforce through strategic workforce planning. In a 2010 report, we noted that the Forest Service (like Interior and the Environmental Protection Agency) had fallen short with respect to two of the six leading principles that we and others have identified as important to effective workforce planning: (1) aligning the agency's workforce plan with its strategic plan and (2) monitoring and evaluating its workforce-planning efforts. Without more clearly aligning its workforce plans with its strategic plan, and monitoring and evaluating its progress in workforce planning, as we recommended in that report, the Forest Service remains at risk of not having the appropriately skilled workforce it needs to effectively achieve its mission. In addition, we reported that the Forest Service developed and issued annual workforce plans containing information on emerging workforce issues and that the agency had identified recommendations to address these issues but had neither communicated those recommendations nor assigned responsibility for implementing them. For the Forest Service to further capitalize on its existing workforce-planning efforts, we recommended that the agency communicate its recommendations in its annual 5-year workforce plan, assign responsibility and establish time frames for implementing the recommendations, and track implementation progress. As of November 2010, the Forest Service had begun several actions to address our recommendations, although they had not yet been fully implemented. Workforce planning is of particular concern in the area of wildland firefighting. In March 2010, Agriculture's Inspector General reported that the Forest Service lacked a workforce plan specific to firefighters, despite the relatively high number of staff eligible to retire among those in positions critical to firefighting and the agency's own expectations of an increase in the size and number of fires it will be responsible for suppressing. As the Inspector General noted, a lack of qualified firefighters due to retirements and inadequate planning could jeopardize the Forest Service's ability to accomplish its wildland fire suppression mission, resulting in the loss of more property and natural resources and increased safety risks to fire suppression personnel. Strategic approaches for protecting and securing federal lands. In 2010, we issued reports examining different aspects of the Forest Service's response to illegal activities occurring on the lands it manages, including human and drug smuggling into the United States. For example, we reported that the Forest Service, like other federal land management agencies, lacks a risk-based approach to managing its law enforcement resources and concluded that without a more systematic method to assess risks posed by illegal activities, the Forest Service could not be assured that it was allocating scarce resources effectively. 
For federal lands along the United States border, we reported that communication and coordination between Border Patrol and federal land management agencies, including the Forest Service, had not been effective in certain areas, including the sharing of intelligence and threat information, deployment plans, and radio communications between the agencies. In light of these shortcomings, and to better protect resources and the public, we recommended that the Forest Service adopt a risk-based approach to better manage its law enforcement resources and, in conjunction with the Department of the Interior and the Department of Homeland Security, take steps to improve communication and coordination between the agencies. The Forest Service concurred with our recommendations. Management strategies for the use of off-highway vehicles (OHV). Over the past few decades, the use of OHVs on federal lands has become a popular form of recreation, although questions have been raised about the effects of OHV use on natural resources and on other visitors. In 2009, we reported that the Forest Service's plans for OHV management lacked key elements of strategic planning, such as results-oriented goals, strategies to achieve the goals, time frames for implementing strategies, and performance measures to monitor incremental progress. We recommended that the Forest Service take a number of steps to provide quality OHV recreational opportunities while protecting natural and cultural resources on federal lands, including identifying additional strategies to improve OHV management, time frames for carrying out the strategies, and performance measures for monitoring progress. As of June 2010, the Forest Service had several actions under way to address our recommendations, but none were yet complete. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Steve Gaty, Assistant Director; Andrea Wamstad Brown; Ellen W. Chu; Jonathan Dent; Griffin Glatt-Dowd; and Richard P. Johnson. Federal Lands: Adopting a Formal, Risk-Based Approach Could Help Land Management Agencies Better Manage Their Law Enforcement Resources. GAO-11-144. Washington, D.C.: December 17, 2010. Border Security: Additional Actions Needed to Better Ensure a Coordinated Federal Response to Illegal Activity on Federal Lands. GAO-11-177. Washington, D.C.: November 18, 2010. Workforce Planning: Interior, EPA, and the Forest Service Should Strengthen Linkages to Their Strategic Plans and Improve Evaluation. GAO-10-413. Washington, D.C.: March 31, 2010. Forest Service: Information on Appeals, Objections, and Litigation Involving Fuel Reduction Activities, Fiscal Years 2006 through 2008. GAO-10-337. Washington, D.C.: March 4, 2010. Wildland Fire Management: Federal Agencies Have Taken Important Steps Forward, but Additional, Strategic Action Is Needed to Capitalize on Those Steps. GAO-09-877. Washington, D.C.: September 9, 2009. Hardrock Mining: Information on State Royalties and the Number of Abandoned Mine Sites and Hazards. GAO-09-854T. Washington, D.C.: July 14, 2009. Federal Lands: Enhanced Planning Could Assist Agencies in Managing Increased Use of Off-Highway Vehicles. GAO-09-509. 
Washington, D.C.: June 30, 2009. Federal Land Management: BLM and the Forest Service Have Improved Oversight of the Land Exchange Process, but Additional Actions Are Needed. GAO-09-611. Washington, D.C.: June 12, 2009. Forest Service: Emerging Issues Highlight the Need to Address Persistent Management Challenges. GAO-09-443T. Washington, D.C.: March 11, 2009.
The Forest Service, within the Department of Agriculture, manages over 190 million acres of national forest and grasslands. The agency is responsible for managing its lands for various purposes--including recreation, grazing, timber harvesting, and others--while ensuring that such activities do not impair the lands' long-term productivity. Numerous GAO reports examining different aspects of Forest Service programs--including a testimony before this Subcommittee in 2009--have identified persistent management challenges facing the agency. In light of the federal deficit and long-term fiscal challenges facing the nation, the Forest Service cannot ensure that it is spending its limited budget effectively and efficiently without addressing these challenges. This testimony highlights some of the management challenges facing the Forest Service today and is based on recent reports GAO has issued on a variety of the agency's activities. In 2009, GAO highlighted management challenges that the Forest Service faced in three key areas--wildland fire management, data on program activities and costs, and financial and performance accountability. The Forest Service has made some improvements, but challenges persist in each of these three areas. In addition, recent GAO reports have identified additional challenges related to program oversight and strategic planning. Strategies are still needed to ensure effective use of wildland fire management funds. In numerous previous reports, GAO has highlighted the challenges the Forest Service faces in protecting the nation against the threat of wildland fire. The agency continues to take steps to improve its approach, but it has yet to take several key steps--including developing a cohesive wildland fire strategy that identifies potential long-term options for reducing hazardous fuels and responding to fires--that, if completed, would substantially strengthen wildland fire management. Incomplete data on program activities remain a concern. In 2009, GAO concluded that long-standing data problems plagued the Forest Service, hampering its ability to manage its programs and account for its costs. While GAO has not comprehensively reviewed the quality of all Forest Service data, shortcomings identified during several recent reviews reinforce these concerns. For example, GAO recently identified data gaps in the agency's system for tracking appeals and litigation of Forest Service projects and in the number of abandoned hardrock mines on its lands. Even with improvements, financial and performance accountability shortcomings persist. Although its financial accountability has improved, the Forest Service continues to struggle to implement adequate internal controls over its funds and to demonstrate how its expenditures relate to the goals in the agency's strategic plan. For example, in 2010 Agriculture reported that the agency needed to improve controls over its expenditures for wildland fire management and identified the wildland fire suppression program as susceptible to significant improper payments. Additional challenges related to program oversight and strategic planning have been identified. Several recent GAO reviews have identified additional challenges facing the Forest Service, which the agency must address if it is to effectively and efficiently fulfill its mission. Specifically, the agency has yet to develop a national land tenure strategy that would protect the public's interest in land exchanges and return fair value to taxpayers from such exchanges. 
In addition, it has yet to take recommended steps to align its workforce planning with its strategic plan, which may compromise its ability to carry out its mission; for example, it has not adequately planned for the likely retirement of firefighters, which may reduce the agency's ability to protect the safety of both people and property. Finally, the Forest Service needs a more systematic, risk-based approach to allocate its law-enforcement resources. Without such an approach it cannot be assured that it is deploying its resources effectively against illegal activities on the lands it manages. GAO has made a number of recommendations intended to improve the Forest Service's management of wildland fires, strengthen its collection of data, increase accountability, and improve program management. The Forest Service has taken steps to implement many of these recommendations, but additional action is needed if the agency is to make further progress in rectifying identified shortcomings.
To determine the extent to which NNSA has been able to overcome technical challenges producing tritium, we visited and interviewed officials from the Pacific Northwest National Laboratory, where the TPBARs were designed and where work continues to overcome technical problems, and WesDyne Corporation, NNSA’s contractor that fabricates the TPBARs. In addition, we reviewed TVA tritium management plans and reports. We examined amendments to TVA’s operating license for the Watts Bar plant issued by NRC that approved TVA’s irradiation of TPBARs. We also reviewed relevant NRC regulations and documents related to TVA tritium activities and interviewed officials from NRC and the Defense Nuclear Facilities Safety Board, an independent agency established in 1988 to oversee the safety of DOE’s nuclear facilities. We also visited and interviewed officials at TVA’s Watts Bar 1 nuclear power plant, where TPBARs are irradiated, and SRS, where the TPBARs are processed to extract tritium for nuclear warheads. To determine the extent to which NNSA is able to meet current and future nuclear weapons stockpile requirements for tritium, we reviewed NNSA’s tritium production plans as well as requirements documents prepared by DOD and NNSA, such as the 2010 Nuclear Posture Review. We also reviewed NNSA’s strategic plans for the Tritium Readiness Program, including program execution and implementation plans; past and planned schedules for completing TPBAR fabrication, transportation, irradiation, and extraction activities; and the program’s risk management plan. We also interviewed NNSA officials responsible for developing these plans. Finally, to assess the management of NNSA’s Tritium Readiness Program, we reviewed contracts between NNSA and WesDyne, as well as budget and expenditure data obtained from DOE’s Office of Programming, Planning, Budget, and Evaluation. In addition, we examined past expenditure projections, contracts and subcontracts for TPBAR fabrication, and NNSA’s planned and actual work schedules for conducting and completing TPBAR fabrication, transportation, irradiation, and extraction activities. We determined that the data used was sufficiently reliable for the purposes of our report. We conducted this performance audit from October 2009 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Tritium is a radioactive isotope of hydrogen that exists naturally in the environment, but in amounts that are too small for practical recovery. Tritium is produced artificially when lithium-6 is bombarded with neutrons (particles within an atom that have no electrical charge) in the core of a nuclear reactor. When present in the center of a nuclear weapon at the instant of its detonation, tritium undergoes nuclear fusion, releasing enormous amounts of energy and significantly increasing the explosive power, or “yield,” of the weapon. From 1954 until 1988, the United States produced the majority of its tritium using nuclear reactors at SRS. When the last of SRS’s reactors ceased operations for safety reasons in 1988, the United States lost its capability to produce tritium for the nuclear weapons stockpile. 
In August 1993 we reported that significant reductions in the U.S. nuclear weapons stockpile as a result of, among other things, arms reduction treaties signed with Russia would result in sufficient supplies of tritium through 2012 without the need to produce any new tritium. We reported, however, that after that date a new source of tritium would be required for the stockpile. To re-establish the nation’s tritium production capability, NNSA’s predecessor—DOE’s Office of Defense Programs—studied two different approaches to make tritium. The first involved building an accelerator to produce tritium. This device would accelerate protons (particles within an atom that have a positive electrical charge) to nearly the speed of light. The protons would be crashed into tungsten, releasing neutrons through a process called spallation, which can be used to change helium into tritium. After extensive research and development of accelerator-based tritium production technology, DOE abandoned this approach. The second approach DOE pursued was to produce tritium using commercial nuclear power reactors. In such a reactor, components called burnable absorber rods are used to control the reactivity of the core in a nuclear reactor during power production. With the support of Sandia National Laboratories and the Idaho National Laboratory using Idaho’s Advanced Test Reactor, the Pacific Northwest National Laboratory designed a new rod—called a TPBAR—that could be substituted for standard burnable absorber rods in the reactor. As the commercial reactor produces power, the TPBARs are irradiated, controlling the nuclear reaction while simultaneously producing tritium. The tritium produced within the TPBAR is stored within the rod by a nickel-plated component known as a “getter.” (See figure 1.) In 1999 DOE entered into an interagency agreement with TVA to irradiate TPBARs in TVA’s Watts Bar and Sequoyah nuclear power reactors. DOE, and subsequently NNSA after its establishment in 2000, pays TVA an irradiation fee as well as reimburses TVA for any additional costs associated with TPBAR irradiation. The agreement anticipates that TVA would be paid approximately $1.5 billion for its costs over the agreement’s 35-year term. To allow it to irradiate TPBARs in the reactor, TVA applied to NRC for an amendment to its operating license. After completing a safety evaluation, NRC issued a license amendment in 1997 that allowed TVA to irradiate 32 TPBARs for testing purposes and, following successful testing, issued another amendment in 2002 that allowed TVA to load up to 2,304 TPBARs in the Watts Bar 1 reactor per reactor operating cycle. In 2003 the first TPBARs intended to produce tritium for the nuclear weapons stockpile were loaded into the Watts Bar 1 reactor and were removed approximately 18 months later as part of the reactor’s normal refueling cycle. To date, only the Watts Bar 1 reactor has been used to irradiate TPBARs. The first TPBARs were fabricated by the Pacific Northwest National Laboratory, which designed the rods as well as the tritium production processes associated with them. In 2000 NNSA contracted with WesDyne International—a subsidiary of Westinghouse—to fabricate TPBARs. WesDyne procures and maintains an inventory of TPBAR components and assembles TPBARs at a Westinghouse facility in Columbia, South Carolina. This facility also supplies nuclear fuel for TVA’s Watts Bar 1 reactor. 
The Pacific Northwest National Laboratory continues to serve as the TPBAR design agent, developing design changes as needed and supporting WesDyne's fabrication of TPBARs. The laboratory also maintains a backup capability to produce TPBARs in the event WesDyne becomes unable or unwilling to fulfill its contract with NNSA. Once fabricated, the TPBARs are shipped to Watts Bar, where they are loaded into the reactor core during a normal refueling outage. After being irradiated for approximately 18 months, the TPBARs are removed from the reactor core and, after cooling for several months, are transported to SRS. The TPBARs, which are now highly radioactive because of the time spent inside the reactor, are processed at a specialized new Tritium Extraction Facility at SRS. This facility, which began operations in 2007 at a cost of nearly $500 million, cuts the tops off the TPBARs and processes them to extract tritium. Waste from the extraction process, such as scrap pieces from cut-apart TPBARs, is permanently disposed of as low-level radioactive waste. The steps involved in NNSA's tritium production enterprise are illustrated in figure 2. Tritium extracted from TPBARs is then loaded into specially designed reservoirs that are shipped to DOD for installation into nuclear weapons. Tritium reservoirs are periodically removed from each weapon in the stockpile as part of routine maintenance and then shipped to SRS, where any remaining tritium that has not decayed is recovered. The reservoirs are then refilled with tritium and returned to DOD. Although the Pacific Northwest National Laboratory has redesigned several components within the TPBARs to reduce the amount of tritium permeating into the reactor coolant at the Watts Bar 1 reactor, tritium is still leaking from the TPBARs at higher-than-expected rates. As a result, significantly fewer TPBARs than planned are being irradiated in the reactor, which has considerably reduced the amount of tritium NNSA is producing. NNSA and TVA officials told us that they are developing plans to increase the number of TPBARs being irradiated and the number of reactors participating in tritium production, as well as plans to modify the reactors to better manage tritium releases to the environment. However, to date, these plans have not been actively coordinated with NRC, which ultimately must approve any modifications to reactor operations. NNSA has been unable to solve the technical challenges it has been experiencing in producing tritium. Specifically, tritium is permeating from the TPBARs at higher-than-expected rates into the water used to cool the reactor core at TVA's Watts Bar 1 nuclear plant rather than being captured in the TPBARs as designed. Watts Bar's operating license is based on the assumption that 2,304 TPBARs would be loaded into the reactor and that tritium would permeate from the TPBARs into the reactor coolant at an average rate of 1.0 curie of tritium per year per TPBAR. However, according to NNSA reports, tritium is permeating from the TPBARs at levels of up to 4.2 curies of tritium per year per TPBAR out of a total of 10,000 curies produced by one TPBAR. To keep the total amount of tritium released into the reactor coolant below regulatory limits, TVA has limited the number of TPBARs being irradiated in the Watts Bar 1 reactor, according to TVA officials. 
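The arithmetic behind this limitation can be sketched briefly. The Python fragment below assumes, for illustration only, that the license basis of 2,304 TPBARs at an average of 1.0 curie per TPBAR per year implies a coolant permeation allowance on the order of 2,304 curies per year; that allowance is our simplifying assumption, not a figure from the operating license, NRC, or NNSA.

# Rough sketch: why higher-than-expected permeation forces a lower TPBAR count.
# The ~2,304 curie-per-year allowance is an assumption inferred from the license
# basis (2,304 rods x 1.0 curie per rod per year), not an NRC or NNSA figure.

license_basis_rods = 2_304
expected_rate = 1.0   # curies per TPBAR per year (license assumption)
observed_rate = 4.2   # curies per TPBAR per year (reported upper bound)

assumed_allowance = license_basis_rods * expected_rate
rods_supportable = int(assumed_allowance // observed_rate)
current_release = 240 * observed_rate

print(f"Assumed permeation allowance: {assumed_allowance:,.0f} curies per year")
print(f"TPBARs supportable at {observed_rate} curies per rod per year: about {rods_supportable}")
print(f"240 TPBARs at {observed_rate} curies per rod per year: about {current_release:,.0f} curies per year")

On this simplified view, a per-cycle limit in the mid-500s is broadly consistent with the plan, discussed later, to increase irradiation to 544 TPBARs per cycle, although the actual limit reflects NRC and EPA effluent requirements and TVA's coolant management practices rather than this single ratio.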
NNSA’s original plans called for irradiating 1,160 TPBARs per reactor fueling cycle by 2010 before ramping up to nearly 2,700 TPBARs per fueling cycle by 2013 using both the Watts Bar 1 reactor and TVA’s Sequoyah 1 reactor. However, as a result of the tritium permeation problem, TVA currently irradiates only 240 TPBARs per fueling cycle using only the Watts Bar 1 reactor. While the interagency agreement between DOE and TVA allows NNSA to use the two Sequoyah reactors to irradiate TPBARs, TVA officials told us that TVA is reluctant to allow NNSA to use these reactors because, among other things, TVA would prefer to meet tritium requirements using only a single reactor. The Pacific Northwest National Laboratory has redesigned several components within the TPBARs in an attempt to reduce the amount of tritium permeating into the reactor coolant. For example, national laboratory researchers have modified the nickel-plated “getter” in the TPBAR to better capture tritium within the rod. However, the redesign has produced no discernible improvement in TPBAR performance, and tritium is still permeating from the TPBARs at higher-than-expected rates. NNSA, TVA, and national laboratory officials told us that the obvious design changes to address the tritium permeation problem have been made and that scientists and engineers charged with investigating the issue and identifying solutions have not been able to identify the root cause of the permeation problem. NNSA officials told us that it is unknown whether any technical breakthrough will be made to substantially correct the problem. However, scientists and engineers at the Pacific Northwest National Laboratory are continuing to conduct research to identify the root cause of the permeation problem and to determine whether a technical solution can be found. Because significantly fewer TPBARs are being irradiated than NNSA originally called for, much less tritium is being produced than NNSA planned. As a result, SRS’s Tritium Extraction Facility, which began operations in 2007, cost nearly $500 million to build, and costs approximately $30 million per year to operate, sits essentially idle for 9 months out of the year. During this time, equipment and systems must be routinely maintained while NNSA prepares for the 3 months the facility operates during the year. At congressional direction, NNSA investigated shutting down the Tritium Extraction Facility completely for an extended period until sufficient TPBARs had been irradiated to justify continuous operations. However, NNSA determined that shutting down the facility for an extended period would cost at least $60 million more over 10 years than continuing to maintain it for limited operations. According to NNSA officials, these additional costs consist of, among other things, costs to replace and/or recertify the operational readiness of equipment that would degrade during the time the facility was shut down. Faced with significantly lower tritium production than originally planned due to tritium permeation, NNSA and TVA have been developing plans to increase the number of TPBARs being irradiated at Watts Bar 1 during each reactor fueling cycle as well as the number of reactors irradiating TPBARs, according to NNSA and TVA officials. Planning continues to be adjusted based upon changes to tritium requirements that are still being determined. 
Although these plans have changed several times over the past year and are still subject to significant uncertainty, current plans call for the number of TPBARs being irradiated in the Watts Bar 1 reactor to increase from 240 per cycle to 544 per cycle for the next three fueling cycles beginning in 2011, according to NNSA officials. In addition, NNSA and TVA are developing plans to irradiate TPBARs, using TVA’s Sequoyah 1 and Sequoyah 2 reactors—as provided for in the interagency agreement between DOE and TVA—beginning in 2017 if this proves necessary to meet tritium requirements. NNSA and TVA officials also told us that they discussed the option of using the Watts Bar 2 reactor, which is currently under construction. However, this reactor will not be operational until 2012 at the earliest and is not included in the interagency agreement between DOE and TVA. Moreover, TVA likely would not attempt to irradiate TPBARs in it until its second or third fueling cycle—18 to 36 months after the reactor begins operations. Therefore, according to TVA officials, Watts Bar 2 is no longer being considered to irradiate TPBARs. NNSA and TVA are also discussing a number of modifications to the Watts Bar reactor to ensure that any tritium released from the reactor coolant into the environment stays below regulatory limits, according to NNSA and TVA officials. Specifically: NNSA and TVA officials told us that they are considering the construction of a large holding tank at the Watts Bar 1 reactor that could be used to more effectively manage the presence of tritium in the reactor coolant. A large holding tank will enable TVA to better control the timing of releases of coolant containing tritium over time to stay within NRC and EPA limits. NNSA’s initial cost estimate for the construction of a large holding tank is approximately $13 million and may increase annual operations costs by as much as $500,000. NNSA and TVA officials also told us that they considered constructing a tritium removal system at the reactors to remove excess tritium from reactor coolant water before it is released into the Tennessee River. NNSA’s initial cost estimate for the construction of a tritium removal system is approximately $50 to $60 million per reactor and would add $9 to $10 million in annual operations costs. According to NNSA officials, NNSA and TVA are continuing to monitor the development of this technology. According to NNSA and TVA officials, NNSA, with the cooperation of TVA, is assessing the environmental impacts associated with irradiating increased numbers of TPBARs with higher-than-expected rates of tritium permeation. Such an increase would have to be approved by NRC and incorporated into an amendment to the reactors’ operating licenses. TVA officials told us that reactor license amendments cost up to $5 million. In addition, NNSA officials told us that completing this environmental analysis could cost between $2 million and $5 million. NNSA and TVA officials, however, have not been actively coordinating their plans with NRC, which ultimately must approve these plans and incorporate them into operating license amendments for the TVA reactors. At the time we spoke with them, NRC officials were not aware that fewer TPBARs than planned were being irradiated in the Watts Bar 1 reactor. Subsequently, in a February 2010 letter from TVA, the NRC was officially informed of how many TPBARs were being irradiated in the reactor. 
With regard to plans that were discussed to irradiate TPBARs in the Watts Bar 2 reactor when it is completed, NRC officials pointed out that technical issues that usually accompany any new reactor startup may not be resolved in time for TPBARs to be irradiated by the reactor’s second fueling cycle. NRC officials were also not informed of proposals being developed to install reactor coolant holding tanks or tritium removal systems at the reactors and of potential future license amendment applications to increase the amount of tritium the reactors would be allowed to release into the environment. NRC’s approval of these modifications, such as the construction of tritium removal systems at the TVA reactors, is uncertain because, according to NRC officials, there is currently no regulatory framework for the construction and operation of tritium effluent management technologies in the United States. DOD is responsible for implementing the U.S. nuclear deterrent strategy, which includes establishing the military requirements associated with planning for the nuclear weapons stockpile. NNSA and DOD work together to produce the Nuclear Weapons Stockpile Memorandum. This memorandum outlines a proposed plan for the President to sign to guide U.S. nuclear stockpile activities. This plan specifies the size and composition of the stockpile and other information concerning adjustments to the stockpile for a projected multi-year period. While the exact requirements are classified, NNSA uses the detailed information included in the memorandum on the number of weapons to be included in the stockpile to determine the amount of tritium needed to maintain these weapons. In addition, NNSA maintains a reserve of additional tritium to meet requirements in the event of an extended delay in tritium production. Small quantities of tritium are also needed by the national laboratories and other entities for scientific research and development purposes. According to NNSA officials, NNSA is meeting current requirements through a combination of harvesting tritium obtained from dismantled nuclear warheads and producing lower-than-planned amounts of tritium through the irradiation of TPBARs in the Watts Bar 1 reactor. However, tritium in the stockpile as well as in NNSA’s tritium reserve continues to decay, making increased production of tritium critical to NNSA’s ability to continue meeting requirements. Although the number of nuclear weapons in the U.S. stockpile is decreasing, these reductions are unlikely to result in a significant decrease to tritium requirements. Specifically, the New Strategic Arms Reduction Treaty signed in April 2010, if ratified by the Senate, will reduce the number of deployed strategic nuclear warheads by 30 percent. However, it has not yet been determined whether some or all of these warheads will be maintained in reserve—where the warheads would continue to be loaded with tritium—or dismantled—where the tritium could be removed from the weapons. Moreover, even if some or all of the warheads reduced under the treaty were dismantled, tritium requirements are unlikely to decrease by a significant amount. While the specific reasons for this lack of decrease in tritium requirements are classified, NNSA officials we spoke with said that the additional tritium supply that would be available as a result of increased warhead dismantlements is unlikely to fill what they estimate will be a steady tritium demand in the future. To date, NNSA has not had to use tritium in the reserve it maintains. 
However, according to NNSA officials, use of some of the tritium reserve in the relatively near future may be necessary if NNSA is unable to increase tritium production beyond its current level of 240 TPBARs being irradiated in a single reactor. In addition, if NNSA takes longer than expected to increase tritium production, even reserve quantities may be insufficient to meet requirements for an extended period of time. Information on the dates when NNSA will need to begin using the tritium reserve and when the reserve will be depleted is classified. Nevertheless, NNSA officials told us that they were confident that NNSA will be able to meet tritium requirements in the future without substantially reducing the nation’s tritium reserve and are not considering alternative ways of producing tritium for the stockpile. Although NNSA has attempted to ensure a reliable long-term supply of tritium, our review found two problems with NNSA’s management of the Tritium Readiness Program. First, NNSA was unable to provide us with evidence about its adherence to the appropriate contracting procedures when purchasing components and services for the Tritium Readiness Program. Second, because of, among other things, the contract structure NNSA has entered into with suppliers of components and services for the Tritium Readiness Program, program funds are being expended much more slowly than planned. As a result, the program is accumulating large unexpended funding balances beyond thresholds established by DOE. NNSA relies largely on commercial suppliers to provide TPBARs, TPBAR components, and other services to the program through fixed price contracts. Although the Pacific Northwest National Laboratory originally designed the TPBARs and fabricated initial supplies, NNSA believed that the commercial sector was better able to meet nuclear industry quality requirements at lower cost. Therefore, in 2000, NNSA entered into a contract with WesDyne International to manufacture TPBARs. WesDyne International is a subsidiary of Westinghouse, which is owned by the Japanese company Toshiba. Because of the relatively few companies capable of manufacturing TPBAR components, and to minimize the possibility of one of these companies exiting the industry or losing interest in working with the program, the contract was structured as a 44-year fixed price contract with an approximately 4-year initial phase and a 40-year second phase consisting of a 10-year base period and three 10-year options. According to NNSA officials, a 44-year fixed price contract with lengthy options was intended to assure companies that there would be sufficient work required far enough into the future to make a contractor’s initial investment in new facilities and capabilities worthwhile. Because of the highly specialized manufacturing processes involved in fabricating TPBARs, the relatively low production quantities planned by the program, and the length of time required to set up facilities for manufacturing classified components, NNSA identified the loss of one or more component suppliers as a major program risk. For example, several components can only be obtained from a single supplier, and if any of these companies were to decide it was no longer profitable to continue working with NNSA or were acquired by foreign firms, it could take NNSA several years and millions of dollars to find and develop a new supplier. 
While these considerations led NNSA to use a 44-year contract to procure TPBARs, NNSA did not provide us evidence that it adhered to the appropriate contracting procedures typically involved when entering into a contract of this length. Federal statutes as implemented by the Federal Acquisition Regulation are the principal set of rules that govern the process through which the federal government acquires and purchases goods and services. NNSA officials did not document the legal authority used in entering into a contract of this length. In contrast, NNSA waived application of a statutory provision prohibiting contract awards under certain circumstances to foreign-controlled entities—by permitting a foreign-owned company to produce TPBARs—and provided us with evidence of its compliance with the waiver requirements. In its comments on a draft of this report, NNSA stated that it provided documentation of a solicitation review that was conducted as well as its explanation of its legal authority to enter into contracts with periods of performance in excess of 5 years. While we agree that a review of the solicitation took place, the documentation NNSA provided contained no evidence that the long period of performance of this contract—a period of performance that NNSA agreed in its comments was unusually long—was considered as part of this solicitation review. NNSA asserts that it followed the appropriate procedures when approving a contract of this length. However, the procedures NNSA cited in its comments were not implemented until about 10 years after the contract with WesDyne was initially awarded. Moreover, while NNSA claimed that it had the legal authority to enter into a contract of this length, none of the documentation NNSA provided to us before we sent our draft report to NNSA for its comments stated the specific legal authority that was used to enter into a contract of this length. In fact, it was not until NNSA’s comments on our draft report that it provided us with its explanation of its legal authority to enter into contracts with periods of performance in excess of 5 years. NNSA is spending program funds more slowly than planned and has accumulated large amounts of unexpended funding. NNSA receives “no-year” appropriations from Congress that have no limit on how long the agency may take to obligate and expend those funds. However, to ensure that large amounts of unexpended funding that could be better used for other purposes do not accumulate, DOE has established thresholds of acceptable levels of unexpended funds that may be carried over from one fiscal year to the next. DOE also analyzes individual program budgets to determine a percentage of program funds which each program can reasonably be expected to carry over each year. For example, in fiscal year 2009, DOE determined that NNSA’s Tritium Readiness Program could expect to carry over 16 percent—or approximately 2 months’ worth—of funding, or $20.7 million. However, the program has routinely exceeded DOE’s threshold for unexpended funds. For example, it exceeded the threshold by $23.4 million at the end of fiscal year 2006, $27.6 million at the end of fiscal year 2007, $48.4 million at the end of fiscal year 2008, and $39.1 million at the end of fiscal year 2009. Officials with the Tritium Readiness Program estimate that the program will exceed DOE’s threshold by approximately $50 million by the end of fiscal year 2010. Table 1 outlines the Tritium Readiness Program’s unexpended funds. 
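The relationship between DOE's carryover threshold and the program's unexpended balances can be expressed with simple arithmetic. The sketch below uses the fiscal year 2009 figures cited above; the implied program budget is our own back-of-the-envelope calculation, and the helper names are illustrative rather than DOE or NNSA terminology.

```python
# Sketch of how DOE's carryover threshold relates to the Tritium Readiness
# Program's unexpended balances, using the FY 2009 figures cited in this report.
# The implied budget is our own arithmetic, not a figure reported by DOE or NNSA.

THRESHOLD_RATE_FY2009 = 0.16       # DOE allowed ~16% of program funds to carry over
THRESHOLD_DOLLARS_FY2009 = 20.7e6  # ...which DOE put at about $20.7 million
EXCESS_FY2009 = 39.1e6             # amount by which the program exceeded the threshold

# A 16 percent threshold worth $20.7 million implies an annual program budget
# of roughly $20.7M / 0.16 (about two months' worth of funds).
implied_budget = THRESHOLD_DOLLARS_FY2009 / THRESHOLD_RATE_FY2009

# Total unexpended funds are the allowed carryover plus the reported excess.
total_unexpended = THRESHOLD_DOLLARS_FY2009 + EXCESS_FY2009

print(f"Implied FY 2009 program budget : ${implied_budget/1e6:6.1f} million")
print(f"Allowed carryover (threshold)  : ${THRESHOLD_DOLLARS_FY2009/1e6:6.1f} million")
print(f"Unexpended funds at year end   : ${total_unexpended/1e6:6.1f} million")
print(f"Amount above DOE's threshold   : ${EXCESS_FY2009/1e6:6.1f} million")
```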
The contract structure NNSA has entered into with suppliers of components and services contributes to these high unexpended funding balances. An agency must generally obligate the full amount of a contract at the time it enters into the contract. These obligated funds are then expended over time as components and other services are delivered to NNSA by the contractor. Although NNSA’s TPBAR fabrication contract is structured as a 44-year contract with 10-year options, the program has been funding each option in 5-year increments. Under this arrangement, the program obligates sufficient funds for 5 years at the beginning of each increment, which NNSA officials told us should result in high unexpended funding balances during the first year, which are gradually reduced over the following 5 years as the program pays out the funds to its contractors. NNSA also uses a number of 3- to 4-year subcontracts to procure TPBAR components, which also require funding at the time NNSA enters into the contract and are often awarded in different years than the main contract’s 5-year periods. Consequently, NNSA’s contracting strategy periodically results in high levels of unexpended funds as funds for different awards are obligated and expended at different times. However, the fact that fewer than expected numbers of TPBARs are being irradiated in the Watts Bar 1 reactor is also contributing to NNSA’s accumulation of large unexpended funding balances. Irradiating fewer than expected TPBARs affects the program’s costs by lowering the total irradiation fees NNSA pays to TVA for each reactor cycle. Specifically, NNSA pays TVA an irradiation fee of $4,950 per year per TPBAR irradiated. Irradiating fewer than expected TPBARs has also lowered expenses associated with operating the Tritium Extraction Facility at SRS. In addition, funds under NNSA’s contract for TPBAR fabrication are being expended much more slowly than planned. In 2008 and 2009, the program planned to order 812 TPBARs from WesDyne, but due to the permeation problem at Watts Bar, the program eventually reduced that number to 240. Furthermore, NNSA’s contract with WesDyne originally planned for fabricating more than 2,500 TPBARs between 2004 and 2009, but NNSA had ordered fewer than half that many by the end of fiscal year 2009. Because fewer TPBARs are being ordered than originally planned for, the price to fabricate each TPBAR has increased over time from about $700 per TPBAR in 2000 to approximately $1,300 per TPBAR today. NNSA and WesDyne officials told us that the price per TPBAR is likely to increase further when the next contract increment is finalized later this year. While large unexpended funding balances do not necessarily indicate that the tritium program is being mismanaged, the fact that they have been increasing indicates that NNSA is requesting more funding than it needs on an annual basis—funds that could be appropriated for other purposes. From fiscal year 2006 to fiscal year 2008, NNSA’s unexpended balances in the Tritium Readiness Program in excess of DOE’s threshold more than doubled from $23.4 million to $48.4 million, and as a result, Congress reduced the program’s funding by $10.4 million for fiscal year 2009. Although the program’s unexpended funds were lower at the end of fiscal year 2009, this was largely due to $8.7 million which was deobligated at the end of the year because of an ongoing subcontract proposal audit. 
These funds were returned to the program in fiscal year 2010, and had they not been deobligated, the program’s unexpended balances would have remained approximately the same from fiscal year 2008 to fiscal year 2009, even with the congressional reduction in funding. Finally, by the end of the second quarter of fiscal year 2010, NNSA had spent less than half the funds it had originally planned to spend by that time, and NNSA officials stated that the program will likely end fiscal year 2010 with even higher levels of unexpended funds. Thus, while NNSA’s contracting approach does contribute to its high unexpended funds, the fact that these unexpended funds are increasing each year indicates that the program is receiving more funding than it is able to execute due to the reduced scope of work caused by the tritium permeation problem. NNSA’s inability to overcome the technical challenges and meet its original tritium production goals has raised serious questions about the agency’s ability to provide a reliable source of tritium to maintain the nation’s nuclear weapons stockpile in the future. While NNSA has taken steps to attempt to solve the tritium permeation problem, it is unlikely that anything less than a complete redesign of the TPBARs will solve the problem. Unfortunately, existing supplies of tritium in the stockpile and the tritium reserve are unlikely to fulfill requirements for the time a complete redesign would take. It is also not clear that a redesign would solve the problem since NNSA does not fully understand the reasons behind tritium permeation. Therefore, NNSA and TVA are working together to not only increase the number of TPBARs being irradiated in the Watts Bar 1 reactor but also to increase the number of reactors being used for the program. Increasing the number of TPBARs irradiated will also require substantial and costly modifications to TVA facilities to ensure that tritium emissions comply with applicable nuclear safety and environmental regulations. Because such modifications to the operation of TVA’s reactors must be approved by NRC, it is important that NNSA and TVA coordinate their efforts closely with the regulatory agency. In addition, it is critical that DOD—the ultimate customer of NNSA’s tritium production program—is also kept informed of the challenges facing the program and the impact of these challenges on current and future availability of tritium for the nuclear weapons stockpile. NNSA’s Tritium Readiness Program has taken a number of steps to ensure the long-term availability of critical components needed for tritium production. We are concerned, however, that NNSA was unable to provide evidence that it adhered to the appropriate contracting procedures when purchasing components and services for the Tritium Readiness Program. In addition, the contract structure NNSA has put in place for the program in conjunction with lower than expected rates of tritium production has led the program to accumulate large amounts of unexpended funding. These large balances make it difficult for NNSA management and Congress to accurately determine the amount of funding the program actually requires, what the program is accomplishing with the appropriated funding, and how much could potentially be appropriated for other priorities. 
To increase confidence in the nation’s continued ability to produce a reliable supply of tritium in the future and to improve the management of NNSA’s Tritium Readiness Program, we recommend that the Secretary of Energy direct the Administrator of NNSA to take the following four actions:

In cooperation with TVA and NRC, develop a comprehensive plan to manage releases of tritium from TVA’s Watts Bar 1 and any other reactors chosen to irradiate TPBARs in the future.

Conduct a comprehensive analysis of alternatives to the current tritium production strategy in the event that NNSA continues to be unable to meet its tritium production goals. This alternatives analysis should be coordinated closely with DOD and take into account current and future nuclear weapons stockpile requirements for tritium.

Complete an acquisition strategy that reflects the outcome of the analysis of alternatives and aligns the contracting structure to that plan and, if necessary, ensures adherence to the appropriate contracting procedures for long-duration contracts.

Ensure NNSA’s future budget requests account for the large unexpended balances in the Tritium Readiness Program and better reflect the amount of funding the program is able to spend annually.

We provided NNSA, TVA, and NRC with a draft of this report for their review and comment. In its comments, NNSA generally agreed with the facts in the report and the recommendations. However, NNSA noted that, in its view, it has a high probability of meeting its tritium mission requirements without risk of using reserve inventories. In response to the draft report’s discussion of the Tritium Readiness Program’s TPBAR manufacturing contract with WesDyne, NNSA commented that the program’s unique contracting structure enables the program to leverage and maintain a commercial supply chain over a period of more than 40 years while providing some assurances of cost controls for the life of the contracts. Finally, NNSA noted that it provides responsible financial stewardship of government resources by adjusting future budget requests for changes in the Tritium Readiness Program planning requirements and risks. With regard to meeting tritium requirements, NNSA commented that irradiating 544 TPBARs in the Watts Bar 1 reactor per reactor fueling cycle until fiscal year 2016 will provide proof of NNSA’s ability to meet near-term requirements without using reserves. Our draft report discussed NNSA’s plans to increase the number of TPBARs being irradiated in the Watts Bar 1 reactor from 240 per fueling cycle to 544 per fueling cycle. However, it is important to note that NNSA’s plans have changed several times and are still subject to considerable uncertainty. In particular, NNSA’s original plans called for irradiating 1,160 TPBARs per fueling cycle by 2010 before ramping up to nearly 2,700 TPBARs per fueling cycle using both the Watts Bar 1 reactor and the Sequoyah 1 reactor. While we are encouraged that NNSA and TVA are working together to increase the number of TPBARs being irradiated, continued uncertainty about NNSA’s and TVA’s ability to irradiate additional TPBARs in a single reactor while not exceeding limits on the amount of tritium released into the environment raises doubts about the program’s ability to provide a reliable supply and predictable quantities of tritium over time. 
Regarding its TPBAR manufacturing contract with WesDyne, NNSA stated that it provided documentation of a solicitation review that was conducted as well as its explanation of its legal authority to enter into contracts with periods of performance in excess of 5 years. We modified our draft report to clarify that, although we agree that a review of the solicitation took place, the documentation of the review that NNSA provided to us contained no evidence that the long period of performance in this contract—a period of performance that NNSA agreed in its comments was unusually long—was considered as part of this solicitation review. Although NNSA asserts that it followed the appropriate procedures when approving a contract of this length, the procedures NNSA cited in its comments were not implemented until about 10 years after the contract with WesDyne was initially awarded. Finally, with regard to NNSA’s management of the Tritium Readiness Program’s finances, NNSA commented that it monitors its unexpended funding and meets quarterly with DOE to discuss and justify its unexpended balances. NNSA also stated that adjustments to its budget requests and refinements to its acquisition strategy will continue as part of its efforts to accommodate changes to the nuclear weapons stockpile. We are encouraged by NNSA’s pledge to adjust its budget requests in response to changes in program needs and by other actions NNSA is taking to reduce its unexpended funding balances. However, as our draft report notes, unexpended funding balances in excess of DOE’s threshold for unexpended funds have increased every year since fiscal year 2006 with the exception of fiscal year 2009, and NNSA estimates the program will exceed DOE’s threshold by approximately $50 million by the end of fiscal year 2010. In our view, these increases in unexpended funding call into question the effectiveness of NNSA’s monitoring of the program’s financial management. NNSA also provided technical comments that we incorporated as appropriate. NNSA’s comments are presented in appendix I. TVA commented that it shared our perspectives regarding the importance of NNSA’s ability to assure that the nuclear weapons stockpile requirements for tritium will be met in the future. TVA noted that it has been and continues to be dedicated to working with NNSA in evaluating and deciding among alternative approaches to help better assure that future tritium production will be at the necessary levels. TVA also provided technical comments that we incorporated as appropriate. TVA’s comments are presented in appendix II. In its comments, NRC agreed with our findings, conclusions, and recommendations. NRC also provided technical comments that we incorporated as appropriate. NRC’s comments are presented in appendix III. We are sending copies of this report to the appropriate congressional committees, Secretary of Energy, Administrator of NNSA, Chairman of NRC, President and Chief Executive Officer of TVA, Director of the Office of Management and Budget, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. In addition to the individual named above, Ryan T. 
Coles, Assistant Director; Allison Bawden; Will Horton; Jonathan Kucskar; Alison O’Neill; Tim Persons; Peter Ruedel; Ron Schwenn; and Rebecca Shea made key contributions to this report.
The National Nuclear Security Administration's (NNSA) Tritium Readiness Program aims to establish an assured domestic source of tritium--a key isotope used in nuclear weapons--in order to maintain the U.S. nuclear weapons stockpile. Because tritium decays at a rate of 5.5 percent annually, it must be periodically replenished in the stockpile. However, since 2003, NNSA's efforts to produce tritium have been hampered by technical challenges. In this context, GAO was asked to (1) determine the extent to which NNSA has been able to overcome technical challenges producing tritium, (2) determine the extent to which NNSA is able to meet current and future nuclear weapons stockpile requirements for tritium, and (3) assess the management of NNSA's Tritium Readiness Program. To do this, GAO visited facilities involved in tritium production and reviewed tritium requirements established by NNSA and the Department of Defense, among other things. NNSA has been unable to overcome the technical challenges it has experienced producing tritium. To produce tritium, stainless steel rods containing lithium aluminate and zirconium--called tritium-producing burnable absorber rods (TPBARs)--are irradiated in the Tennessee Valley Authority's (TVA) Watts Bar 1 commercial nuclear power reactor. Despite redesigns of several components within the TPBARs, tritium is still leaking--or "permeating"--out of the TPBARs into the reactor's coolant water at higher-than-expected rates. Because the quantities of tritium in the reactor coolant are approaching regulatory limits, TVA has been significantly restricting the number of TPBARs that it will allow NNSA to irradiate in each 18-month reactor fueling cycle, and, consequently, NNSA has not been producing as much tritium as it planned. NNSA and TVA officials are continuing to develop plans to increase the number of TPBARs that will be irradiated, as well as, if necessary, the number of reactors participating in the program. However, these plans have not been coordinated with the Nuclear Regulatory Commission (NRC), which ultimately must approve any changes to the operation of the TVA reactors. NNSA currently meets the nuclear weapons stockpile requirements for tritium, but its ability to do so in the future is in doubt. NNSA officials told us that they will be able to meet future requirements through a combination of harvesting tritium obtained from dismantled nuclear warheads and irradiating TPBARs. Although the number of nuclear weapons in the U.S. stockpile is decreasing, these reductions are unlikely to result in a significant decrease of tritium requirements and will not eliminate the need for a reliable source of new tritium because of the need to periodically replenish it in the remaining nuclear weapons stockpile due to tritium's decay. While NNSA has not, to date, been required to use tritium from a reserve that it maintains, use of this reserve in the relatively near future may be necessary if NNSA is unable to increase tritium production beyond its current level. Although NNSA has attempted to ensure a reliable long-term supply of tritium, GAO's review found two problems with NNSA's management of the Tritium Readiness Program. First, NNSA could not provide us with evidence that it adhered to the appropriate contracting procedures when purchasing components and services for the program. 
Second, due to, among other things, the way the program's contracts with its suppliers are structured, the program is spending its funds more slowly than planned and is accumulating large unexpended balances. The program is subject to thresholds, established by the Department of Energy, of acceptable levels of unexpended funds that may be carried over from one fiscal year to the next. However, the program exceeded these thresholds by more than $48 million in 2008 and by more than $39 million in 2009. While large unexpended balances are not necessarily an indication that the program is being mismanaged, they do indicate that the program is requesting more funding than it needs on an annual basis--funds that could be appropriated for other purposes. GAO recommends that NNSA develop a plan to manage tritium releases from reactors, analyze alternatives to its current tritium production strategy, ensure its contracting complies with appropriate contracting procedures, and ensure its future budget requests account for the program's large unexpended balances. NNSA generally agreed with our recommendations.
For over 30 years, the United States has relied on an all volunteer force to defend the nation at home and abroad. Before that, the nation relied on the draft to ensure that it had enough soldiers, sailors, Marines, and airmen in wartime. Since the September 11, 2001, terrorist attacks on the United States, DOD has launched three major operations requiring significant numbers of military servicemembers: Operation Noble Eagle, which covers military operations related to homeland security; Operation Enduring Freedom, which includes ongoing military operations in Afghanistan and certain other countries; and Operation Iraqi Freedom, which includes ongoing military operations in Iraq and the Persian Gulf area. These operations have greatly increased overseas deployments. Moreover, they are the first long-term major overseas combat missions since the advent of the all volunteer force in 1973. To ensure that sufficient forces are available for the services to accomplish their missions, Congress authorizes an annual year-end personnel level for each service component. To function effectively, the services must, among other things, access and retain officers at appropriate ranks and in the occupational specialties needed to enable their units to contribute to the services’ missions. The services rely on monetary and nonmonetary incentives, where needed, to meet their accession and retention needs. The careers of military officers are governed primarily by Title 10, which has incorporated the DOPMA legislation, giving the services the primary authority to recruit, train, and retain officers. Title 10 specifies the active duty and reserve service obligations for officers who join the military: graduates of the service academies must serve a minimum of 5 years on active duty and up to an additional 3 years on active duty or in the reserves; ROTC scholarship recipients must serve a minimum of 4 years on active duty and an additional 4 years on active duty or in the reserves; and other types of officers have varying service obligations (for example, pilots must serve 6 to 8 years on active duty, depending on the type of aircraft, and navigators and flight officers must serve 6 years on active duty). Similarly, Title 10 authorizes the services to directly commission medical specialists and other professionals to meet their needs. The services generally met most of their past needs for newly commissioned officers, but the Army faces some unique problems accessing enough officers to meet its needs and has not developed a strategic plan to address these challenges. The Marine Corps, Navy, and the Air Force generally met their needs for accessing newly commissioned officers in FYs 2001, 2003, and 2005. However, all services experienced problems recruiting enough medical professionals in FYs 2001, 2003, and 2005; and most had problems accessing racial and ethnic minorities to diversify their officer corps. Our analysis of documentary evidence confirmed the services’ reports that their accession programs generally met their officer needs in selected recent years, but each experienced some shortfalls in certain ranks and specialties. The services do not develop overall yearly goals for the total number of commissioned officers needed. Instead, they adjust the enrollment in OCS/OTS throughout the year to meet higher or lower than expected demands for newly commissioned officers by the various occupational specialty groups of importance to the service. 
The Army and the Marine Corps are increasing their numbers of newly commissioned officers because of their growing end strengths, whereas the Navy and the Air Force are accessing fewer officers because they are reducing their end strengths. The Army did not meet its overall accession needs for newly commissioned officers in FYs 2001 and 2003, though it met its needs in 2005. The Army has two distinct types of commissioned officers. Most officers are commissioned in its basic branches or specialty areas, such as infantry or signal, and are commissioned through major accession programs. The second type of officers are those who are directly commissioned, such as medical professionals. In FY 2001, the Army needed 4,100 officers in its basic branches but commissioned 3,791; in FY 2003, it needed 4,500 but commissioned 4,433. In FY 2005, it exceeded its goal of 4,600 and accessed 4,654 officers in its basic branches. During those years it was increasing the number of commissioned officers entering the service (see table 1). Specifically, the Army commissioned 5,540 officers in FY 2001, 5,929 in FY 2003, and 6,045 in FY 2005. In each of the examined fiscal years, the Army’s ROTC program accounted for around half of all newly commissioned officers, with nearly 1,000 of those officers being accessed annually into the Army despite not being awarded a scholarship. The Army increased total accessions from FY 2001 to FY 2005 by nearly doubling the number of officers commissioned through OCS. Our independent review and analysis of data and other materials from the commissioning sources found that the Army does not recruit officers to fill a specific specialty, and instead, officers are placed in general specialty areas based on the needs of the Army. Some general specialty areas are more popular than others, and the Army attempts to match an officer candidate’s preference to the needs of the Army. However, the service’s needs prevail, and some officers may be placed in specialty areas outside of their preferences if shortfalls are present. In contrast, the Marine Corps met its overall accession needs for newly commissioned officers for the examined fiscal years, while increasing the number of officers it commissioned in FY 2005 (see table 2). Increasing accessions by 241 from FY 2003 to FY 2005 represents about an 18 percent increase in the number of newly commissioned officers. Relative to the other services, the Marine Corps commissioned a larger percentage of its officers through programs other than the academy or ROTC program. For example, in FY 2005, 76 percent of the Marine Corps’s newly commissioned officers came from OCS or other sources. However, the Marine Corps has also been increasing the number of officers commissioned from USNA. The Marine Corps does not have a separate ROTC program and instead commissions officers through the Navy ROTC program. Our independent review and analysis of data and other materials from the commissioning sources and Marine Corps headquarters identified some areas where the Marine Corps was challenged to access newly commissioned officers for some occupational specialties. While Marine Corps officials stated that they were challenged in accessing enough naval flight officers because officer candidates were not familiar with the position (which involves assisting pilots with aircraft and weapons systems), the service still recruited the number it needed based upon our examination of the data. 
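As a simple illustration of how accessions compare with needs, the following sketch computes percent-of-goal figures from the Army basic branch numbers cited above. It is illustrative arithmetic only, not a metric the Army itself reports.

```python
# Illustrative percent-of-goal arithmetic for the Army's basic branch accessions,
# using the figures cited above. These are simple calculations, not Army metrics.

basic_branch_accessions = {
    # fiscal year: (officers needed, officers commissioned)
    2001: (4_100, 3_791),
    2003: (4_500, 4_433),
    2005: (4_600, 4_654),
}

for fiscal_year, (needed, commissioned) in basic_branch_accessions.items():
    pct_of_goal = commissioned / needed
    difference = commissioned - needed
    status = "exceeded goal" if difference >= 0 else f"{-difference} officers short"
    print(f"FY {fiscal_year}: commissioned {commissioned:,} of {needed:,} needed "
          f"({pct_of_goal:.1%}, {status})")
```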
The Navy also reported meeting its overall needs for commissioned officers during FYs 2001, 2003, and 2005. Since FY 2001, the total number of newly commissioned officers decreased from 4,784 to 3,506, a decline of nearly 27 percent (see table 3). A large portion of that decrease was accomplished by reducing the number of officers being commissioned through OCS, the program that can most easily and quickly be altered to reflect changing demands for producing commissioned officers. Despite generally meeting its overall accession needs for newly commissioned officers, the Navy experienced accession challenges in some specialty areas. Our independent review and analysis of data and other materials from the commissioning sources, Navy headquarters, and accession programs identified some areas where there were gaps between the numbers of newly commissioned officers needed and the numbers supplied to specialties by some of the commissioning programs. For example, USNA did not meet its quota for submarine officers in FY 2005, but other commissioning programs were able to compensate for the shortfall. Like the Marine Corps, the Navy faced a challenge in accessing enough naval flight officers, but the Navy met its overall need for newly commissioned officers by shifting the number of officers sent to that specialty by some commissioning sources. For example, Navy ROTC met its goal for naval flight officers in FY 2005 but not FY 2001 and FY 2003. The Navy’s OCS made up the difference in those years. According to Navy officials, some officers who may previously have gone into this specialty because of poor eyesight have their vision surgically corrected and instead become pilots. Like the Marine Corps and the Navy, the Air Force generally met its overall officer accession needs for FYs 2001, 2003, and 2005. As with the Navy, the Air Force decreased the number of newly commissioned officers in FY 2005 (see table 4). Specifically, the Air Force commissioned over 1,000 fewer officers in FY 2005 than it did in FY 2003, and it is working toward a plan to have about 9,000 fewer officers servicewide by FY 2011. The recent decrease in the number of newly commissioned Air Force officers was largely accomplished by commissioning fewer officers from OTS. Overall, the Air Force relied on its ROTC scholarship program for most of its officers and provided scholarships for the vast majority of the ROTC officer candidates. Despite meeting its overall needs for newly commissioned officers, the Air Force encountered challenges in some specialties. Our analyses and discussions with Air Force accessions officials identified air battle manager as an area where the Air Force has been challenged. USAFA expected to provide the Air Force with 10 air battle managers in FY 2005, but instead, three USAFA graduates became air battle managers. The other seven positions were filled by Air Force ROTC. All of the services have experienced problems accessing enough medical professionals, including physicians, medical students, dentists, and nurses. The Army, Navy (which supplies the Marine Corps), and Air Force provide direct commissions to medical professionals entering the service. Physicians. All of the services had difficulties meeting their accession needs for physicians (see table 5) in at least 2 of the 3 fiscal years that we examined. The Army and the Navy achieved 91 or more percent of their goals in each year studied, while the Air Force achieved 47 to 65 percent of its goal during the same 3 years. 
For each year, the Air Force had a higher goal than the other two services but accessed fewer physicians. Our review of the numbers of medical students participating in the services’ Health Professions Scholarship Program showed that additional physician-accession problems may appear in future years (see table 6). The services set their goals for awarding the scholarships based on their needs for fully trained medical professionals in the future. A medical student who accepts a scholarship will be commissioned into a military service upon completion of graduate school. While each service awarded scholarships to a sufficient number of the medical students who began their 4-year training in FY 2003 and will be ready for an officer commission upon graduation in FY 2007, the Army and Navy did not achieve their goals for awarding scholarships in FY 2005, and they may not access enough physicians in FY 2009. Dentists. Similar to the situation with physicians, the services have been challenged to access enough dentists in recent years (see table 7). No service met its goals for recruiting dentists in FYs 2001, 2003, or 2005. Both the Army and the Air Force, however, accessed more dentists in FY 2005 than they had 2 years before. Nurses. All of the services have struggled to access enough nurses (see table 8). Although the Navy exceeded its goal for accessing nurses in 2001, no service achieved its goal for any other period. In FY 2005, the services accessed a total of 738 of the 975 nurses (about 76 percent) that they needed. While some service officials have stated that medical professional recruiting is challenging because of concerns over overseas deployments, other service officials told us that it is also affected by the lack of income parity compared to the civilian sector. As part of the John Warner National Defense Authorization Act for Fiscal Year 2007, Congress approved an increase in the recruiting bonus for fully trained physicians and dentists, allowed the services to detail commissioned officers to attend medical school, extended the authority for undergraduate student loan repayment for medical professionals, increased the financial benefits students may receive as part of the Health Professions Scholarship Program, and required the services to report to Congress on this program and their success in meeting the scholarship program’s goals. Another step that DOD has taken to reduce the medical professional shortfalls is to convert uniformed medical positions to positions occupied by civilian medical professionals. In addition, DOD is considering asking for legislative authority to shorten the service commitment for medical professionals from the required 8 years of service on active or reserve duty, to encourage more medical professionals to join the military. However, these efforts have not yet been funded and their effect on medical recruiting is uncertain. All services had problems accessing newly commissioned minority officers to meet DOD’s goal of maintaining a racially and ethnically diverse officer corps. For every service, African Americans were a smaller percentage—by either 1 or 2 percentage points—of the accessed officers in FY 2005 than they were in FY 2003, but the representation of Asians/Pacific Islanders increased between the same two periods for every service except the Navy (see table 9). 
As points of comparison, we noted in a September 2005 report that the representation of African Americans in the officer corps DOD-wide was about 9 percent, as was the representation of African Americans in the college-educated workforce. Therefore, the percentages shown in the table indicate that only the Army met or exceeded the African-American DOD-wide and college-educated-workforce representation levels. Similarly, recruiting Hispanic officers has presented challenges to the services. In FY 2005, the Marine Corps accessed a higher percentage of Hispanic officers than the other services. While the Air Force accessed a lower percentage than the other services in each of the 2 fiscal years reported, it doubled its percentage of newly commissioned Hispanic officers from FY 2001 to FY 2003. However, this percentage of Hispanic officers accessed is smaller than the percentage of Hispanics in the United States at the time of the 2000 census (about 13 percent) and the percentage of Hispanics in the U.S. college population (about 9 percent). Some ambiguity is present in interpreting the findings for racial and ethnic groups because of limitations in the data. For example, the Air Force findings show large numbers of officers for whom some data were not available. Despite these data limitations, service officials explained that many of their challenges relate to the need for the services to recruit minority officers from the military-eligible segment of the college population. Navy and Air Force officials stated that their officer commissioning programs have more stringent entrance requirements than the other services and emphasize mathematics and science skills needed for the high-technology occupations found in their services. Officials from the commissioning programs in each service further noted that only a small segment of the African-American college population meets these entrance requirements. Each service operates a preparatory school in association with its academy to increase the pool of qualified applicants to enter its academy, giving primary consideration to enrolling enlisted personnel, minorities, women, and recruited athletes. Moreover, all officer commissioning programs, particularly the service academies, must compete with colleges and universities that do not require a postgraduation service commitment. In addition, USMA officials stated that citizenship status represented a barrier to improving the percentage of Hispanic officers. As of the 2000 census, 65 percent of Hispanics were U.S. citizens. While all of the services experienced some specialty- and diversity-related challenges in FYs 2001, 2003, and 2005, based on our review the Army faces some future officer accession problems not shared by the other services and has not developed and implemented a strategic plan to overcome these projected shortfalls. Our review, analyses, and discussions with Army officials indicated that the Army may struggle to meet its future accession needs. While all the services are contributing forces to operations in Iraq and Afghanistan, the Army is providing most of the forces for these operations. Other unique stressors on the Army’s commissioning programs include the expansion of the Army’s officer corps as part of the congressionally authorized 30,000-soldier increase to the Army end strength and the Army’s need for higher numbers of officers as part of its ongoing transformation effort to create more modular, quickly deployable units. 
Notwithstanding these needs for more officers, some of the Army’s commissioning programs are not commissioning as many officers as they had in past years and are commissioning fewer than the Army had expected. The Army’s current approach is to first focus on its ROTC program and academy to meet its officer accession needs, and then compensate for accession shortfalls in these programs by increasing OCS accessions. While Army OCS is currently meeting the Army’s needs, Army ROTC and USMA are not. Army ROTC, for example, experienced a decline in its number of participants. In FY 2006, the Army calculated that 25,089 students would participate in ROTC. In contrast, 31,765 students were involved in Army ROTC in FY 2003. Army officials stated that to meet their current mission they need at least 31,000 participants in the program. Moreover, the Army uses its ROTC program for commissioning both active and reserve officers. Although the goal is 4,500 newly commissioned officers (2,750 active and 1,750 reserve) from Army ROTC in both FYs 2006 and 2007, Army officials project that the program will fall short of the goal by 12 percent in FY 2006 and 16 percent in FY 2007. Furthermore, fewer officers may be commissioned from the Army’s ROTC program in the future because fewer scholarships have been awarded recently, which Army officials attribute to budget constraints. For example, in FY 2003, the Army ROTC program had 7,583 officer candidates with 4-year scholarships; in FY 2004, 7,234; in FY 2005, 6,004. Army ROTC officials stated that fewer 4-year scholarship recipients mean fewer newly commissioned officers in the future, since scholarship recipients are more likely to complete the program and receive their commission. Army ROTC officials believe that while negative attitudes toward Army ROTC are increasing on college campuses because of opposition to operations in Iraq, concerns about financing their education may make ROTC scholarships more attractive to officer candidates. In addition to challenges with its ROTC program, the Army has recently experienced difficulties commissioning officers through USMA, and projections for newly commissioned officers from USMA show that these difficulties may continue in the future. In FY 2005, USMA commissioned 912 officers, fewer than its mission of 950 officers. Similarly, USMA’s class that graduated in FY 2006 commissioned 846 graduates, short of the Army’s goal of 900. While the number of officer candidates who successfully complete the 4-year program at USMA varies, according to USMA data, 71 percent of those who began the program in 2002 completed it in 2006 and received their commission. In contrast, in both FY 2001 and in FY 2003, 76 percent of those who began their course of study 4 years earlier completed the program and commissioned into the Army; and in FY 2005, 77 percent. USMA officials told us that the smaller graduating class in FY 2006 may be the result of ongoing operations in Iraq. The class, which will graduate in 2010, should have an additional 100 officer candidates to help address recent shortfalls; however, USMA officials indicated that facilities and staff limit additional increases. Commissioning shortfalls at USMA and in the Army ROTC program, as well as the Army’s need to expand its new officer corps, have required OCS to rapidly increase the number of officers it commissions; however, its ability to annually produce more officers is uncertain. 
In FY 2006, OCS was required to produce 1,420 officers, and in FY 2007, the Army’s goal for OCS is to commission 1,650 officers, more than double the number it produced in FY 2001. OCS program officials stated that without increases in resources and support such as additional housing and classroom space, OCS cannot produce more than 1,650 officers, its FY 2007 goal, limiting the viability of this approach. Additionally, the Army’s officer accession programs are decentralized and lack any sort of formal coordination, which prevents the Army from effectively compensating for shortfalls in some of its officer accession programs. USMA does not directly report to the same higher-level command as ROTC or OCS. While ROTC and OCS both report to the same overall authority, they do not formally coordinate with one another or with USMA. For example, the Army does not coordinate recruiting and accession efforts to ensure that accession programs meet Army accessions goals, nor does it use risk analysis to manage resource allocations among the programs. USMA relies on its own full-time recruiters and Military Academy Liaison Officers—reservists, retirees, and alumni who meet with possible academy recruits and hold meetings to provide information to students. Officials from Army Cadet Command, which does not coordinate recruiting efforts with USMA, stated that Army ROTC has a limited advertising budget that focuses on print media, brochures, and local print media. In addition, as we previously discussed, Army ROTC has experienced a decrease in its scholarship funding while the Army’s need for its graduates has increased, but the Army has not conducted a risk-based analysis of resource allocations to Army officer accession programs. Shortfalls in Army officer accessions have been compounded by the decentralized management structure for the officer accessions programs, and the Army does not have a strategic plan to overcome these challenges. Army personnel officials set a goal for each commissioning program. While those officials attempt to ensure that any commissioning shortfalls (program outputs) are covered by other commissioning programs such as OCS, the Army does not coordinate the recruiting efforts of its various commissioning programs (the input to these programs) to ensure that officer accession programs meet overall Army needs. While the Army has identified a number of options to increase officer accessions, it does not have a strategic plan for managing its shrinking accessions pipeline at a time when the force is expanding and its needs for commissioned officers are increasing. The Government Performance and Results Act of 1993 (GPRA) and Standards for Internal Control in the Federal Government provide federal agencies with a results-oriented framework that includes developing a strategic plan. According to GPRA, a strategic plan should include outcome-related goals and objectives. Moreover, the Standards emphasize the need for identifying and analyzing potential risks that could slow progress in achieving goals. This risk assessment can form the basis for determining procedures for mitigating risks. The Army recognizes that offering more scholarships could improve its ROTC program accessions and has proposed increasing available scholarships. However, this is not part of a broader strategic plan that would realign resources to better meet the Army’s officer accession needs and minimize risk. 
Given the decentralized management of the officer accession programs and the absence of a strategic plan that identifies goals, risks, and resources to mitigate officer shortfalls, the Army’s ability to meet future mission requirements is uncertain. While most of the services generally met their past officer retention needs, the Army faces multiple retention challenges. The Army has experienced decreased retention among officers early in their careers, particularly among junior officers who graduated from USMA or received ROTC scholarships. Moreover, the Army is experiencing a shortfall of mid-level officers because it commissioned fewer officers 10 years ago due to a post-Cold War reduction in both force size and officer accessions. Despite these emerging problems, the Army has not performed an analysis that would identify and analyze risks of near-term retention problems to determine resource priorities. Although the other services generally met their past retention needs, each faces challenges retaining officers in certain ranks or specialties. Furthermore, each of the services had high continuation rates among African American and Hispanic officers, but each faces challenges retaining female officers. The Army has encountered retention challenges in the last few years, but the other services are generally retaining sufficient numbers of officers in the fiscal years that we examined. Overall, the Army has experienced decreased retention among officers early in their careers, particularly junior officers who graduated from USMA or received ROTC scholarships. Additionally, the Army is currently experiencing a shortfall of mid-level officers and has shortages within certain specialty areas. It is examining a number of initiatives to improve the retention of its officers, but these initiatives are not currently funded or will not affect officer retention until at least FY 2009. Moreover, the Army does not have a strategic plan to address these retention challenges. The Army has experienced multiple retention problems in recent years for officers commissioned through USMA and the ROTC scholarship program and for some occupational specialties despite retaining lieutenants and captains in FY 2006 at or above its 10-year Army-wide average. Our comparisons of the Army continuation rates shown in table 10 to those presented later for each of the other services revealed, first, that the USMA continuation rates of 68 percent for FY 2001 and 62 percent for FY 2005 were 20 to 30 percentage points lower than the other academies’ continuation rates for the same fiscal years. Caution is needed, however, when interpreting cross-service findings because USNA and USAFA produce a large number of pilots who incur additional obligations that may not allow many of those officers to leave until 8 or more years of service have been completed. Second, a comparison of the Army’s FY 2001 and FY 2005 continuation rates for ROTC scholarship officers showed that rates decreased by 3 percentage points at years 4 and 5. Our review of the continuation rates in table 10 also revealed three other notable patterns. First, the total continuation rate for FY 2003 was higher than the rate for the other 2 years, reflecting the stop-loss policy that prevented officers from leaving the Army. 
Second, for each source and fiscal year, the lowest continuation rate for a commissioning source typically came in the first year that officers were eligible to leave the military—for example, year 5 for USMA and year 4 for ROTC scholarship. Third, since (1) the ROTC scholarship program produces more officers than any other commissioning source and (2) scholarship officers are eligible to leave the Army at year 4, that year of service had the lowest or next lowest total continuation rate for all 3 of the fiscal years that we examined. The Congressional Research Service reported that Army projections show that its officer shortage will be approximately 3,000 line officers in FY 2007, grow to about 3,700 officers in FY 2008, and continue at an annual level of 3,000 or more through FY 2013. For example, the Army FY 2008 projected shortage includes 364 lieutenant colonels, 2,554 majors, and 798 captains who entered in FYs 1991 through 2002. The criteria that the Army uses to determine its retention needs are personnel-fill rates for positions, based on officers’ rank and specialty. In addition to the general problem of not having enough officers to fill all of its positions, the Army is promoting some junior officers faster than it has in the recent past and therefore not allowing junior officers as much time to master their duties and responsibilities at the captain rank. For example, the Army has reduced the promotion time to the rank of captain (O-3) from the historical average of 42 months from commissioning to the current average of 38 months and has promoted 98 percent of eligible first lieutenants (O-2), which is more than the service’s goal of 90 percent. Likewise, the Army has reduced the promotion time to the rank of major (O-4) from 11 years to 10 years and has promoted 97 percent of eligible captains to major—more than the Army’s goal. Also, the Army is experiencing a large shortfall at the rank of major, and the shortfall affects a wide range of branches. For FY 2007, the Army projects that it will have 83 percent of the total number of majors that it needs. Table 11 shows that the positions for majors in 14 Army general specialty areas (termed branches by the Army) will be filled at 85 percent or less in FY 2007—a level that the Army terms a critical shortfall. Numerous factors may have contributed to the retention challenges facing the Army. Among other things, Army officials noted that some of the shortfalls originated in the post-Cold War reduction in forces and accessions. Although Congress has increased the authorized end strength of the Army by 30,000 since FY 2004 to help the Army meet its many missions, expanding the mid-level officer corps could prove problematic since it will require retaining proportionally more of the officers currently in the service, as well as overcoming the officer accession hurdles that we identified earlier. Unlike civilian organizations, the Army requires that almost all of its leaders enter at the most junior level (O-1) and earn promotions from within the organization. Additionally, as part of our September 2005 report, the Office of Military Personnel Policy acknowledged that retention may have suffered because of an improving civilian labor market and the high pace of operations. Army officers may have already completed multiple deployments in Iraq and Afghanistan since the Army is the service providing the majority of the personnel for those operations. 
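The fill-rate measure underlying table 11 can be illustrated with a short sketch. The 85 percent critical-shortfall cutoff and the FY 2007 projection that the Army will have 83 percent of the majors it needs come from this report; the authorized and assigned counts below are hypothetical examples, not Army data.

```python
# Illustration of the fill-rate measure the Army uses to gauge officer shortfalls.
# The 85 percent "critical shortfall" cutoff and the ~83 percent FY 2007 projection
# for majors come from this report; the counts below are hypothetical.

CRITICAL_SHORTFALL_CUTOFF = 0.85   # fill at or below 85% is termed a critical shortfall

def fill_rate(assigned: int, authorized: int) -> float:
    """Share of authorized positions that are actually filled."""
    return assigned / authorized

def is_critical_shortfall(assigned: int, authorized: int) -> bool:
    return fill_rate(assigned, authorized) <= CRITICAL_SHORTFALL_CUTOFF

examples = {
    "majors, FY 2007 projection": (830, 1_000),   # ~83 percent fill, per the report
    "hypothetical branch A":      (900, 1_000),   # 90 percent fill, above the cutoff
}
for label, (assigned, authorized) in examples.items():
    rate = fill_rate(assigned, authorized)
    flag = "critical shortfall" if is_critical_shortfall(assigned, authorized) else "above cutoff"
    print(f"{label}: {rate:.0%} filled ({flag})")
```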
Another reason why the Army may be having more difficulty than other services in retaining its officers could be related to its use of continuation pays and incentives. Table 12 shows that the Army spent less than any other service in FY 2005 on retention-related pays and incentives for officers. While the Army has identified some steps that it needs to take in order to improve officer retention, the actions that have been implemented will have no immediate effect on retention. The Army has begun guaranteeing entering officers their postcommission choice of general specialty area (branch), installation, or the prospect of graduate school to encourage retention. A number of Army officers commissioned in FY 2006 took advantage of this initiative, and as a result, have a longer active duty service obligation. For example, as of May 2006, 238 academy graduates accepted the offer of a longer service obligation in exchange for the Army paying for them to attend graduate school. Although the Army believes that these initiatives will help address future retention problems, none will affect continuation rates until 2009 at the earliest because servicemembers are obligated to stay in the Army for at least 3 years. The more immediate retention challenge for the Army is keeping officers with 3, 4, or 5 years of service, as we have identified in this report. However, these officers are not affected by these initiatives. While the Army staff reported that they are exploring numerous options for addressing officer retention shortfalls, Army leadership has not identified which options will be funded and implemented. As noted earlier in this report, GPRA and the Standards for Internal Control in the Federal Government provide a basis for developing a results-oriented strategic plan. Moreover, GAO’s guidance for implementing a results-oriented strategic plan highlights the importance of identifying long-term goals and including the approaches or strategies needed to meet these goals. Without a plan to address both its accession and retention challenges, the Army will not have the information and tools it needs to effectively and efficiently improve its retention of officers in both the near term and beyond. The Marine Corps, Navy, and Air Force generally met their retention needs and had higher continuation rates from their major accession programs than did the Army. While the Navy and Air Force are currently undergoing force reductions that will decrease the size of their officer corps, all three services face officer retention challenges in certain ranks and specialties. The Marine Corps was able to meet its overall retention needs for FYs 2001, 2003, and 2005 by generally retaining more than 9 of every 10 officers at the four career-continuation points that we examined. Except for the 4-year career mark, our analysis showed that the Marine Corps’s total continuation rates for all 3 fiscal years typically exceeded 90 percent (see table 13). Officers who graduated from USNA had the lowest continuation rates at the end of their fifth year of service, coinciding with the minimum active duty service obligation for that commissioning source. Likewise, officers from ROTC scholarship programs had lower continuation rates at the end of year 4. For example, in FY 2003, the continuation rate was 67 percent; and in FY 2005, it was 79 percent. With a few exceptions, the Marine Corps met its retention needs and was able to fill critical specialties and ranks. 
We found that the Marine Corps was either falling short of or just meeting its goal for fixed wing aviators (such as the junior officer level for the KC-130 tactical airlift airplane commanders and the AV-8 Harrier attack aircraft), rotary wing officers (at the junior officer level for all rotary wing occupations except one), and mid-level and senior intelligence, administrative, and communications officers in past fiscal years. Additional problems emerged when we examined FY 2006 continuation data. Although the FY 2006 continuation rate averaged about 92 percent—excluding the fixed and rotary wing communities—the Marine Corps experienced lower than normal retention among combat support officers (such as administrative and financial management officers) and combat arms officers (such as infantry, field artillery, and tank officers), as well as communications, logistics, and human source intelligence officers. However, FY 2007 projections for these categories of jobs averaged about a 90 percent continuation rate, excluding fixed wing and rotary wing communities. While the Navy generally retained sufficient numbers of officers in FYs 2001, 2003, and 2005, Navy officials and our independent review of documents revealed some areas of concern that were not readily apparent solely by reviewing the continuation rates for the total Navy and officers entering through each commissioning program. The continuation rate among Navy junior officers commissioned from USNA or OCS was 90 percent or better in years 3, 4, and 5 of service for all 3 fiscal years studied (see table 14). However, officers commissioned from the Navy ROTC scholarship program had lower continuation rates at the end of 4 and 5 years of service, coinciding with their minimum active duty service obligation. Additionally, the Navy experienced lower continuation rates among officers, both overall and from each of the training programs, after 10 years of service. This lower rate at the 10-year career point may be partially explained by the fact that pilots incur additional service obligations that may not allow them to leave until they have completed 8 or more years of service. The Navy’s potential future retention challenges may be eased by the flexibility that the Navy gains from not having to retain officers in some specialties at traditional rates since it is going through downsizing. However, our discussions with the officials who manage the Navy general specialty areas (termed officer communities by the Navy) and our independent analyses of retention documents revealed that the medical, dental, surface warfare, and intelligence communities are experiencing junior officer losses, which can later exacerbate mid-level shortfalls. Moreover, several managers of general specialty areas indicated that they were concerned about using individual Navy officers (rather than Navy units) to augment Army and Marine Corps units. The managers were unable to estimate the effect of such individual augmentee assignments on officer retention. These deployments are longer than the Navy’s traditional 6-month deployments and sometimes occur after officers have completed their shipboard deployment and are expecting their next assignment to be ashore with their families. Our review of documents for FYs 2001, 2003, and 2005, as well as our discussions with Air Force officials, identified no major past retention problems. Except for the year 3 and 4 career points in FY 2001, the Air Force total continuation rates were 90 percent or higher (see table 15). 
The Air Force is reducing the size of its officer corps through a planned downsizing. In FY 2006, the Air Force reduced its force by about 1,700 junior officer positions. By 2011, the Air Force plans to complete an approximate 13 percent reduction in the number of its officers, totaling approximately 9,200 officers. The Air Force plans to accomplish the downsizing through the use of force shaping tools such as selective early retirement, voluntary separation pay, and other measures. Despite the need to retain fewer officers, the Air Force anticipates shortages in three specialty areas—control and recovery officers who specialize in recovering aircrews who have abandoned their aircraft during operational flights, physicians, and dentists. Staffing levels for these three specialties are just below 85 percent. While the services did well retaining African American and Hispanic officers, they did not do as well retaining women. The services want to retain a diverse, experienced officer corps to reflect applicable groups in the nation’s population. For the fiscal years and career points that we examined, African American and Hispanic officers usually had higher continuation rates than white and non-Hispanic officers, respectively; but female officers more often had lower continuation rates than male officers. When we compared the continuation rate of African American officers to that of white officers for a specific fiscal year and career point, our analyses found that the services were typically retaining African Americans at an equal or a higher rate than whites (see table 16). At one extreme, 11 of the 12 comparisons (all except for the FY 2003 3-year point) for Army officers showed equal or higher rates for African American officers. Similarly, 8 of the 12 comparisons for both the Navy and the Marine Corps, as well as 6 of the 12 Air Force comparisons, showed the same pattern. Likewise, our analysis showed that the services were typically retaining Hispanic officers better than non-Hispanic officers (see table 17). In all 12 comparisons of the two groups of Army officers at the four career points in the 3 fiscal years, the continuation rates for Hispanic officers were equal to or higher than those for non-Hispanic officers. For 9 of the 12 Navy-based comparisons and 5 of the 12 Marine Corps-based comparisons, the same pattern was present. While the Air Force supplied information on Hispanic and non-Hispanic continuation rates for only FY 2005, the same pattern occurred for 3 of the 4 comparisons. In contrast, our analyses showed that all services encountered challenges retaining female officers. In 11 out of 12 comparisons for both the Army and Navy, our analysis found that male officers continued their active duty service at a higher rate than female officers (see table 18). For 10 of the 12 Air Force-based comparisons and 6 of the 12 Marine Corps-based comparisons, the same pattern was present. Furthermore, each service generally experienced lower continuation rates among its female officers compared with male officers at years 3, 4, and 5 of service. For example, overall, the Navy had the greatest difference in continuation rates between male and female officers who reached years 4 and 5 of service for all fiscal years studied; female officers averaged at least a 9 percentage point lower continuation rate than male officers. 
Similarly, continuation rates among female Air Force officers averaged almost 7 percentage points lower than the rate for male Air Force officers; among Army female officers, almost 6 percentage points; and among Marine Corps female officers, almost 4 percentage points. Retaining women may be particularly challenging in certain occupational specialties. For example, Navy officials explained that some female surface warfare officers do not view service as a surface warfare officer as compatible with family life and have much less incentive to stay in the Navy even when offered a continuation bonus. DOD officials stated that the behavior of women differs from that of men because of family considerations, and they said it is not surprising that women have different retention patterns and behavior than men. Retaining female officers at lower rates than male officers in these critical years may result in negative consequences such as having a less diverse cadre of leaders. We have previously reported that DOD has responded positively to most demographic changes by incorporating a number of family-friendly benefits; however, opportunities exist to improve current benefits in this area. DOD and the services are taking steps to enhance the foreign language proficiency of junior officers, but many impediments must be overcome to achieve the language objectives that DOD has laid out for them. For example, to address DOD’s foreign language objectives, the service academies have requested additional funding and teaching positions to improve foreign language training for officer candidates at the academies. However, time demands on officer candidates, the inability to control foreign language curricula at ROTC colleges, hurdles in providing language training after commissioning, and problems in maintaining language skills among officers pose challenges to the services in developing a broader linguistic capacity. DOD has issued guidance and the services have developed plans to achieve greater foreign language capabilities and cultural understanding among officers. In February 2005, DOD published its Defense Language Transformation Roadmap, which stated, among other things, that post-September 11, 2001, military operations reinforce the reality that DOD needs to significantly improve its capability in emerging strategic languages and dialects. In July 2005, the Principal Deputy in OUSD (P&R) issued a memorandum that required the services’ assistant secretaries for manpower and reserve affairs and their deputies to develop plans to achieve 2 of the Roadmap’s 43 objectives: develop a recruiting plan for attracting university students with foreign language skills and establish a requirement that junior officers complete added language training by 2013. Specifically, the OUSD (P&R) memo stated that (1) 80 percent of junior officers (O-1 and O-2) will have a demonstrated proficiency in a foreign language by achieving Interagency Language Roundtable Level 1+ proficiency; and (2) 25 percent of commissioned officers (“non-foreign area officers”) will have a Level 2 proficiency in a strategic language other than Spanish or French, with related regional knowledge. The February 2006 Quadrennial Defense Review went further, recommending, among other things, required language training for service academy and ROTC scholarship students and expanded immersion programs and semester-abroad study opportunities. 
In response to the 2005 OUSD (P&R) memo and the department’s language objectives, the Marine Corps developed a foreign language training plan that discussed the costs of achieving the two objectives and offered an alternative proposal for planning, implementing, facilitating, and maintaining foreign language and cultural skills of Marine officers and enlisted personnel. Other services are still drafting their responses to the OUSD (P&R) memo and DOD’s other language objectives for officers. In addition, the service academies have requested additional funding and positions to expand the foreign language training offered to their officer candidates. USMA already requires all its officer candidates to take two semesters of a language as part of their core curriculum. Beginning with the class that entered in 2005 and will graduate in 2009, USMA will require its officer candidates who select humanities or social science majors to add a third, and possibly a fourth, semester of foreign language study. USMA is also expanding its summer immersion, exchange, and semester-abroad programs in FY 2007 to give more officer candidates exposure to foreign languages and cultural programs. Within the next year, USNA plans to expand the foreign language and cultural opportunities available to its officer candidates by developing foreign language and regional studies majors, adding 12 new regional studies instructors in the political science department, and adding 12 new language instructors in critical languages such as Arabic and Chinese. Starting with the class that will enter in 2007 and graduate in 2011, USAFA will require certain majors to study four semesters of a foreign language. This change will affect about half of the academy’s officer candidates. The rest—primarily those in technical majors like engineering and the sciences—will take at least two semesters of foreign language, though they currently have no foreign language requirement. Some service officials, particularly those associated with commissioning programs, have identified many impediments that could affect future progress toward the foreign language objectives identified by DOD. These impediments include the following: Time demands on officer candidates. Some academy and ROTC program officials expressed concerns about adding demands on the officer candidates’ time by requiring more foreign language credits. Each academy requires its officer candidates to complete at least 137 semester credit hours, in contrast to the approximately 120 semester hours required to graduate from many other colleges. Reductions in technical coursework to compensate for increases in language coursework could jeopardize the accreditation of technical degree programs at the academies. Similarly, some officer candidates in ROTC programs may already be required to complete more hours than their nonmilitary peers. At some colleges, officer candidates may be allowed to count their ROTC courses as electives only. Academy and ROTC officer candidates in engineering and other technical majors may find it difficult to add hours for additional foreign language requirements since accreditation standards already result in students in civilian colleges often needing 5 years to complete graduation requirements. Lack of control over ROTC officer candidates’ foreign language curricula. 
While one of the objectives outlined by the Principal Deputy of OUSD(P&R) indicated that 25 percent of commissioned officers (non-foreign area officers) will have a Level 2 proficiency in a strategic language other than Spanish or French, ROTC programs do not have control over the languages offered at the colleges where their officer candidates attend classes. For example, out of nearly 761 host and partner Army ROTC colleges, the Army states that only 12 offer Arabic, 44 offer Chinese, and 1 offers Persian Farsi, all languages deemed critical to U.S. national security. Even if the ROTC programs could influence the foreign languages offered, additional impediments include finding qualified instructors and adapting to annual changes to DOD’s list of strategic languages. Moreover, if an officer candidate in ROTC or one of the academies takes a language in college based on DOD’s needs at that time, the language may no longer be judged strategic later in the officer’s career. For example, DOD operations in the Caribbean created a need for Haitian Creole speakers in the 1990s; however, that language may not be as strategic today because of changing operational needs. Language training expensive after commissioning. While language training after commissioning may appear to be an alternative step to help the services achieve DOD’s foreign language objectives, the Marine Corps identified significant costs associated with providing language training after commissioning. Unlike the other services, the Marine Corps obtains the vast majority of its officers through OCS or other, nonacademic sources. The Marine Corps estimated that it would need an end strength increase of 851 officers in order to extend its basic 6-month school of instruction by another 6 months and achieve Level 1+ foreign language proficiency for 80 percent of its junior officers, a stated goal in the OUSD (P&R) memo. It also estimated a one-time $150 million cost for military construction plus $115 million annually: $94.1 million for additional end strength and $21 million for training costs. The estimates for achieving the 25 percent goal for Level 2 proficiency totaled an additional $163 million, largely because of the $104 million associated with an end strength increase of 944 officers. Maintaining foreign language proficiency throughout an officer’s career. Although DOD offers online tools for language maintenance, our prior work has shown the difficulties of maintaining foreign language capabilities. We noted that DOD linguists experienced a decline (of up to 25 percent in some cases) in foreign language proficiency when they were in technical training to develop their nonlanguage skills (such as equipment operation and military procedures). Proficiency could decline if officers do not have an opportunity to use their language skills between the times when they complete their training and are assigned to situations where they can use their skills. Additional foreign language requirements could also have a negative effect on recruiting for the officer commissioning programs. Army, Marine Corps, and Air Force officials expressed concern that the new foreign language requirement may deter otherwise-qualified individuals from entering the military because they do not have an interest in or an aptitude for foreign languages. 
Service officials also stated that requiring additional academic credits for language study beyond the credits required for military science courses could be problematic, particularly for nonscholarship ROTC officer candidates who are not receiving a financial incentive for participating in officer training. Since at least 63 percent of the Army’s current ROTC officer candidates are not on an ROTC scholarship, officials said that increasing the language requirement could make it more difficult to reach recruiting and accession goals as well as the objective of having 80 percent of junior officers with a minimal foreign language proficiency. At the same time, our recent reports raised concerns about foreign language proficiency in DOD and other federal agencies such as the Department of State. Service officials recognize the impediments to foreign language training and are developing plans to implement DOD’s initiatives. Since many of these problem-identification and action-planning efforts began in the last 2 years, it is still too early to determine how successful the services will be in implementing the foreign language and cultural goals outlined in DOD documents such as the Defense Language Transformation Roadmap and the Quadrennial Defense Review; therefore, we believe that it would be premature to make any specific recommendations. While all of the services are challenged to recruit, access, and retain certain types of officers, the Army is facing the greatest challenge. Frequent deployments, an expanding overall force, and a variety of other factors present Army officials with an environment that has made accessing and retaining officers difficult using their traditional management approaches. Moreover, delays in addressing its officer accession and retention shortages could slow the service’s implementation of planned transformation goals, such as reorganizing its force into more modular and deployable units, which require more junior and mid-level officers than in the past. Although the Army has begun to implement some steps that could help with its long-term officer needs, accessing and retaining enough officers with the right specialties are critical issues. Moreover, the limited coordination among the Army’s officer accession programs presents another hurdle in effectively addressing attrition rates at USMA, student participation in ROTC, and resource constraints for OCS. Similarly, the Army has not performed an analysis that would identify and analyze potential risks of continuing retention problems in the near term in order to determine priorities for allocating its resources. Without a strategic plan for addressing its officer shortages, the Army may be unable to effectively and efficiently set goals, analyze risks, and allocate resources, which could jeopardize its ability to achieve future mission requirements. In order for the Army to maintain sufficient numbers of officers at the needed ranks and specialties, we recommend that the Secretary of Defense direct the Secretary of the Army to develop and implement a strategic plan that addresses the Army’s current and projected accession and retention shortfalls. 
Actions taken in developing this plan should include developing an overall annual accession goal to supplement specialty-specific goals in order to facilitate better long-term planning, performing an analysis to identify risks associated with accession and retention shortfalls and develop procedures for managing the risks, and making decisions on how resources should best be allocated to balance near- and long-term officer shortfalls. In written comments on a draft of this report, DOD partially concurred with our recommendation. DOD’s comments are included in this report as appendix II. DOD partially concurred with our recommendation to develop and implement a strategic plan that addresses the Army’s current and projected officer accession and retention shortfalls. DOD agreed that the Army does not have a strategic plan dedicated to current and projected officer accessions and retention. DOD said, however, that the Army performs analyses, identifies risk, develops procedures to mitigate risks, and performs other tasks associated with its strategy and planning process for officer accessions and retention. We recognize that these are important tasks; however, they are not sufficient to correct the Army’s current and future officer accession and retention problems for the following reasons. First, as noted in our report, these tasks are fragmented, administered in a decentralized manner across multiple Army offices, and lack the integrated, long-term perspective that is needed to deal with the Army’s current officer shortfalls and future challenges. A more strategic, integrated approach would allow the Army to (1) establish long-term, outcome-related program goals as well as integrated strategies and approaches to achieve these goals and (2) effectively and efficiently manage and allocate the resources needed to achieve these goals. Second, some of these tasks are not fully developed. For example, the Army’s procedures for mitigating risk did not address important considerations such as the short- and long-term consequences of not implementing an option and an analysis of how various options could be integrated to maximize the Army’s efforts. Third, with regard to funding, a key element in strategic planning, Army officials indicated that they hope to use supplemental funding to address some of the challenges that we identified, but they also acknowledged that supplemental funding may be curtailed. In recent reports, we too noted our belief that supplemental funding is not a reliable means for decision-makers to use in effectively and efficiently planning for future resource needs, weighing priorities, and assessing tradeoffs. Considering all of the limitations that we have identified in the Army’s current approach, we continue to believe that our recommendation has merit and that an integrated and comprehensive strategic plan is needed. DOD mischaracterized our findings when it indicated that our report (1) asserted that Army officer accessions and retention are down and (2) implied that recent decreases in accessions or retention have caused the challenges. On the contrary, our report discussed many factors that contributed to the Army’s officer-related staffing challenges and provided data that even showed, for example, an increase in accessions from FY 2001 to FY 2003 and FY 2005. The first table of our report showed that the Army commissioned 6,045 officers in FY 2005, an increase of 505 from FY 2001 and an increase of 116 from FY 2003. 
Also, our report provides a context for readers to understand that these increases in accessions would still leave the Army short of officers because of new demands for more officers. Among other things, a larger officer corps is needed to lead a larger active duty force and the reorganization of the force into more modular and deployable units. With regard to retention, our report does not state that overall retention is down. Instead, we document retention by commissioning source, occupation, and pay grade, which revealed shortages that were not readily apparent at the aggregate level. Our report shows that the Army has experienced decreased retention among officers early in their careers, particularly among junior officers who graduated from USMA or received Army ROTC scholarships. Table 11 of our report makes the point by showing which types of occupations were over- and underfilled for officers at the rank of major. We show, for example, that positions in infantry (an occupational group with a large number of officer positions) were overfilled (107 percent), but positions in numerous other occupational groups, such as military intelligence (73 percent), were underfilled. Moreover, as with accessions, as the Army grows, it will be required to retain officers at higher than average percentages in order to fill higher pay grades. DOD also provided technical comments that we have incorporated in this report where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time we will provide copies of this report to interested congressional committees and the Secretary of Defense. We will also make copies available to others upon request. This report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or other members of the committee have any additional questions about officer recruiting, retention, or language training issues, please contact me at (202) 512-5559 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to the report are listed in appendix III. We limited the scope of our work to the four active duty Department of Defense (DOD) services: Army, Navy, Marine Corps, and Air Force. Also, we examined data for fiscal years 2001, 2003, and 2005 as well as projections for the current year (FY 2006 when we began our work) and future years. FY 2001 data represented the situations present immediately before the terrorist events of September 11, 2001; and FY 2005 data represented the most recent fiscal year for which the services had complete data. FY 2003 data provided information on interim conditions and allowed us to examine the data for trends. To determine the extent to which the services are accessing the numbers and types of commissioned officers required to meet their needs, we reviewed laws and DOD-wide and service-specific officer-management guidance, including Title 10 of the U.S. Code, which contains provisions originally enacted as part of the Defense Officer Personnel Management Act (DOPMA); defense authorization acts; the 2006 Quadrennial Defense Review; and policies and directives. 
To gain a firm background on the origin and evolution of the all volunteer force, we studied information in books on the all volunteer force as well as information published by GAO, DOD, Congressional Research Service, Congressional Budget Office, and other organizations such as RAND. We reviewed documents from and obtained the perspectives of officials in the Office of the Under Secretary of Defense for Personnel and Readiness (OUSD (P&R)), services’ headquarters, services’ personnel and manpower commands, service academies, Reserve Officer Training Corps commands, and Officer Candidate Schools and Officer Training School commands (see table 19). The documents and meetings with officials allowed us to obtain an integrated understanding of recruitment and accession procedures, the availability of newly commissioned officers to fill positions in the military services, and potential causes and effects of any gaps between the numbers of officers available and the numbers of positions to be filled. We obtained and analyzed accessions and continuation data from DOD’s Defense Manpower Data Center, but our assessment of the data’s reliability identified errors severe enough to prevent those data from being used for this report. As a result, we subsequently obtained accession and continuation information from the services. While we did not conduct independent analyses using the services’ databases, we did assess the reliability of their data through interviews and reviewing relevant documentation on service-specific databases. Comparisons of service-provided rates with similar information from other sources—such as information on the number of officers commissioned from USMA—suggested that the service-provided rates were sufficiently reliable for the purposes of this report. Specifically, we examined information showing the numbers of officers commissioned from the services’ officer programs during FYs 2001, 2003, and 2005 for trends and other patterns and compared the numbers of officers accessed to staffing needs. We performed these comparisons with consideration for the specialty, race, ethnicity, and gender of the officers. To assess the extent to which the services are retaining the numbers and types of officers they need, we reviewed laws and DOD-wide and service-specific policies and directives to gain a comprehensive understanding of officer retention. To gain a firm background on officer retention, we examined reports and studies by GAO, DOD, Congressional Research Service, Congressional Budget Office, and other organizations such as RAND. Additionally, we met with a number of DOD officials located at the services’ personnel directorates to obtain an understanding of officer retention missions, goals, historical trends, and projected forecasts for each service. We worked with DOD and service officials to identify differences in the metrics that each service uses to assess retention success, to review proposed initiatives for enhancing officer retention, and to address downsizing efforts. We analyzed documents from and obtained the perspectives of officials in the services’ headquarters, services’ personnel and manpower commands, service academies, ROTC commands, and OCS/OTS commands to obtain an understanding of retention, specifically whether the services are retaining the total numbers they needed as well as the number of officers needed in specific ranks and specialties (see table 20). 
We obtained and analyzed data provided by service headquarters on officer continuation rates at critical years in an officer’s service. In our calculation of continuation rates, officers were considered to have continued in a year if they were on the rolls on both the first and last days of the fiscal year. We, in consultation with retention experts from the four services, chose to examine four key points in an officer’s career: years 3, 4, 5, and 10. Years 3, 4, and 5 reflect the minimum active duty service obligation for the major accession programs, that is, the first year an officer could leave the active duty service through resignation. For example, the minimum active duty service obligation is 3 years for OCS graduates and officers who were commissioned by ROTC but did not receive a scholarship. Officers who received an ROTC scholarship have an obligation to serve 4 years, and academy graduates must serve at least 5 years. Additionally, some officers who receive specialized training, such as pilots, may incur an obligation to serve at least 10 years, or 8 years from the completion of pilot training. We also analyzed continuation rates for subgroup differences broken out by occupation, race, ethnicity, and gender. Once we identified particular issues of concern to the service, such as the shortages for mid-level officers in the Army, we explored these issues in further detail. We relied on rates provided by service headquarters because of our previously cited concerns about the Defense Manpower Data Center data. Using the data reliability-assessment procedures discussed for our accessions work, we determined that the data were sufficiently reliable for the purposes of our report. Finally, to assess the steps taken and impediments confronting the services in their attempts to increase foreign language proficiency among junior officers, we reviewed policy materials such as the Quadrennial Defense Review, DOD policies and directives on officer candidate training, curricula for the academies, DOD and service memoranda, reports by GAO and others, and other materials related to language acquisition and maintenance by military personnel and federal employees. We obtained additional perspectives about foreign language issues in meetings with DOD and service officials located in OUSD (P&R), the services’ personnel directorates, service academies, ROTC commands, OCS/OTS commands, and the Defense Language Office. In each instance, we discussed the training programs for officer candidates, the ongoing and proposed steps to increase language proficiency among junior officers, and the challenges these programs face in providing officer candidates with the foreign language training they need to serve as officers. We conducted our review from September 2005 through November 2006 in accordance with generally accepted government auditing standards. In addition to the contact above, Jack E. Edwards, Assistant Director; Kurt A. Burgeson; Laura G. Czohara; Alissa H. Czyz; Barbara A. Gannon; Cynthia L. Grant; Julia C. Matta; Jean L. McSween; Bethann E. Ritter; Angela D. Thomas; and Adam J. Yu made key contributions to this report.
Military Personnel: Reporting Additional Servicemember Demographics Could Enhance Congressional Oversight. GAO-05-952. Washington, D.C.: September 22, 2005.
Military Education: Student and Faculty Perceptions of Student Life at the Military Academies. GAO-03-1001. Washington, D.C.: September 12, 2003.
Military Education: DOD Needs to Enhance Performance Goals and Measures to Improve Oversight of the Military Academies. GAO-03-1000. Washington, D.C.: September 10, 2003.
DOD Service Academies: Problems Limit Feasibility of Graduates Directly Entering the Reserves. GAO/NSIAD-97-89. Washington, D.C.: March 24, 1997.
DOD Service Academies: Comparison of Honor and Conduct Adjudicatory Processes. GAO/NSIAD-95-49. Washington, D.C.: April 25, 1995.
DOD Service Academies: Academic Review Processes. GAO/NSIAD-95-57. Washington, D.C.: April 5, 1995.
DOD Service Academies: Update on Extent of Sexual Harassment. GAO/NSIAD-95-58. Washington, D.C.: March 31, 1995.
Coast Guard: Cost for the Naval Academy Preparatory School and Profile of Minority Enrollment. GAO/RCED-94-131. Washington, D.C.: April 12, 1994.
Military Academy: Gender and Racial Disparities. GAO/NSIAD-94-95. Washington, D.C.: March 17, 1994.
DOD Service Academies: Further Efforts Needed to Eradicate Sexual Harassment. GAO/T-NSIAD-94-111. Washington, D.C.: February 3, 1994.
DOD Service Academies: More Actions Needed to Eliminate Sexual Harassment. GAO/NSIAD-94-6. Washington, D.C.: January 31, 1994.
Academy Preparatory Schools. GAO/NSIAD-94-56R. Washington, D.C.: October 5, 1993.
Air Force Academy: Gender and Racial Disparities. GAO/NSIAD-93-244. Washington, D.C.: September 24, 1993.
Military Education: Information on Service Academies and Schools. GAO/NSIAD-93-264BR. Washington, D.C.: September 22, 1993.
Naval Academy: Gender and Racial Disparities. GAO/NSIAD-93-54. Washington, D.C.: April 30, 1993.
DOD Service Academies: More Changes Needed to Eliminate Hazing. GAO/NSIAD-93-36. Washington, D.C.: November 16, 1992.
DOD Service Academies: Status Report on Reviews of Student Treatment. GAO/T-NSIAD-92-41. Washington, D.C.: June 2, 1992.
Service Academies: Historical Proportion of New Officers During Benchmark Periods. GAO/NSIAD-92-90. Washington, D.C.: March 19, 1992.
DOD Service Academies: Academy Preparatory Schools Need a Clearer Mission and Better Oversight. GAO/NSIAD-92-57. Washington, D.C.: March 13, 1992.
Naval Academy: Low Grades in Electrical Engineering Courses Surface Broader Issues. GAO/NSIAD-91-187. Washington, D.C.: July 22, 1991.
DOD Service Academies: Improved Cost and Performance Monitoring Needed. GAO/NSIAD-91-79. Washington, D.C.: July 16, 1991.
Review of the Cost and Operations of DOD’s Service Academies. GAO/T-NSIAD-90-28. Washington, D.C.: April 4, 1990.
Accessing and retaining high-quality officers in the current environment of increasing deployments and armed conflict may be two of the all volunteer force's greatest challenges. The military services use three programs to access officer candidates: (1) military academies, (2) the Reserve Officers' Training Corps (ROTC), and (3) Officer Candidate Schools (OCS). In addition to accessing new officers, the services must retain enough experienced officers to meet current operational needs and the services' transformation initiatives. GAO was asked to assess the extent to which the services are accessing and retaining the officers required to meet their needs. GAO also identified steps that the Department of Defense (DOD) and the services have taken and the impediments they face in increasing officers' foreign language proficiency. For this report, GAO examined actual accession and retention rates for officers in fiscal years (FYs) 2001, 2003, and 2005 as well as projections for later years. Also, GAO reviewed documents on foreign language training and plans. The services generally met most of their overall accession needs for newly commissioned officers, but the Army faces challenges accessing enough officers to meet its needs. The Marine Corps, Navy, and Air Force met their overall FYs 2001, 2003, and 2005 officer accession needs, but are experiencing challenges accessing specific groups, like flight officers and medical professionals. Moreover, the Army did not meet its needs for officers in FY 2001 and FY 2003 and expects to struggle with future accessions. To meet its officer accession needs, the Army's traditional approach has been to rely first on its ROTC and academy programs and then compensate for shortfalls in these programs by increasing its OCS accessions. Between FYs 2001 and 2005, the Army nearly doubled the number of OCS commissioned officers due to (1) academy and ROTC shortfalls, (2) decreased ROTC scholarships, and (3) a need to expand its officer corps. But OCS is expected to reach its capacity in FY 2007, and resource limitations such as housing and classroom space may prevent further expansion. In addition, the Army's three accession programs are decentralized and do not formally coordinate with one another, making it difficult for the Army, using its traditional approach, to effectively manage risks and allocate resources across programs in an integrated, strategic fashion. Without a strategic, integrated plan for determining overall annual accession goals, managing risks, and allocating resources, the Army's ability to meet its future mission requirements and to transform to more deployable, modular units is uncertain. All of the services except the Army generally met their past overall officer retention needs. The Army, which continues to be heavily involved in combat operations in Iraq and Afghanistan, faces many retention challenges. For example, the Army is experiencing a shortfall of mid-level officers, such as majors, because it commissioned fewer officers 10 years ago due to a post-Cold War force reduction. It projects a shortage of 3,000 or more officers annually through FY 2013. While the Army is implementing and considering initiatives to improve officer retention, the initiatives are not integrated and will not affect officer retention until at least 2009 or are unfunded. As with its accession shortfalls, the Army does not have an integrated strategic plan to address its retention shortfalls. 
While the Army is most challenged in retaining officers, the Marine Corps, Navy, and Air Force generally met their retention needs in FYs 2001, 2003, and 2005; but each experienced challenges in occupational specialties such as medical officers. DOD and the services are taking steps to enhance the foreign language proficiency of junior officers, but many impediments must be overcome to achieve the language objectives that DOD has laid out for junior officers. For example, academy and ROTC officer candidates already have demanding workloads and ROTC does not control curricula at host institutions. The services recognize these impediments and are drafting plans to implement DOD's foreign language objectives.
Gulf Coast oysters are commercially harvested from the waters of the Gulf of Mexico adjacent to Alabama, Florida, Louisiana, Mississippi, and Texas and shipped throughout the United States. Figure 1 shows the Gulf Coast states and the location of the primary oyster harvest areas in the Gulf of Mexico. According to statistics from the Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA), in 2009, the Gulf Coast region produced about 23 million pounds of oysters, approximately 63 percent of the nation’s total domestic production, valued at about $72 million. Figure 2 shows the amount and value of oysters harvested by Gulf Coast states in 2009, the most recent year for which these data are available. Because V. vulnificus is more abundant in oysters harvested during the warmer-weather months (April through November), consumers who eat raw oysters harvested during this period are likely to be exposed to greater amounts of V. vulnificus. Although most healthy people do not become ill from V. vulnificus, people with certain medical conditions—such as chronic liver disease, hemochromatosis, cancer, kidney disease, diabetes, and human immunodeficiency virus/acquired immune deficiency syndrome—are at risk of developing a potentially fatal bloodstream infection known as septicemia, which is characterized by fever and chills, life-threatening low blood pressure, and blistering skin lesions. Figure 3 shows that V. vulnificus consumption-related illnesses peak during April through November and remain quite low from December through March. According to the ISSC’s data, since 2000, 348 V. vulnificus consumption-related illnesses have been reported nationally. As figure 4 shows, the number of V. vulnificus consumption-related illnesses reported nationally from 2000 to 2010 has been relatively consistent annually—with the exception of 2005 and 2010, when Hurricane Katrina and the Deepwater Horizon oil spill, respectively, severely reduced the oyster harvest. Although the number of V. vulnificus consumption-related illnesses is small, the costs of the disease are high because of the high mortality rate—about 50 percent, according to CDC—costing the nation about $124 million annually, according to FDA. However, a senior ISSC official said FDA’s estimate overstates the annual costs related to V. vulnificus consumption-related illnesses because it does not factor in the age and pre-existing health conditions of the victims. As the federal agency responsible for ensuring the safety of shellfish, including oysters, FDA entered into a memorandum of understanding with the ISSC in March 1984 recognizing it as the primary voluntary national organization of state shellfish regulatory officials that provides guidance and counsel on matters related to the sanitary control of shellfish. The ISSC provides a formal structure for state regulatory authorities to establish guidelines, and procedures for applying those guidelines, for the sanitary control of the oyster industry. These guidelines must be reviewed by FDA for consistency with existing laws, regulations, and policies before they can be adopted. In addition to FDA and state regulatory officials, the ISSC also includes members from the shellfish industry and other federal agencies. Postharvest processing, closing oyster harvest areas, and shucking can all be expected to either substantially reduce or essentially eliminate exposure to V. vulnificus bacteria by consumers of raw oysters. 
However, when the 60 percent illness rate reduction goal was not met by the end of 2008, instead of implementing these strategies, FDA and the ISSC relied on estimates generated by FDA’s V. vulnificus risk calculator in adopting time and temperature controls that they considered to be an equivalent strategy. Senior FDA and ISSC officials told us that although time and temperature controls are not equivalent to the other strategies in the guidelines in terms of the total amount of illness reduction each can achieve, they considered the new time and temperature controls to be equivalent in that, according to the risk calculator’s estimations, they would help the states to achieve the approximately 25 percent illness rate reduction needed to meet the 60 percent goal by the end of 2010. Table 1 shows the time and temperature controls implemented in Florida, Louisiana, and Texas on May 1, 2010. Although FDA had concurred with the use of new time and temperature controls earlier in 2009, in October of that year, a senior FDA official stated that the agency would require postharvest processing to reduce V. vulnificus to nondetectable levels. There are currently four methods for processing oysters after they have been harvested to reduce V. vulnificus to nondetectable levels: (1) high-pressure processing, (2) a mild heat treatment known as cool pasteurization, (3) cryogenic quick freezing, and (4) irradiation. Each of these processes—except irradiation—is currently in limited, voluntary commercial use in the Gulf Coast region. The senior FDA official indicated that the postharvest processing requirement would apply to all Gulf Coast oysters harvested during the warmer months of the year beginning with the 2011 harvest season. However, in response to concerns expressed by some members of Congress and the ISSC, among others, FDA suspended its plan to require postharvest processing until a study was done to determine how postharvest processing can be implemented in the fastest, safest, and most economical way. In 2010, FDA contracted with RTI to study the feasibility and economic impacts of requiring postharvest processing of Gulf state (Alabama, Florida, Louisiana, Mississippi, and Texas) oysters harvested from April through October and intended for raw consumption. In October 2009, a senior FDA official announced in a speech before the ISSC that, under FDA’s Hazard Analysis and Critical Control Point rules, beginning in May 2011, FDA intended to require postharvest processing of all Gulf Coast oysters harvested during warmer months, when higher levels of V. vulnificus are more likely to be present, to reduce V. vulnificus to nondetectable levels. According to FDA officials, the agency took this action for two primary reasons. First, consumer education activities and time and temperature controls, which had been in use by Louisiana, Florida, and Texas since 2001, had not achieved the 60 percent goal by the 2008 deadline. Second, validated methods of postharvest processing technology had become available. FDA noted that in California, since the state began requiring postharvest processing of Gulf Coast oysters in 2003, there had been zero consumption-related V. vulnificus illnesses. A senior FDA official said FDA now believes that postharvest processing of oysters is the control measure that best meets the intent of its Hazard Analysis and Critical Control Point seafood safety requirement to prevent, eliminate, or reduce to an acceptable level the occurrence of pathogens such as V. vulnificus. 
In a November 2009 letter to FDA, the ISSC expressed disappointment that FDA had unilaterally decided to announce its intent to change its policy and had not followed the 1984 memorandum of understanding that calls for FDA and the ISSC to exchange information concerning the shellfish safety program and resolve problems of interpretation and policy. According to the letter, the ISSC was concerned that FDA was now proposing to abandon the V. vulnificus risk management plans adopted in 2001 by the ISSC with FDA concurrence. Furthermore, the ISSC, with FDA’s concurrence, had already agreed to implement new time and temperature controls to address V. vulnificus beginning in May 2010. The ISSC letter also stated that if FDA continued its effort without ISSC support, it was likely that many Gulf Coast states would choose not to exercise their enforcement responsibilities under the shellfish safety program with regard to postharvest processing, and instead might implement intrastate programs that could allow consumption of raw oysters produced within their state without the controls necessary to substantially reduce V. vulnificus illnesses. In its April 2010 response to the ISSC, FDA acknowledged the ISSC’s concerns and agreed to work collaboratively with it to identify the steps needed before implementing a postharvest processing requirement for Gulf Coast oysters harvested during the warmer months. Specifically, FDA agreed to fund an independent study, which RTI later conducted, to assess how postharvest processing and equivalent controls could be implemented in the fastest, safest, and most economical way. Nevertheless, FDA and the ISSC have not yet agreed on a new illness reduction goal and the strategies for achieving that goal. As we noted in our October 2005 report on practices that can help agencies enhance and sustain collaboration, agencies need to define and articulate the common outcome they are seeking to achieve that is consistent with their respective agency goals and missions. Also, to achieve the common outcome, collaborating agencies need to establish strategies that work in concert with those of their partners or are joint in nature. Furthermore, trust is a necessary element for a collaborative relationship, and it is critical to involve all key stakeholders in decision-making. In summary, our October 2005 report indicates that absent effective collaboration, it is unlikely that agencies can develop and implement joint agreements. If FDA and the ISSC cannot agree on the V. vulnificus illness reduction goal and strategies to achieve the goal, it is unlikely that the states’ efforts to significantly reduce the number of consumption-related V. vulnificus illnesses will be effective. The approach FDA and the ISSC have been using to measure progress toward their 60 percent illness rate reduction goal established in 2001 has three main limitations that undermine its credibility: the limited number of states used in determining V. vulnificus illness reduction, the overstatement of the effectiveness of the primary V. vulnificus illness reduction strategies, and the failure to control for the effect of such factors as natural and man-made disasters. Limited number of states used in determining V. vulnificus illness rate reduction. First, the approach FDA and the ISSC use for measuring progress toward their illness rate reduction goal is based on the inclusion of V. vulnificus illness data from four states: California, Florida, Texas, and Louisiana. 
V. vulnificus illnesses related to raw oyster consumption occur in other states, but the FDA and ISSC measurement approach does not capture either the scope of such illnesses from oysters harvested from the entire Gulf Coast region or the national scope of V. vulnificus illnesses. According to a senior ISSC official, the ISSC selected the four states because of the quality of their illness reporting systems since each had been consistently reporting V. vulnificus illnesses for the longest time period and because most other states were not reporting V. vulnificus illnesses. Since 2007, about 20 states annually have reported V. vulnificus illnesses to CDC. Senior FDA officials told us they advised the ISSC to begin including more states in the V. vulnificus illness calculation to better reflect the occurrence of V. vulnificus illnesses nationally. According to FDA officials, the ISSC has not responded to their recommendation. A senior ISSC official acknowledged to us that analyzing national data would provide a more representative measure of progress toward the illness rate reduction goal than the current approach. The official told us that the ISSC is meeting in October 2011 to discuss, among other things, developing an alternative approach to measuring progress toward the illness rate reduction goal. Overstatement of the effectiveness of primary V. vulnificus illness rate reduction strategies. In addition to not reflecting the national scope of V. vulnificus illnesses, the FDA and ISSC approach overstates the effectiveness of their primary V. vulnificus illness rate reduction strategies—consumer education and time and temperature controls—by including V. vulnificus illness data from California. Since 2003, California has required postharvest processing of all raw Gulf Coast oysters harvested from April through October and sold in the state and has reported two consumption-related V. vulnificus illnesses since the requirement took effect. A senior ISSC official acknowledged that California’s postharvest processing requirement has reduced the number of V. vulnificus illnesses in that state. This official also acknowledged that including California’s results contributed significantly to achieving the interim 40 percent illness rate reduction goal for 2006. For this reason, both California and FDA officials have requested that the ISSC no longer include California data in its illness rate reduction calculation. According to a senior ISSC official, however, California data should be included because reporting states should not be excluded based on the states’ chosen V. vulnificus illness rate reduction strategies. Lack of control for the effect of such factors as natural and man-made disasters. The FDA and ISSC measurement approach does not control for the effect of such factors as natural and man-made disasters. FDA, ISSC, and state officials we spoke with agree that the level of V. vulnificus illnesses is associated with the level of oyster production and consumption. When oyster production decreases as a result of factors such as natural or man-made disasters like Hurricane Katrina in 2005 and the Deepwater Horizon oil spill in 2010, the level of oyster consumption also decreases and, with it, the rate of V. vulnificus illnesses. Not controlling for the effect of factors external to the V. vulnificus illness rate reduction strategies chosen by FDA and the ISSC gives a misleading indication of the success of those strategies. 
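This distortion can be illustrated with a short numerical sketch. The population, harvest, and illness figures in the Python fragment below are hypothetical and are used only to show how a per-population rate (the measure adopted in 2001) and a per-production rate (the measure the ISSC considered in 2000) behave differently when a disaster cuts the harvest; they are not FDA, ISSC, or CDC data.

# Hypothetical illustration -- not FDA or ISSC data or methodology.
POPULATION = 60_000_000  # assumed combined population of the reporting states

def rate_per_million_people(illnesses, population=POPULATION):
    # The per-population measure adopted in 2001.
    return illnesses / population * 1_000_000

def rate_per_million_pounds(illnesses, pounds_harvested):
    # The per-production measure the ISSC considered in 2000.
    return illnesses / pounds_harvested * 1_000_000

# A baseline year versus a year in which a hurricane halves the harvest and,
# with it, oyster consumption and reported illnesses.
years = {
    "baseline year": {"illnesses": 30, "pounds": 23_000_000},
    "disaster year": {"illnesses": 15, "pounds": 11_500_000},
}

for label, data in years.items():
    print(label,
          round(rate_per_million_people(data["illnesses"]), 2),
          round(rate_per_million_pounds(data["illnesses"], data["pounds"]), 2))

# The per-population rate falls by half in the disaster year, which looks like
# progress, while the per-production rate is unchanged -- the apparent
# improvement reflects the smaller harvest, not the illness reduction strategies.

In this sense, the production-based proposal described next is one way of controlling for harvest-driven swings in consumption.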
In 2000, the ISSC considered a proposal to calculate illness rate as the number of illnesses divided by oyster production. According to a senior ISSC official, the ISSC did not approve the proposal because oyster production data were not readily available, which is no longer the case. After rejecting the 2000 proposal to account for production, in 2001 the ISSC adopted a proposal to calculate the illness rate as the number of illnesses per unit of population. A senior FDA official told us that FDA initially agreed with this proposal because illnesses per unit of population is a standard measure used by CDC for tracking the prevalence of many illnesses. In retrospect, however, FDA and ISSC officials told us that population should not be part of the calculation. A senior FDA official explained that tracking illnesses per unit of population is meaningful for certain types of illnesses but is not meaningful for others. For example, he told us that tracking illnesses per unit of population makes sense for illnesses that are passed from person to person or for food-borne illnesses associated with foods that are widely consumed, but that it does not make sense for illnesses associated with foods like oysters, which are a specialty food and not widely consumed throughout the population. In 2009, the ISSC adopted a proposal to change its measure of effectiveness from illness rate reduction to risk reduction, which would be based on the risk per serving of raw or undercooked oysters. Under the proposal, the revised goal would be to reduce the risk per serving to a level equivalent to the current 60 percent illness rate reduction goal. FDA initially opposed the proposal but later concurred, stating that the change would eliminate the problems associated with the current approach for measuring V. vulnificus illness rate reduction. In March 2010, the ISSC appointed a work group to explore implementation of the proposal. According to a senior ISSC official, as of March 2011, the work group had held one conference call but had not yet determined how the concept of risk per serving would be applied and measured in the V. vulnificus illness context. According to the ISSC official, the proposal is scheduled to be implemented in January 2012. FDA and the ISSC have performed either very limited or no evaluations of the effectiveness of their key V. vulnificus illness reduction strategies. Specifically, the ISSC has not evaluated the effectiveness of consumer education efforts in reducing V. vulnificus illnesses since 2004, and FDA has not conducted any evaluations of its own. In addition, although the V. vulnificus risk calculator developed by FDA estimates that time and temperature controls can reduce V. vulnificus illnesses, FDA and the ISSC have not directly evaluated the effectiveness of the May 2010 time and temperature controls that the ISSC approved, with FDA concurrence, for the states to use in reducing consumption-related V. vulnificus illnesses. The ISSC conducted consumer surveys in 2002 and 2004 that were intended to measure the extent to which (1) V. vulnificus education programs increased consumer awareness of the risks of eating raw oysters and (2) high-risk consumers refrained from eating raw oysters for health reasons. The 2002 survey of raw oyster consumers established baseline information on consumers’ beliefs about raw oysters, consumption patterns, and knowledge of risks associated with eating Gulf Coast raw oysters. 
The 2004 follow-up survey measured whether raw oyster consumers changed their raw oyster consumption patterns during the previous 2 years as a result of the ISSC’s and states’ (Florida, Louisiana, and Texas) V. vulnificus consumer education efforts. The 2004 survey found no significant increase in overall consumer knowledge about the risk of eating raw oysters or the proportion of high-risk consumers who stopped eating them. A senior FDA official said that the agency has not conducted its own evaluation of the effectiveness of V. vulnificus consumer education efforts; instead it relied on the ISSC’s surveys to determine the impacts of consumer education efforts. FDA officials told us that their review of consumer education efforts is limited to checking the V. vulnificus risk management plans implemented by Florida, Louisiana, and Texas to ensure the plans include a consumer education component. According to FDA and state officials, the states’ V. vulnificus education efforts have included a variety of activities such as online V. vulnificus education courses for physicians, nurses, and dieticians; public service announcements for broadcast on television and radio; advisories included with the drug prescriptions of high-risk consumers; and brochures targeting high-risk consumers that contained information about the risk of eating raw oysters. FDA and ISSC officials stated that although they have not directly evaluated the states’ education efforts since 2004, their indirect measure of the effectiveness of consumer education was whether they achieved their 2008 60 percent illness rate reduction goal. They acknowledged, however, that the goal was not achieved, and, therefore, presumably consumer education alone would not achieve the goal. Some state officials told us that it is very difficult to measure and evaluate the direct impact that consumer education has on a relatively rare event, such as V. vulnificus illness. One state official said that his state did not have the expertise and financial resources to conduct an evaluation of the effectiveness of its consumer education programs. The same official added that it would be difficult to prove that a specific case of V. vulnificus was prevented because of consumer education efforts. An ISSC official said that some members of the ISSC have concluded that consumer education is not going to result in a significant reduction in V. vulnificus illnesses. For example, one state official said that the effectiveness of education is hampered by the fact that some of those who are most vulnerable to V. vulnificus illness, such as alcoholics with liver disease, are risk takers who refuse to change their raw oyster consumption habits. In our September 2005 report on managing for results, we noted that federal agencies should regularly measure the effectiveness of their programs to determine whether progress is being made toward performance goals. Specifically, agencies should compare their programs’ results against their goals and determine where to target program resources to improve performance. We recognize that it is difficult to assess the effectiveness of consumer education programs. Nonetheless, the absence of information on the effectiveness of V. vulnificus consumer education programs limits the ability of the ISSC and the states to identify and increase the use of consumer education approaches that are working well and discontinue those that have not been effective. 
Furthermore, without regular evaluations of the effectiveness of consumer education, ISSC and state officials cannot ensure that their resources are targeted strategically and are not wasted on efforts that are ineffective. Neither FDA nor the ISSC has directly evaluated the effectiveness of the new time and temperature controls in reducing V. vulnificus illnesses since they were implemented in May 2010. Instead, FDA and the ISSC have relied on illness rate reduction as the overall measure of effectiveness of all V. vulnificus illness reduction strategies combined. Both FDA and ISSC officials acknowledge, however, that doing so does not distinguish the effect of time and temperature controls from that of other factors. Consequently, illness rate reduction does not provide a direct indication of the effectiveness of the time and temperature controls, implemented and enforced by the states, in contributing to V. vulnificus illness reduction. Senior FDA and ISSC officials told us that one way to more directly evaluate the effectiveness of time and temperature controls is to conduct studies to determine the level of V. vulnificus bacteria in oysters prior to and following implementation of the controls. FDA officials told us that such studies were conducted in 1998-1999 and 2007, prior to the implementation of the new time and temperature controls. Those studies surveyed the level of V. vulnificus bacteria and other pathogens in oysters collected from both retail and wholesale establishments. The level of V. vulnificus bacteria found in 2007 was similar to that found in 1998-1999. According to the 2007 study, the similarity was not surprising given that time and temperature controls had not changed since the 1998-1999 study and that the ISSC’s efforts to reduce V. vulnificus illnesses had focused on educating high-risk consumers. FDA officials told us that data from those studies could be compared against future study data to measure the effectiveness of new controls, including time and temperature controls, aimed at reducing exposure to V. vulnificus bacteria by consumers of raw oysters. A senior ISSC official told us that he intends to promote the use of such studies to evaluate time and temperature control effectiveness. FDA officials told us that, although they would like to repeat the 1998-1999 and 2007 studies, FDA has no plans to do so given the expense of the studies, competing priorities, and resource constraints. To estimate the level of V. vulnificus illness rate reduction states might expect to achieve from time and temperature controls, FDA and the ISSC have relied on FDA’s V. vulnificus risk calculator. Estimates generated by the risk calculator indicated that the new time and temperature controls implemented in May 2010 would help the states to achieve the 60 percent illness rate reduction goal by the end of 2010. To achieve the calculator’s estimated illness rate reduction, oyster industry members would have to fully comply with the time and temperature controls. Our discussions with FDA, state officials, and oyster industry representatives, however, suggest that while data regarding compliance levels are unavailable, full compliance is highly unlikely. In January 2011 FDA and the ISSC determined the goal still had not been met. 
To assess the precision of the risk calculator’s estimates, we replicated and modified a risk simulation model—developed by the World Health Organization and the Food and Agriculture Organization (WHO/FAO) of the United Nations in partnership with FDA—that FDA used as a basis for developing the risk calculator. Our analysis indicates that even with 100 percent compliance, the risk of V. vulnificus illness under time and temperature controls may differ from the number estimated by the risk calculator. For example, in Texas in the month of August, FDA’s risk calculator estimates that time and temperature controls will lead to 2.84 illnesses per 100,000 raw oyster servings. While this is accurate on average, the number of illnesses per 100,000 servings could be as low as 2.44 or as high as 3.63 (for a 90 percent uncertainty interval), according to our analysis. We find a similar range of uncertainty in the estimated number of V. vulnificus illnesses for Florida and Louisiana. See appendix I for more details about our analysis. While uncertainty is an inherent part of estimates produced by all quantitative models, the risk calculator does not report the amount of uncertainty associated with its estimates. Although, under the shellfish safety guidelines, states are responsible for enforcing oyster industry compliance with time and temperature controls, senior officials in Florida, Louisiana, and Texas told us they do not track compliance rates. A senior ISSC official confirmed that these states do not systematically collect, analyze, and report compliance information. Enforcement consists largely of periodic state inspections of oyster-processing plants and on-the-water harvester activities. The latter includes checking log sheets on which harvesters record whether they are harvesting oysters for raw consumption and, if so, whether they are complying with various elements of the time and temperature controls. Enforcement personnel in Louisiana, Florida, and Texas told us they do not inspect all harvesting vessels and do not verify the accuracy of all of the information recorded by the harvesters whose vessels they do inspect. For example, a Louisiana official told us that Louisiana enforcement personnel are to check the log sheet to ensure the harvester has recorded the time harvesting began but have no way of verifying whether the information is accurate. Figure 5 shows a sample log sheet used in Louisiana. [Figure 5: Sample Louisiana harvester log sheet. The form collects harvester information (boat name/number, harvester name and license number, date, and signature) and, separately for oysters harvested for other-than-raw and for raw (half shell) consumption, the harvesting area or lease number, the times harvesting began and ended, the number of sacks harvested, the time the last oysters from the boat were placed in the cooler, the cooler temperature at that time, and the signature and date of the original certified dealer or authorized representative.] FDA is responsible for evaluating states’ enforcement of time and temperature controls.
However, FDA officials told us that FDA’s evaluations do not include assessments of the degree to which states are ensuring industry compliance. Instead, FDA officials told us their evaluations consist of checking states’ V. vulnificus risk management plans to ensure the plans include the time and temperature controls outlined in the shellfish safety guidelines, accompanying state officials on selected oyster-processing-plant inspections and on-the-water patrols, and reviewing selected shellfish safety plans and records. A senior ISSC official told us the ISSC planned to evaluate the effectiveness of the new time and temperature controls, in part, based on the rate of oyster industry compliance and the level of states’ enforcement. However, because FDA, the ISSC, and states did not collect any industry compliance or state enforcement data, when it came time to conduct the evaluation in January 2011, the ISSC had to rely on testimonial evidence from state officials regarding the extent of industry compliance and state enforcement. Although data are unavailable regarding oyster industry compliance with time and temperature controls, our discussions with state officials and oyster industry members suggest full compliance is highly unlikely. During several discussions with state officials and oyster industry members, we were told of instances of intentional mislabeling, a form of seafood fraud. For example, harvesters initially labeled oysters harvested without meeting the new time and temperature controls for shucking or postharvest processing only but later mislabeled them for raw consumption. Figure 6 shows sample labels for oysters to be consumed raw and for oysters to be shucked or postharvest processed. According to two large oyster processors we spoke with operating in both Louisiana and Texas, mislabeling is widespread and is driven by a considerable financial incentive to avoid the costs of complying with the time and temperature controls and obtain the higher price accorded raw oysters. A senior Florida regulatory official told us that mislabeling was identified during a recent routine inspection of a local oyster-processing plant and that he was aware of several occasions where oysters were served raw that should have been shucked or postharvest processed because they had not been harvested in compliance with the time and temperature controls. In July 2010, the ISSC sent a letter to member states informing them of deaths traced to raw consumption of oysters that should have been shucked or postharvest processed and requesting immediate action to ensure accurate labeling. According to a senior Louisiana law enforcement official, however, mislabeling is an easy practice to engage in and is very difficult for regulatory and law enforcement personnel to detect. During a January 2011 ISSC meeting, ISSC members acknowledged that compliance with the time and temperature controls was not as good as it should be. According to the meeting minutes, there have been numerous complaints from oyster processors regarding instances of noncompliance in Florida. At the January 2011 meeting, the ISSC passed a motion encouraging increased enforcement of the time and temperature controls by the Gulf Coast states. As of March 2011, however, the ISSC was unable to tell us what specifically they meant by increased enforcement or how the states planned to implement the motion. 
A senior FDA official told us that this motion is unlikely to be implemented in any meaningful way given limited state enforcement capacity. Given that compliance data are unavailable and that compliance rates are likely to be less than 100 percent, according to FDA, state officials, and oyster industry representatives, we used our modification of the WHO/FAO risk simulation model to estimate the effect of the compliance rate on the effectiveness of time and temperature controls in reducing V. vulnificus illness. Specifically, we estimated the number of illnesses during the summer months under the baseline scenario—in which the new more stringent 2010 time and temperature controls were not in effect—and under scenarios that assumed various levels of compliance with the new time and temperature controls. Our estimates show that the extent to which the new time and temperature controls would reduce V. vulnificus illnesses varies considerably with the level of compliance. For example, during a typical August month in Louisiana, assuming that 100 percent of oysters are harvested in compliance with time and temperature controls, the risk calculator estimates the controls will reduce illnesses by 41 percent on average, and our analysis estimates that illness reduction could range from 30 percent to 47 percent. As shown in figure 7, at lower levels of compliance, the illness reduction would be considerably smaller. If 80 percent of the oysters are harvested in compliance with these controls— meaning that 20 percent would be harvested out of compliance—we estimate that time and temperature controls would reduce illnesses by 15 percent to 23 percent. As a result, even assuming 80 percent compliance in the summer months, it is unlikely that these controls will lead to the level of illness reduction estimated by the risk calculator. We found that noncompliance would have a similar effect in the other summer months and in the other states. See appendix I for details. According to a March 2011 FDA-commissioned report by RTI, the Gulf Coast oyster industry does not currently have adequate capacity to use postharvest processing on all Gulf Coast oysters intended for raw consumption that are harvested during warmer months. The report found that two key issues need to be addressed to develop adequate capacity, including the construction of several central postharvest processing facilities. The report concluded that it would take at a minimum 2 to 3 years to develop the necessary capacity. However, we identified six issues of concern regarding the RTI report’s economic analysis that call into question the completeness of its cost and timeline estimates. In October 2009, FDA announced its intent to begin requiring postharvest processing, in part, because it believed that adequate capacity existed. RTI’s March 2011 FDA-commissioned report, however, found that adequate capacity does not exist and identified two key issues that must be addressed to ensure such capacity. First, about five or six central postharvest processing facilities would be needed to accommodate smaller Gulf Coast oyster processors that may be unable to conduct postharvest processing at their current facilities due to various limitations. For example, these smaller facilities generally lack sufficient floor space for installing postharvest processing equipment without undergoing costly plant expansion, and their owners may lack the financial resources to expand their plants and purchase postharvest processing equipment. 
In addition, the report described several necessary steps in developing the central facilities, including: (1) determining the legal and operating structure of the facilities, (2) identifying the property where the newly constructed facilities are to be located or existing buildings are to be modified, and (3) securing the financing for developing the facilities. While central facilities may allow some smaller oyster processors access to postharvest processing facilities during the warmer months, other challenges remain, such as the additional costs to transport oysters— refrigerated—to and from the central facilities. Second, technical and financial assistance to several processing facilities would be needed to expand or alter their existing facilities, and purchase and install additional postharvest processing equipment. Again, the report describes several steps that must occur before initiating the expansion of existing facilities, such as developing plans for expanding the plant or altering the plant layout, and securing financing for purchasing additional equipment and constructing the expanded facility. Overall, the RTI report concluded that it will take a minimum of 2 to 3 years and, depending on the postharvest processing method used, about $6 million to $32 million in initial investment costs (excluding land purchase and construction costs for new centralized facilities) to develop the infrastructure required to ensure the Gulf Coast oyster industry has adequate capacity to use postharvest processing on all Gulf Coast oysters intended for raw consumption that are harvested during warmer months. In our July 2001 report on shellfish safety, we raised the concern that if the 60 percent V. vulnificus illness rate reduction goal was not achieved by 2008, postharvest processing capacity may not be available because the ISSC did not have a detailed plan for ensuring such capacity. Consequently, we recommended that FDA work with the ISSC to prepare and implement a detailed plan for developing adequate postharvest processing capacity to help achieve the ISSC’s V. vulnificus illness rate reduction goals. In its response, FDA agreed with our recommendation, and the ISSC agreed that it did not have a detailed plan to ensure postharvest processing capacity. At that time, ISSC officials said that the matter was a high priority and would be addressed at its upcoming July 2001 meeting. At the July 2001 meeting, the ISSC proposed that the V. vulnificus risk management plans include a process for implementing a required postharvest treatment capacity for 50 percent of all oysters intended for the raw consumption market—during the months of May through September—should the 40 percent illness reduction goal not be achieved by December 31, 2006. In 2003, the ISSC surveyed oyster dealers with postharvest processing capabilities in Florida, Louisiana, and Texas and found that there was sufficient capacity to use postharvest processing on 100 percent of the oysters harvested from May through September that were intended for raw consumption. According to FDA officials, until January 2011, when RTI presented its preliminary results, they believed there was sufficient capacity to use postharvest processing on all Gulf Coast oysters harvested from May through September. 
However, according to an ISSC official, the 2003 survey had major limitations, such as including quick freezing as a postharvest processing option, not considering the location of existing postharvest processing facilities, and not addressing whether existing facilities would treat their competitors’ oysters. The RTI report indicated that quick freezing is not appropriate for oysters harvested in warmer months because this option substantially reduces their quality. The ISSC official said that in hindsight, FDA and the ISSC did not adequately define capacity in 2001 when they began to discuss postharvest processing capacity goals. FDA stated in October 2009 that postharvest processing should be required beginning in May 2011, in part, because it believed that adequate capacity existed. When the ISSC raised concerns, FDA tasked RTI with analyzing the feasibility and economic impacts of such a requirement. Although we believe that the overall method RTI used for its analysis is credible, its conclusion—that postharvest processing capacity to treat all Gulf Coast oysters intended for raw consumption that are harvested from April through October can be developed in a minimum of 2 to 3 years—is questionable due to six issues of concern we identified in RTI’s economic analysis. We recognize that some of the issues we identified are the result of constraints faced by RTI, such as certain analyses not being within the scope of the FDA-approved RTI report work plan, a lack of data, and the associated contractual report due dates (i.e., FDA needed the report completed before the 2011 summer oyster harvest season to help inform policy decisions). The six issues of concern are as follows.
• Baseline data may not be representative of the industry. The RTI report relied on 2008 data—such as oyster harvest volumes, oyster prices, and the number of Gulf Coast oyster processors—as a representative baseline to estimate economic impacts of a postharvest processing requirement. We believe the 2008 data are not necessarily representative of the current state of the Gulf Coast oyster industry due to the events that occurred in 2010—the Deepwater Horizon oil spill and the implementation, on May 1, 2010, of the new, more stringent time and temperature controls. The lead author of the RTI report explained that using the 2008 data as a baseline was appropriate because 2008 was the most recent and complete year of data. Furthermore, the lead author said that it could take several years for the oyster industry to adjust to the 2010 events and that waiting for this adjustment to occur would not necessarily change the overall report’s conclusions regarding the economic impacts of postharvest processing. However, the lead author acknowledged that using 2008 data was a limitation of the study and that using a baseline after the 2010 events would allow for more refined estimates. We believe the estimates in the report may be of limited use for determining how the market would respond to a postharvest processing requirement because the estimates are premised on the oyster industry’s structure prior to the 2010 oil spill and implementation of the new time and temperature controls, which may not reflect the Gulf Coast oyster industry of the future. For example, oyster production was severely curtailed in 2010 compared with the baseline production in 2008. According to the Louisiana Department of Wildlife and Fisheries, the Louisiana oyster harvest was down by 50 percent in 2010.
Given the baseline used, the results of the economic impact analysis may not provide a valid basis for the oyster-processing industry to make investment decisions if a postharvest processing requirement is implemented.
• Key costs are excluded. Certain key costs are excluded from the report’s economic analysis. For example, the report does not include information on costs associated with purchasing land needed to expand existing postharvest processing facilities or construct new centralized facilities. The lead author of the RTI report said that land costs vary significantly by location. According to the lead author, a detailed analysis of such costs was not within the scope of the FDA-approved RTI report work plan. Although we agree that such costs are highly variable across regions, we believe that including a mean or median land cost would be better than omitting land costs altogether, as such costs may account for a large portion of the total costs to expand existing or construct new facilities. In addition, other significant costs are excluded from the economic analysis, such as construction costs for the new centralized facilities, insurance coverage for additional processing plant space and postharvest processing equipment, and costs for transporting oysters to and from the central postharvest processing facilities. The report acknowledges that insurance coverage may be a significant expense, especially in areas prone to severe weather and flooding. Furthermore, according to the report, processors will incur transportation costs if they are unable to install processing equipment at their facilities and instead have to rely on centralized facilities. Transportation costs would include either paying for trucking services or purchasing and operating a refrigerated truck. According to the lead author, these costs were not included because a detailed analysis of such costs was not within the scope of the FDA-approved report work plan. If key costs are not analyzed and included in the cost estimates, the full scope of the financial resources needed to ensure the Gulf Coast oyster industry has sufficient capacity to use postharvest processing on oysters harvested during the warmer months will not be known.
• Who would pay to expand processing capacity is not clear. The RTI report does not clearly address who would pay for postharvest processing, which includes purchasing and installing the equipment, as well as transporting harvested oysters to and from postharvest processing facilities. The lead author of the RTI report said inquiring about possible financial sources available for subsidizing the expansion of postharvest processing capacity was beyond the scope of the report. However, she suggested that expansion could be subsidized by an entity within state government or by an oyster industry cooperative that was established to develop a financing mechanism. In addition, she said the ISSC could take the lead in coordinating the development of the financing mechanisms needed to expand postharvest processing facilities. We believe that identifying financial support is a major issue in assessing the feasibility of requiring postharvest processing, particularly considering state government budget constraints and the financial losses the oyster industry incurred as a result of the 2010 Deepwater Horizon oil spill and the ongoing effects of the recent economic recession.
Difficulties in obtaining financing could impact the time frame for postharvest processing to become operational, and therefore the minimum 2- to 3-year estimate for increasing capacity might not be reliable.
• Limited support exists for estimated time frame for increasing postharvest processing capacity. According to the RTI report, existing processors would need a minimum of 2 years to increase their postharvest processing capacity; however, the report does not describe in detail the basis for the 2-year estimate. In addition, according to the report, it will take at least 3 years to develop the centralized postharvest processing facilities. The lead author of the report said that the estimates were based, in part, on information obtained from surveying Gulf Coast processors. However, the lead author acknowledged that few processors contributed cost information associated with purchasing, installing, and operating postharvest processing equipment because this type of information is proprietary. We recognize the proprietary nature of the cost data, but we believe the basis for RTI’s time frame estimates could be more transparent. For example, the report could provide specific time frames associated with the steps the report says are required to increase postharvest processing capacity. Absent such transparency, it is difficult to know whether the estimate is well supported and likely to be accurate.
• Assumptions about postharvest processing for oysters shipped within state borders are likely inaccurate. The RTI report’s economic impact analysis assumes that three Gulf Coast states—Florida, Texas, and Louisiana—would require postharvest processing for oysters harvested in the warmer months that are intended for raw consumption and sold within the state’s borders. However, statutes passed in 2011 in both Louisiana and Texas state that federal regulations that prohibit the interstate sale of oysters without postharvest processing do not apply to oysters harvested and sold within the state. By not incurring the added cost of postharvest processing, these oysters would affect overall oyster prices. The lead author of the RTI report agreed that the availability of cheaper nonpostharvest processed raw oysters might significantly constrain the ability of retailers and restaurateurs, for example, to sell the higher-priced postharvest processed oysters. Although the RTI report’s analysis includes a range of assumptions on the likely proportion of oysters sold within or outside of a state’s borders, these assumptions are not incorporated in the economic impact model. Incorporating them is important because they provide oyster processors with important information on whether they should make investments in postharvest processing equipment. For instance, if some state regulations allow the sale, within state borders, of oysters intended for raw consumption without postharvest processing, certain processors may decide not to sell oysters outside their state to avoid the cost of postharvest processing equipment, which would place competitive pressure on all oyster prices. We believe that without including a range of assumptions about the proportion of oysters likely to be sold both within and outside of a state’s borders, the overall economic impacts, including the likelihood of oyster processors investing in postharvest processing capacity, will not be fully known.
• Postharvest processing costs may not be able to be passed on to consumers.
The RTI report also assumes that oyster processors can pass on some of their postharvest processing costs to consumers. The studies cited in the report indicate there is no clear consensus on whether any of the postharvest processing costs could be passed on to consumers or, if the costs could be passed on, what the amount would be. These studies generally found that consumers preferred raw unprocessed oysters to postharvest processed oysters, and although some were willing to accept postharvest processed oysters, they were not necessarily willing to pay a higher price for them. The lead author of the RTI report agreed that the report’s assumption that oyster processors can pass on some of their postharvest processing costs to consumers is uncertain. Without the ability to pass on their higher costs to consumers, many of the current oyster-processing establishments could face closure because, with the addition of postharvest processing costs, their total costs may exceed their returns. Also, oyster harvesters who depend on these processors may have to stop harvesting during the warmer months or quit harvesting altogether. Without an analysis that provides a range of estimates for the price increase that could be passed on to consumers, the Gulf Coast oyster industry will not have sufficient information to help determine whether postharvest processing is economically feasible. It has been nearly 2 years since FDA informed the ISSC that the current V. vulnificus illness rate reduction goal does not sufficiently protect public health. However, since then, FDA and the ISSC have not come to agreement on what an appropriate V. vulnificus illness reduction goal should be or on the best strategy to achieve such a goal. In the absence of such agreement, it will be very difficult for FDA and the ISSC to make progress in reducing the number of V. vulnificus illnesses. In addition, the approach FDA and the ISSC use to measure progress in reducing V. vulnificus illnesses has three main limitations that undermine its credibility. For example, the approach is based on data from only four states, including California, which has had nearly zero consumption- related V. vulnificus illnesses since it began requiring postharvest processing of Gulf Coast oysters in 2003. Consequently, the FDA and ISSC measurement approach does not provide a credible representation of the Gulf Coast or national impact of V. vulnificus illnesses or the real status of their efforts to reduce them. Since 2001, FDA, the ISSC, and the Gulf Coast states have relied on consumer education and time and temperature controls to reduce V. vulnificus illnesses, but neither FDA nor the ISSC has routinely evaluated whether these strategies have been effective in reducing V. vulnificus illness. Our analysis shows that the extent to which the new time and temperature controls will reduce V. vulnificus illnesses varies considerably with the level of compliance. Without regular evaluations of these illness reduction strategies, FDA, the ISSC, and state officials and policymakers have no way of knowing whether either strategy has been successful and should be continued or is ineffective and should be stopped, which can result in wasted resources and a failure to reach policy goals. 
Finally, FDA has concluded that because consumer education and time and temperature controls have not resulted in achievement of the 60 percent illness rate reduction goal, Gulf Coast oysters harvested during the warmer months and intended for raw consumption should be postharvest processed to reduce V. vulnificus to nondetectable levels. However, the 2011 FDA-commissioned RTI report found that adequate capacity to use postharvest processing on all Gulf Coast oysters harvested from April through October that are intended for raw consumption does not currently exist and is at best 2 to 3 years away. Furthermore, our review of the report’s economic analysis found several issues that the report did not thoroughly address, which could significantly impact the feasibility of developing the adequate postharvest processing capacity specified in the FDA-commissioned report. To better ensure the safety of oysters from the Gulf of Mexico that are sold for raw consumption, we recommend that the Secretary of Health and Human Services (HHS) direct the Commissioner of FDA to work with the ISSC to take the following four actions:
• Agree on a nationwide goal for reducing the number of V. vulnificus illnesses caused by the consumption of Gulf Coast raw oysters and develop strategies to achieve that goal, recognizing that consumer education and time and temperature controls have not resulted in achievement of the 60 percent V. vulnificus illness rate reduction goal and that the capacity to use postharvest processing on Gulf Coast oysters harvested from April through October that are intended for raw consumption does not currently exist.
• Correct the limitations in the current approach to measuring progress toward the 60 percent V. vulnificus illness rate reduction goal or design and implement a new approach without these limitations.
• Regularly evaluate the effectiveness of V. vulnificus illness reduction strategies, such as consumer education and time and temperature controls, to determine whether they are successful and should be continued or are ineffective and should be stopped.
• Conduct further study of the six issues of concern we identified regarding the RTI report’s economic analysis to ensure a more accurate assessment of the feasibility of developing adequate capacity, before FDA and the ISSC move forward with revising the National Shellfish Sanitation Program’s shellfish safety guidelines to provide postharvest processing for oysters harvested from Gulf Coast waters during warmer months and intended for raw consumption.
We provided a draft of this report to HHS and the ISSC for review and comment. In written comments, which are included in appendix II, HHS provided FDA responses, which generally agreed with the report’s four recommendations. Specifically, FDA agreed with our first and second recommendations. Regarding our third recommendation, FDA agreed that the approach used to evaluate the effectiveness of illness reduction strategies has limitations that undermine its credibility. FDA also said that assessing the effectiveness of existing controls on illness reduction is extremely difficult. In an effort to better monitor compliance with time and temperature controls, FDA intends to take a number of steps, including conducting annual on-site checks at oyster landing sites and processing plants to examine compliance with V. vulnificus Hazard Analysis and Critical Control Point controls, harvester records, time and temperature logs, and actual product temperature.
We recognize that assessing the effectiveness of V. vulnificus illness reduction strategies is difficult, but continue to believe it would be useful for FDA and the ISSC to attempt to do so, because without such evaluations it is difficult to determine whether the strategies are successful and should be continued or are ineffective and should be stopped. Concerning our fourth recommendation, which identified six issues of concern in the FDA-commissioned report on postharvest processing capacity, FDA agreed to conduct further study or take other actions to address our concerns on four issues—key costs are excluded; who would pay to expand processing capacity is unclear; support for estimated time frame for increasing postharvest processing capacity is limited; and assumptions about postharvest processing for oysters shipped within state borders are likely inaccurate—but disagreed with one issue and neither agreed nor disagreed with the other issue. FDA disagreed with our assessment that the 2008 baseline data used in the study may not be representative of the Gulf Coast oyster industry. Furthermore, FDA said that use of 2010 data would not have represented a typical harvest year because the Deepwater Horizon oil spill resulted in closures of many Gulf Coast oyster harvest areas, thereby reducing oyster harvest levels. However, we did not suggest that 2010 data be used for the baseline; instead we believe it is preferable to use a baseline from either an average of several years or a sensitivity analysis of alternative baselines, including one that incorporates data for 2010, the year of the Deepwater Horizon oil spill and the implementation of the new, more stringent time and temperature controls. FDA did not agree or disagree with our assessment that postharvest processing costs may not be able to be passed on to consumers. Instead, FDA stated that there are many uncertainties regarding whether the cost of postharvest processed oysters can be passed on to consumers. We believe the FDA-commissioned report’s analysis could be improved by providing a range of postharvest processing cost estimates that could be passed on to consumers, which would help the oyster industry determine the extent to which postharvest processing is economically feasible. FDA also provided technical comments, which we incorporated as appropriate. The ISSC stated in its written comments—which are included in appendix III—that it generally agreed with the recommendations in the report. The ISSC also provided additional information on the FDA and ISSC efforts to address V. vulnificus illnesses and the circumstances that led to the implementation of the current V. vulnificus illness reduction strategies. Also, the ISSC commented that the goal of the V. vulnificus risk management plans was to reduce V. vulnificus illnesses nationally and that four states—California, Florida, Louisiana, and Texas—were used to measure effectiveness. However, even though the ISSC states that the 60 percent illness rate reduction goal is a national goal, it determined achievement toward a national goal by calculating the rate of illness for those four states. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Health and Human Services, the Executive Director of the Interstate Shellfish Sanitation Conference, and other interested parties.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. To estimate the impact of time and temperature controls on the number of illnesses from Vibrio vulnificus (V. vulnificus), we took several steps. We replicated a model developed by the World Health Organization and the Food and Agriculture Organization (WHO/FAO) that simulates the risk of illness from V. vulnificus based on several factors, such as water temperature and the number of hours that harvested oysters are left unrefrigerated. We then modified the model to simulate the impact of the time and temperature controls implemented in Florida, Louisiana, and Texas in May 2010. Specifically, we examined the amount of uncertainty in the model’s estimates of the risk of illness and the impacts of various levels of compliance with time and temperature controls on the estimated number of V. vulnificus illnesses. The WHO/FAO, in partnership with the Food and Drug Administration (FDA), developed a risk simulation model that estimates levels of V. vulnificus in raw oysters and the subsequent impact of these levels on the risk of illness. The risk simulation model was presented in a WHO/FAO report on the assessment of risk of V. vulnificus in raw oysters. To estimate the impact of time and temperature controls on the risk of V. vulnificus illnesses, we replicated this model and modified it to account for potential changes to harvesting and storage practices in response to the imposition of time and temperature controls and to analyze various rates of compliance with these controls. The WHO/FAO risk simulation model is a Monte Carlo simulation, a type of numerical analysis that produces a range of estimates to account for the natural variability in the model’s data inputs and the statistical uncertainty in the parameters of the model’s equations. Data inputs, such as water temperature, vary naturally from day to day within a month and from year to year for a given month. Similarly, the parameters in the model’s equations that estimate V. vulnificus levels based on data inputs and that predict the risk of illness based on estimated V. vulnificus levels, while based on scientific studies, are subject to statistical uncertainty. To account for the variability in data inputs and the uncertainty in the parameters of the model’s equations, the WHO/FAO risk simulation model calculates a range in possible estimates of risk, each using slightly different values of the data inputs and slightly different values of the parameters. To produce a single estimate of the risk of illness, the model first estimates levels of V. vulnificus at each of four stages in the production process—from harvest to first refrigeration to cooldown to consumption. The model then estimates the overall risk of illness based on the estimated levels of V. vulnificus at the four stages. Figure 8 illustrates each of the stages of the model and the factors that influence them. In this figure, the light gray boxes represent the input factors, the black boxes represent calculations based on those factors, and the arrows indicate which factor influences which calculation. 
For example, water temperature, air temperature and the number of hours that oysters are left unrefrigerated are input factors that influence the level of V. vulnificus in oysters at the time they are first refrigerated. The dark gray box and the dotted arrows represent our modification of the WHO/FAO risk simulation model. Specifically, we modified the way in which the model determines the number of hours that oysters are left unrefrigerated, which will then impact the level of V. vulnificus in oysters at the time of cooldown. Finally, because V. vulnificus has been found to stop growing and to begin dying off when refrigerated at 55 degrees Fahrenheit or below, the number of days oysters are refrigerated affects the V. vulnificus level at the time oysters are consumed, which affects the number of V. vulnificus illnesses that are likely to occur. To replicate the WHO/FAO risk simulation model, we took several steps. We reviewed the WHO/FAO risk assessment and documented the model’s key data inputs, assumptions, and equations. We asked FDA modelers, who led the development of the WHO/FAO risk simulation model, to review our documentation, and we revised our version of the model based on their comments. We programmed the model in Statistical Analysis Software, generated preliminary estimates, and asked FDA modelers to review these estimates, and we compared these estimates to identify remaining differences between our version of the model and the version used in the WHO/FAO risk assessment. We used the same data inputs as reported in the WHO/FAO risk assessment, including the statistical distributions of water temperature, the difference between water temperature and air temperature, the number of hours that oysters are unrefrigerated, the number of hours until oysters cool down to 55 degrees Fahrenheit, and the number of days that oysters remain in refrigeration. We also used the same values as the WHO/FAO risk assessment for parameters that convert these data inputs into the model’s estimates, such as the parameters that define the relationship between water temperature and V. vulnificus levels at harvest and the parameters that define the relationship between V. vulnificus levels at consumption and the risk of illness. To verify that we correctly replicated the WHO/FAO risk simulation model, we compared our estimates to the estimates reported in the WHO/FAO risk assessment for each of the four seasons. Using the same data inputs, model parameters, and assumptions, our estimates of the risk of illness differ from the estimates reported in the WHO/FAO risk assessment by less than 1 percent in the spring, less than 1 percent in the summer, 4 percent in the fall, and 42 percent in the winter. We report only our estimates from the summer months because the risk of V. vulnificus illness is greatest during these months and because estimates from our model are most similar to the estimates from the WHO/FAO model during these months. See table 2 for a comparison between the WHO/FAO and GAO estimates for key stages of the model for the summer months. After verifying that we replicated the WHO/FAO risk simulation model, we modified it to simulate the impact of time and temperature controls in Florida, Louisiana, and Texas. Since time and temperature controls are specific to each state and each month, we modified the model to provide the estimated risk of V. vulnificus illness for each state and each month. 
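To make the structure of this kind of simulation concrete, the sketch below outlines a four-stage Monte Carlo calculation of the same general form in Python (we programmed our model in Statistical Analysis Software). The distributions, growth and die-off functions, dose-response relationship, and all parameter values in the sketch are simplified placeholders, not the inputs and equations documented in the WHO/FAO risk assessment.

```python
# Structural sketch of a four-stage Monte Carlo risk simulation
# (harvest -> first refrigeration -> cooldown -> consumption).
# Every distribution and parameter below is a placeholder for illustration;
# the WHO/FAO risk assessment documents the actual inputs and equations.
import random
import statistics

def draw_inputs():
    water_temp = random.gauss(30, 2)          # deg C, placeholder summer distribution
    air_minus_water = random.gauss(2, 1)      # placeholder temperature difference
    hours_unrefrigerated = random.uniform(1, 10)
    hours_to_cooldown = random.uniform(1, 7)  # hours to reach 55 deg F (about 12.8 deg C)
    days_refrigerated = random.uniform(1, 14)
    return water_temp, air_minus_water, hours_unrefrigerated, hours_to_cooldown, days_refrigerated

def vv_per_gram_at_harvest(water_temp):
    # Placeholder: warmer water yields higher V. vulnificus density at harvest.
    return 10 ** (0.2 * water_temp - 4.0)

def grow(level, temp, hours):
    # Placeholder exponential growth while oysters are above 55 deg F.
    return level * 2 ** (hours * max(temp - 12.8, 0.0) / 20.0)

def die_off(level, days):
    # Placeholder die-off during refrigerated storage at or below 55 deg F.
    return level * 0.95 ** days

def prob_illness_per_serving(dose):
    # Placeholder dose-response relationship.
    return min(1.0, 2e-10 * dose)

def simulate(n=10_000):
    risks = []
    for _ in range(n):
        water, diff, hrs_unref, hrs_cool, days = draw_inputs()
        level = vv_per_gram_at_harvest(water)
        level = grow(level, water + diff, hrs_unref)   # growth before first refrigeration
        level = grow(level, 20.0, hrs_cool)            # slower growth during cooldown (placeholder temperature)
        level = die_off(level, days)                   # decline in cold storage
        dose = level * 200                             # placeholder grams of oyster meat per serving
        risks.append(prob_illness_per_serving(dose))
    return statistics.mean(risks) * 100_000            # illnesses per 100,000 servings

print(f"Estimated illnesses per 100,000 servings (placeholder inputs): {simulate():.2f}")
```

Each repetition of such a simulation yields a slightly different estimate, and the spread across repetitions is what produces the uncertainty intervals reported later in this appendix. Our actual modification tailors the model's inputs and estimates to each state and each month.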
In particular, we used monthly, rather than seasonal, parameters that were reported in the WHO/FAO risk assessment to estimate water temperature and the difference between water temperature and air temperature. In addition, the WHO/FAO risk assessment reported parameters for the distribution of the number of hours that oysters are left unrefrigerated separately for Louisiana and for the rest of the Gulf Coast states. We applied these parameters to our simulation, using one set of parameters for Louisiana and another set of parameters for Florida and Texas. The WHO/FAO risk assessment model’s estimates are based on the average of 100 samples of 10,000 observations each. To provide more reliable uncertainty intervals for these estimates, our modification of the WHO/FAO model uses 1,000 samples. Effective May 1, 2010, Florida, Louisiana, and Texas implemented new, more stringent time and temperature controls that specify (1) the maximum number of hours that oysters are allowed to be unrefrigerated after being harvested and (2) the maximum number of hours before refrigerated oysters must cool down to 55 degrees Fahrenheit. The new more stringent controls established by the three states for 2010 and incorporated in their risk management plans are presented in table 3 for each state and each month. These controls are stricter during the warmer months when V. vulnificus bacteria multiply more quickly. In August, for example, Louisiana and Texas have the most restrictive controls for the time oysters could remain unrefrigerated, allowing 1 hour from harvest until refrigeration, and Florida has the most restrictive controls for the time until refrigerated oysters must cool down, allowing 2 hours from when they are first refrigerated until they reach 55 degrees Fahrenheit. For the purpose of this analysis, we define the baseline scenario as the risk of illness in the absence of the new, more stringent time and temperature controls. Under the baseline scenario, the WHO/FAO risk simulation model assumes a certain statistical distribution in the number of hours that oysters ordinarily would be left unrefrigerated. The values in the statistical distribution, which is based on assumptions in the risk simulation model, range from 1 hour to 10 or more hours, depending upon the state and the month, and specifies the percentage of oysters that ordinarily would be left unrefrigerated for any given number of hours within this range. Based on this statistical distribution, we estimated the percentage of oysters that ordinarily—that is, in the absence of the new, more stringent time and temperature controls—would be refrigerated within the maximum number of hours established by time and temperature controls for each state and each month. In states and months with the least stringent controls, a majority of oysters ordinarily would be refrigerated within these time limits, even in the absence of these time and temperature controls. For example, in Florida during the three summer months, according to the assumed statistical distribution, approximately 85 percent of oysters harvested ordinarily would be refrigerated within the 6-hour limit established by the new, more stringent time and temperature controls. By contrast, in states and months with the most stringent controls, fewer oysters harvested would ordinarily be refrigerated within these limits. 
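The sketch below illustrates this baseline calculation: given an assumed distribution of the number of hours oysters are left unrefrigerated, it computes the share that would already meet a given time limit even without the controls. The triangular distribution and its parameters are placeholders for illustration, not the distributions documented in the WHO/FAO risk assessment.

```python
# Sketch: share of oysters refrigerated within a time limit under a baseline
# distribution of unrefrigerated hours. The triangular distribution below is a
# placeholder; the WHO/FAO assessment specifies the actual distributions by
# state and season.
import random

def share_within_limit(limit_hours, n=100_000, low=1.0, mode=4.0, high=10.0):
    draws = (random.triangular(low, high, mode) for _ in range(n))
    return sum(h <= limit_hours for h in draws) / n

for limit in (1, 6, 10):
    print(f"Limit {limit} h: about {share_within_limit(limit):.0%} of oysters "
          "would already be refrigerated in time under the baseline")
```

Under such a calculation, the stricter the time limit, the smaller the share of oysters that would ordinarily be refrigerated in time.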
For example, in Louisiana and Texas in August, virtually none of the harvested oysters ordinarily would be refrigerated within the 1-hour limit established by these time and temperature controls, according to the assumed statistical distribution. To simulate compliance with the maximum number of hours that oysters are allowed to be unrefrigerated under applicable time and temperature controls, we make the following assumptions about the behavior of oyster harvesters. First, harvesters that ordinarily—that is, under the baseline scenario—would leave oysters unrefrigerated for less than the maximum number of hours would continue to leave them unrefrigerated for the same number of hours that they ordinarily would have. Second, harvesters who ordinarily would leave oysters unrefrigerated for more than the maximum number of hours, and who decide to change to comply with time and temperature controls, would leave oysters unrefrigerated for no more than the maximum allowed number of hours. Third, harvesters who ordinarily would leave oysters unrefrigerated for more than the maximum number of hours, but who decide not to change to comply with time and temperature controls, would continue to leave oysters unrefrigerated for the same number of hours that they ordinarily would have. Similarly, to model the impact of compliance on the number of hours until oysters reach the desired 55 degrees Fahrenheit, we assumed that producers would facilitate more rapid cooling so that oysters would take no longer than the maximum number of hours to cooldown. Using these assumptions, we developed 10 compliance scenarios for each state and each month. These scenarios correspond to estimated compliance rates of 10 percent through 100 percent in increments of 10. Under these scenarios, the model first estimates the percentage of oysters that ordinarily would be refrigerated within the maximum number of hours established by time and temperature controls for each state and each month. To obtain a given compliance rate, the model calculates the additional percentage of oysters that would need to be refrigerated within the maximum number of hours by time and temperature controls to reach a given rate of compliance with regard to the maximum time allowed to be unrefrigerated. For this additional percentage of oysters, the model assumes that oysters would be refrigerated within the maximum number of hours allowed by the controls for that state and month. In Florida during the three summer months, for example, 85 percent of oysters are assumed to be refrigerated within the 6-hour limit in the absence of time and temperature controls, based on the assumed statistical distribution. To attain a 90 percent compliance rate, the model would select the additional 5 percent of oysters, from among the 15 percent that exceed the limit, and would assume that these oysters would be unrefrigerated for no longer than 6 hours. Since actual compliance rates are unknown, these calculations allow us to estimate the number of hours that oysters would be unrefrigerated assuming various compliance rates. The three states used FDA’s risk calculator and their own input data, including water temperature and air temperature for each month, to establish the specific limits for time and temperature controls. The risk calculator, which was developed by FDA, is a simplified version of the WHO/FAO risk simulation model and operates in a computer spreadsheet. It allows the user to estimate the risk of illness from V. 
vulnificus under various scenarios, such as different limits for the maximum number of hours that oysters can be left unrefrigerated. To determine the estimated number of illnesses per 100,000 servings of raw oysters (i.e., risk of illness) consumed by the susceptible population for each state and month under time and temperature controls, we used FDA’s risk calculator. To make the results of our analysis comparable across the states and consistent with the assumptions of the baseline scenario, we used the input data for water temperature and air temperature from the WHO/FAO risk simulation model, rather than the data used by the states. As a result, our estimates of the risk of illness differ somewhat from the estimates that the states made in using the risk calculator to develop their risk management plans. Unlike the states, we estimated the number of illnesses per 100,000 servings of raw oysters consumed by the susceptible population, rather than the total number of illnesses, because (1) time and temperature controls are designed to affect the risk of illness per serving, not the total number of raw oyster servings consumed and (2) complete state-by-state and month-by-month data on the number of raw oyster servings consumed were not available. The estimated number of illnesses per 100,000 servings for each state and each month, as computed by the risk calculator, is presented in table 4. These estimates represent the number of illnesses that states would expect, based on the risk calculator, as a result of time and temperature controls. We compared these estimated numbers of illnesses, for each state and each month, to the estimates of our modification of the risk simulation model, which accounts for uncertainty in the estimates and for various compliance rates. FDA’s risk calculator estimates the same number of V. vulnificus illnesses as the WHO/FAO risk simulation model, on average. Unlike the risk simulation model, however, the risk calculator does not provide uncertainty distributions associated with these estimates. Using our modification of the WHO/FAO risk simulation model, we computed the amount of uncertainty associated with estimates made by FDA’s risk calculator. In any given month and in any given state, uncertainty in model assumptions may cause the actual number of illnesses to differ from the number estimated by the risk calculator. For example, in Texas in the month of August, FDA’s risk calculator estimates that time and temperature controls will lead to 2.84 illnesses per 100,000 raw oyster servings consumed by the susceptible population. While this is true on average, the number of illnesses per 100,000 servings could vary from the lower bound of 2.44 to the upper bound of 3.63 (for a 90 percent uncertainty interval), according to our analysis. We find a similar range of uncertainty in the estimated number of V. vulnificus illnesses for Florida and Louisiana. Table 4 presents our estimates compared with estimates made by the risk calculator, assuming 100 percent compliance with time and temperature controls for the three states during the summer months. Furthermore, we estimate that time and temperature controls would result in a smaller reduction in the number of V. vulnificus illnesses if compliance rates are less than 100 percent. Table 5 shows the estimated reduction in V.
Table 5 shows the estimated reduction in V. vulnificus illnesses as a result of time and temperature controls for various compliance rates for each of the three states during the summer months, based on our modification of the WHO/FAO risk simulation model. For example, during August, assuming that 100 percent of oysters are harvested in compliance with time and temperature controls, our analysis estimates these controls will reduce illnesses by between 16 percent and 27 percent in Florida, between 30 percent and 47 percent in Louisiana, and between 26 percent and 43 percent in Texas. If compliance is less than 100 percent, however, we estimate that these controls will lead to a much smaller reduction in illnesses. As can be seen in table 5, if 90 percent of oysters are harvested in compliance with time and temperature controls—meaning a noncompliance rate of 10 percent—the illness reduction is smaller than the illness reduction under the assumption of 100 percent compliance. For example, in the month of August, we estimate that illnesses would be reduced between 11 percent and 18 percent in Florida, between 21 percent and 32 percent in Louisiana, and between 19 percent and 31 percent in Texas, assuming 90 percent compliance. Furthermore, if 80 percent of oysters are harvested in compliance with these controls—meaning that noncompliance rates are 20 percent—the estimated illness reduction is smaller still. In particular, we estimate that illnesses would be reduced between 8 percent and 14 percent in Florida, between 15 percent and 23 percent in Louisiana, and between 14 percent and 22 percent in Texas. At lower levels of compliance, an even smaller reduction in the number of V. vulnificus illnesses is likely. Because time and temperature controls are less effective at lower compliance rates, the probability that these controls will lead to the illness reduction estimated by the risk calculator is also lower when compliance rates are lower. Table 6 shows the probability that time and temperature controls will reduce V. vulnificus illnesses to the number estimated by FDA’s risk calculator or lower for various compliance rates for each of the three states, based on our risk simulation model. During the summer, assuming that 100 percent of oysters are harvested in compliance with time and temperature controls, as would be expected with the risk calculator’s design, there is between a 43 percent chance and a 55 percent chance that these controls will reduce illnesses to the number estimated by the risk calculator or lower, depending on the state and the month. If compliance is less than 100 percent, however, our analysis shows that it is unlikely that time and temperature controls will reduce illnesses to the number estimated by the risk calculator or lower. As can be seen in table 6, if 90 percent of oysters are harvested in compliance with time and temperature controls—meaning a noncompliance rate of 10 percent—the chances that these controls will reduce illnesses to the estimated number or lower drop substantially when compared with the chances under the assumption of 100 percent compliance. In particular, for the month of August, we estimate that the probability drops from 48 percent to 18 percent in Florida, from 43 percent to 2 percent in Louisiana, and from 43 percent to 4 percent in Texas. Furthermore, if 80 percent of oysters are harvested in compliance with these controls—meaning that noncompliance rates are 20 percent—the likelihood of success is smaller still. In particular, we estimate that the probability that these controls will reduce illnesses to the number estimated by the risk calculator or lower drops to 7 percent in Florida, less than 1 percent in Louisiana, and 1 percent in Texas.
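The probabilities in table 6 amount to asking, across the uncertainty draws for a given compliance scenario, how often the simulated illness rate falls at or below the risk calculator’s point estimate. A minimal sketch (in Python) of that calculation is shown below; the normal draws and their shifts under reduced compliance are illustrative assumptions, not output from the actual model.

import numpy as np

def prob_at_or_below_target(simulated_rates, calculator_estimate):
    # Fraction of uncertainty draws at or below the calculator's point estimate
    rates = np.asarray(simulated_rates, dtype=float)
    return float(np.mean(rates <= calculator_estimate))

rng = np.random.default_rng(7)
target = 2.84                                                  # e.g., the calculator's August estimate for Texas
draws_100pct = rng.normal(loc=2.9, scale=0.35, size=20_000)    # illustrative draws, full compliance
draws_90pct = rng.normal(loc=3.4, scale=0.35, size=20_000)     # illustrative draws, 90 percent compliance

print(prob_at_or_below_target(draws_100pct, target))   # roughly 0.4 to 0.5
print(prob_at_or_below_target(draws_90pct, target))    # much lower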
Like all quantitative models, our analysis is subject to certain limitations. First, our analysis is subject to all of the limitations to which the WHO/FAO risk simulation model is subject. Though the WHO/FAO model is based on credible scientific studies, uses a valid and reliable methodology, and predicts actual illness rates with reasonable accuracy, it is subject to limitations just as all quantitative models are. For example, the model assumes that V. vulnificus levels at harvest are determined only by water temperature, that all strains of V. vulnificus are equally virulent, and that the risk of infection is identical for all members of the susceptible population, though these are simplifications. Furthermore, the exact relationship between levels of V. vulnificus and the observed number of illnesses is not known, and there are no precise estimates of the size of the susceptible population. Second, our simulations of compliance rates are based on certain assumptions about handling of oysters under the baseline scenario—including the number of hours that oysters would be unrefrigerated and the number of hours until oysters cool down—and on certain assumptions about how producers might respond to time and temperature controls under various compliance scenarios. Since we do not have direct data on actual compliance rates, however, our estimates are only an approximation and cannot be validated against observed data. Third, our estimates of the probability that time and temperature controls will lead to the levels of illness estimated by the risk calculator are approximations and are a function of the data inputs, assumptions, and equations in the risk simulation model. In spite of these limitations, however, we believe our estimates are sufficiently reliable to demonstrate that there is a substantial chance that time and temperature controls will not lead to the number of V. vulnificus illnesses estimated by the risk calculator or lower, especially with less than perfect compliance rates.

Appendix II: Comments from the Department of Health and Human Services

GENERAL COMMENTS OF THE DEPARTMENT OF HEALTH AND HUMAN SERVICES (HHS) ON THE GOVERNMENT ACCOUNTABILITY OFFICE’S (GAO) DRAFT REPORT ENTITLED, “FOOD SAFETY: FDA NEEDS TO REASSESS ITS APPROACH TO REDUCING AN ILLNESS CAUSED BY EATING RAW OYSTERS” (GAO-11-607)

The Department appreciates the opportunity to review and comment on this draft report. Vibrio vulnificus (V. vulnificus) is a naturally occurring bacterium that can cause a severe and life-threatening illness that is fatal about 50 percent of the time, generally causing about 15 deaths per year. V. vulnificus is associated with the consumption of raw oysters and characterized by fever and chills, decreased blood pressure (septic shock), and blistering skin lesions. At greatest risk are individuals whose immune systems have been compromised or who have certain health conditions, such as liver, stomach, or blood disorders; cancer; AIDS; diabetes; kidney disease; and chronic alcohol abuse. Effective technologies have been developed that can largely eliminate the hazard of V. vulnificus while producing oysters that retain the sensory qualities of untreated product.
These technologies, known as Post Harvest Processing (PHP), include individual quick freezing (IQF) with extended frozen storage, high hydrostatic pressure, mild heat, and low dose gamma irradiation. PHP technologies have proven to be effective in eradicating V. vulnificus associated illness. For example, in 2003, the State of California prohibited Gulf Coast oysters from entering the state during the season of greatest risk unless they had undergone PHP. Once PHP was required in California, the number of deaths in the state fell from 40 between 1991 and 2001 to nearly zero since then. California’s PHP requirement has virtually eliminated the state’s V. vulnificus-related deaths and illness from consuming raw oysters. The Food and Drug Administration (FDA) has collaborated with the Interstate Shellfish Sanitation Conference (ISSC) for years to reduce V. vulnificus illness through improving consumer education and refrigeration practices, but these practices have failed to achieve measurable reductions of V. vulnificus illnesses nationally. FDA has proposed the implementation of PHP, or other equivalent controls, to substantially reduce V. vulnificus illness, but the Gulf Coast industry, state officials, and elected representatives have raised concerns about implementing PHP controls. FDA has considered these concerns and recognizes the need to further examine the timing and processes for oyster harvesters to gain access to PHP facilities or equivalent controls. To that end, FDA commissioned an independent study to assess how PHP or other equivalent controls can be implemented in a safe, efficient, and economic manner and will be addressing the concerns related to that study raised by GAO in an addendum to that study. FDA will continue to collaborate and dialogue with industry, state officials, and the ISSC to explore reasonable and workable approaches to substantially reduce V. vulnificus illness and protect the American people from this painful, deadly and preventable disease. FDA’s responses to GAO’s recommendation are set forth below: GAO Recommendations To better ensure the safety of oysters from the Gulf of Mexico that are sold for raw consumption, we recommend that the Commissioner of FDA work with the Executive Board of the ISSC to take the following four actions: agree on a nationwide goal for reducing the number of V. vulnificus illnesses caused by the consumption of Gulf Coast raw oysters and develop strategies to achieve that goal, recognizing that consumer education and time and temperature controls have not resulted in achievement of the 60 percent V. vulnificus illness rate reduction goal and that the capacity to use post-harvest processing (PHP) on Gulf Coast oysters harvested from April through October that are intended for raw consumption does not currently exist; FDA Response The ISSC has attempted to achieve the 60% illness reduction goal that had been established in 2001 through improved refrigeration practices, limited PHP and consumer education, but these efforts have not succeeded. FDA recognizes the efforts that went into these undertakings and will continue to collaborate with the ISSC to find strategies and explore approaches to establishing reasonable and workable goals for reducing V.
vulnificus illness and protecting Americans from this deadly and preventable disease. As FDA continues in these efforts the agency remains mindful that effective technologies have been developed that can largely eliminate the hazard of V. vulnificus while producing oysters that retain the sensory qualities of untreated product. correct the limitations in the current approach to measuring progress toward the 60 percent V. vulnificus illness rate reduction goal or design and implement a new approach that does not have the limitations of the current one; FDA Response FDA agrees that the current approach used by the ISSC to count V. vulnificus illnesses and assess illness rate reduction is defective and should be corrected. The evaluation of success of existing control measures is based on counting illnesses reported in four “core” states (CA, LA, TX, FL). For a number of years, FDA has advised the ISSC of concerns with that approach. While the ISSC has claimed some success in its effort to reduce V. vulnificus illnesses, using numbers for the four “core” states, the rate of illness at the national level has remained relatively static. Much of the success claimed by the ISSC is directly attributable to the 2003 California ban on raw, untreated Gulf oysters. That ban virtually eliminated oyster associated V. vulnificus illnesses in California, which previously reported 5 to 6 annually. Continued use of California as a “core” state in the ISSC’s illness counting system biases the calculated illness reduction rate. Even if the ISSC’s 60% goal had been achieved, it is unlikely that a measurable reduction in the rate of illness nationally would have been realized. This presented itself as a significant factor in FDA’s announcement of its intent to revise its policy and issue guidance regarding PHP. FDA wishes to continue working with the ISSC to develop a counting formula that accounts for illness nationally and that realistically defines how effective V. vulnificus control measures are, whatever they include. regularly evaluate the effectiveness of V. vulnificus illness reduction strategies, such as consumer education and time and temperature controls, to determine whether they are successful and should be continued or are ineffective and should be stopped; FDA Response FDA agrees with GAO that the approach that has been used to evaluate the effectiveness of illness reduction strategies has limitations that undermine its credibility, including the limited number of states used in determining V. vulnificus illness reduction, and the overstatement of the effectiveness of the primary V. vulnificus illness reduction strategies, consumer education and time and temperature controls—by including V. vulnificus illness data from California. Historically FDA, ISSC and the States have devoted significant resources to conducting V. vulnificus education campaigns. Directed at the consuming public, these activities have been aimed at informing at-risk consumers about the risks of consuming raw molluscan shellfish.
Campaign efforts have also targeted health professionals who provide care to at-risk individuals, including those with underlying medical conditions, such as liver disease and chronic alcohol abuse. While FDA has not undertaken a study to specifically examine the impact of educational programs, there is no indication that they have resulted in any substantial reduction in the occurrence of V. vulnificus illnesses, as evidenced by the relatively static level of illnesses and deaths occurring each year nationally. A survey commissioned by the ISSC in 2004 does not suggest a reduction in the number of at-risk consumers who are consuming raw oysters and there is no evidence of illness reduction at the national level. Furthermore, even though the independent impact of education on the rate of illness cannot be measured, the impact appears to be marginal at best given that the current illness reduction rate (based on 2009 and 2010 data) is only 38.8% in the “core” states. That rate of reduction is significantly skewed by the use of California as a counting state. FDA has concluded that additional efforts to educate will have little if any beneficial outcome. With regard to assessing the effectiveness of existing controls on illness reduction, it is extremely difficult, and perhaps impossible, to tease out the contribution of one control measure versus another. For that reason, ISSC goals have relied on illness counting to determine their success. Unfortunately, the counting strategy employed by the ISSC is flawed, for reasons previously discussed and pointed out by GAO. Studies to compare V. vulnificus levels in retail oysters subsequent to states’ implementation of time and temperature controls to levels found in previous retail studies may help identify levels of consumer exposure. However, FDA has no plans to conduct additional studies given existing budgetary and competing priority considerations. One thing that remains clear is that implementation of strict time and temperature controls by states has not achieved the ISSC 60% illness rate reduction goal. Nor have these controls resulted in any illness reduction at the national level. Arguments have been put forth suggesting that industry compliance is problematic and that increased effort by states and FDA to enforce compliance is needed. Toward that end, FDA is moving from biennial to annual evaluation of V. vulnificus control plans being used by states and industry. As part of its increased compliance evaluation, FDA will conduct annual onsite checks at oyster landing sites and processing plants to examine compliance with V. vulnificus HACCP controls, harvester records, time/temperature logs, and actual product temperatures. Such efforts will help address concerns that the goal has not been met due to inadequate implementation and enforcement of controls.
conduct further study of the six issues of concern that we identified regarding the RTI report’s economic analysis to ensure a more accurate assessment of the feasibility of developing adequate capacity and before FDA and the ISSC move forward with revising the National Shellfish Sanitation Program’s shellfish safety guidelines to provide post-harvest processing for oysters harvested from Gulf Coast waters during warmer months and intended for raw consumption.

FDA Response The 6 issues of concern identified by GAO are as follows:

Baseline data may not be representative of industry; FDA disagrees with the argument that baseline data, upon which the study is premised, is not representative. Data for 2008 are representative of a typical year, in which natural or manmade disasters are not of impact. As such, 2008 serves well as a baseline for what a “normal” year in the Gulf historically represents. Use of data for 2010 would not have represented a typical harvest year due to the Gulf oil spill disaster that reduced harvest levels due to closures in many Gulf Coast harvest areas. This circumstance would have skewed the results, possibly underestimating the impact and cost associated with PHP. Furthermore, to have waited until more recent data was available, and for what would be representative of a “normal” year, would have delayed efforts by FDA to examine the feasibility of PHP. Moreover, according to RTI, the overall conclusions of their study likely would not have changed.

Key costs are excluded; FDA recognizes that exclusion of certain costs can and have affected final cost outcomes presented in the RTI report. In an effort to better assess costs associated with needs such as land purchase, new facility construction, transportation, and insurance, FDA has commissioned additional work to address these cost considerations.

Who should pay to expand processing capacity is not clear; FDA recognizes the importance to industry of identifying financing opportunities to consider and tap to partially defray the costs of implementing the PHP. FDA has commissioned additional analysis to be performed by Research Triangle Institute to develop information to fill this gap. FDA does not consider identification of funding opportunities to be principally the responsibility of the Agency.

Limited support exists for estimated time frame for increasing post-harvest processing; The report presents what are considered minimum time frames for meeting the needed PHP capacity and its implementation. As a baseline minimum, it provides FDA with guidance on what the general time frame for full implementation may be. FDA recognizes that there may be additional time needs and constraints. The Agency stands ready to engage the industry and states in dialogue regarding time frames.

Assumptions about post-harvest processing for oysters shipped within state borders; FDA and RTI recognize that the study did not consider the possibility of Gulf States allowing for the intrastate sale of untreated oysters.
FDA has commissioned additional analysis to be performed by Research Triangle Institute to address this concern. It may be possible that analysis could be done to account for intrastate shipment and sale of oysters for raw half-shell consumption that have not undergone PHP. It may also be possible that costs could be recalculated assuming that private processors would only post-harvest process interstate half- shell oyster shipments. In addition, the economic impact model used to assess the price and quantity effects of PHP requirements could be altered to assume that only interstate shipments of oysters intended for raw half-shell consumption would be post-harvest processed. It has been pointed out however, that to make these alterations to the model would require development of assumptions regarding numerous values in the model given the lack of data (e.g., estimates of the degree to which consumers in each of the Gulf states would substitute between oysters that have and have not been post-harvest processed). Post-harvest processing costs may not be able to be passed on to consumers. PHP Gulf oysters are currently marketed at premium prices, according to the report. However, if PHP becomes the standard for Gulf oysters, the ability to gain premium prices to offset PHP processing costs becomes less likely. There are many uncertainties around the question of price. RTI indicated to FDA that, “If it is indeed the case that none of the costs of PHP could be passed along to consumers, an economic impact model is not an appropriate tool for assessing effects of the PHP requirements because the main purpose of this type of model is to determine the extent to which prices in the market would adjust to a change. In this case, the results of the economic impact model (Section 5.2 of the report) should simply be disregarded, and the closure analysis (Section 5.1 of the report), which includes estimates of the total costs of complying with PHP requirements, should be the focus of the economic analysis.” Mr. Steve Secrist, Assistant Director United States Government Accountability Office Natural Resources & Environment Western Region, San Francisco Office 301 Howard Street, Suite 1200 San Francisco, CA 94105 Thank you for providing the Interstate Shellfish Sanitation Conference (ISSC) an opportunity to review and comment on your draft report entitled, Food Safety: FDA Needs to Reassess Its Approach to Reducing an Illness Caused by Eating Raw Oysters (GAO-11-607). The Executive Board of the ISSC has reviewed the report and their comments have been incorporated into the attached document. FDA has a representative on the Executive Board; however the agency did not participate in this ISSC review. The report focuses on ISSC efforts to reduce Vibrio vulnificus (Vv) related illnesses and deaths. The comments are formatted consistent with the draft report. The ISSC is in general agreement with the recommendations of your report. However, the scope of your investigation did not allow for a review of the history of involvement by ISSC and FDA on this issue. The scope did not allow for a full explanation of the many issues associated with Vv that makes this problem very unique. Regardless, we will continue to work with FDA to develop risk-based, cost effective ways to improve the safety of raw molluscan shellfish. 
We continue to be committed to reducing illness associated with Vv and will continue our efforts to explore cost effective appropriate measures which can be implemented to address illnesses associated with this naturally occurring Vibrio. The ISSC Executive Board and membership appreciates your efforts in preparation and communication in the development of this report. Your efforts were thorough and the depth of knowledge obtained by your staff is to be commended. Should you have any questions on comments regarding this response, please contact Ken B. Moore, ISSC Executive Director or me at (508) 990-2860 extension 122. J. Michael Hickey, Chairman ISSC Executive Board cc: ISSC Executive Board Members Ken B. Moore, Executive Director ISSC Vibrio Management Committee Members INTERSTATE SHELLFISH SANITATION CONFERENCE COMMENTS ON THE GOVERNMENT ACCOUNTABILITY OFFICE DRAFT REPORT Food Safety: FDA Needs to Reassess Its Approach to Reducing an Illness Caused by Eating Raw Oysters The Interstate Shellfish Sanitation Conference (ISSC) welcomes and appreciates the opportunity to review and comment on the Government Accountability Office’s (GAO) draft report. The ISSC is in general agreement with the four (4) recommendations of the report. Provided below are general comments and specific comments to the report. The National Shellfish Sanitation Program (NSSP) was developed in 1925 when the U. S. Public Health Service responded to a request for assistance from local and state public health officials in controlling typhoid fever and other bacterial diseases associated with the consumption of raw molluscan shellfish (oysters, clams, and mussels). The public health control procedures established by the Public Health Service were dependent on the cooperative and voluntary efforts of State regulatory agencies. These efforts were augmented by the assistance and advice of the Public Health Service (now the Food and Drug Administration ) and the voluntary participation of the shellfish industry. These three parties combined to form a tripartite cooperative program. The guidelines of the program have evolved into the NSSP Guide for the Control of Molluscan Shellfish which is managed and updated by the ISSC. The cooperative nature of the NSSP allows FDA to administer a domestic and international program with a relatively small federal commitment. In the many years since its establishment, the program has proven to be effective in minimizing the reoccurrence of illness associated with bacterial pathogens originating from human waste. The NSSP has also responded and essentially eliminated the occurrence of illness from natural toxins associated with harmful algae blooms. The ISSC, NSSP, and FDA continue to face new challenges in assuring that molluscan shellfish are safe for human consumption. Naturally occurring pathogens, particularly Vibrio parahaemolyticus (Vp) and Vibrio vulnificus (Vv) is one of those challenges we must address. Our commitment has not changed since 1925. The ISSC Vibrio Management Committee is aggressively pursuing effective and appropriate strategies that will address this food safety concern. The ISSC applauds the effort of the GAO to examine the Vv problem. However, the scope of your investigation did not allow for a broader explanation of the uniqueness of the Vv issue. An understanding of the uniqueness is critical for a full understanding of the present controls that exist for addressing Vv illnesses. 
The controls which have been incorporated into the NSSP since 1987 to address Vv were developed by ISSC and supported by FDA. FDA was fully engaged in the development of many of the approaches. Together, we recognize the limited success of several of our programs. In the late 1980s we agreed that with the small number of illnesses that physician and consumer education was more prudent than regulation. We now know that while education has benefits it will not significantly reduce national Vv illnesses. In 2001 we agreed that if the industry was allowed to process oysters to reduce Vv to non detectable levels and label the product safe that consumers would demand the safer product and the market place would encourage the industry to Post Harvest Process (PHP) oysters. This has not been the case. Consumer demand for PHP product has not created the financial incentive to encourage the majority of the industry to pursue PHP. Your report provides an accurate estimate of the prevalence of Vv illnesses which is approximately 32 illnesses per year. When compared to other food borne illnesses, this number is very small. This number has remained virtually unchanged since the early 1990s. During this period the number of reporting states has nearly doubled. The Centers for Disease Control and Prevention (CDC) reports that due to the severity of the illness, practically all cases are reported. The State Voting Delegates of ISSC, responsible for implementing controls in their respective States, have struggled to identify controls to address a naturally occurring organism that affects only 32 individuals annually. It is also important to note that the general population is not at risk. Vv poses a risk to immuno-compromised individuals. Approximately 7% of the US population is immuno-compromised. Only a small number of that 7% is affected. Most food safety concerns place all consumers at risk. States prioritize resources to address food safety issues which pose the greatest threat of illness to consumers. Implementing regulatory controls which cause industry financial hardship make regulating this problem very problematic. The inability to identify other food safety issues with similar illness burdens that have been regulated with similar costs to an industry has made consensus on this issue difficult. The cost benefit debate on Vv has always been an obstacle for the ISSC in agreeing on controls. Yet the ISSC has continued to be proactive in its efforts to reduce Vv illnesses. The report recommends that the ISSC and FDA agree on an appropriate Vv illness reduction goal. To accomplish this, ISSC and FDA must address the two broad questions: (1) what should be the goal of a public funded regulatory program for addressing a food safety issue which affects 32 persons annually; and (2) to what extent should a program of this type impose economic hardship to the industry. Your report outlines several areas of disagreement between the FDA and ISSC. There is agreement between FDA and ISSC in several areas that provide a foundation for identifying approach for addressing the problem. The FDA and ISSC agree that Vv illnesses pose a health risk which requires public health intervention. There is agreement on the scope of the problem and the ability of known controls to reduce risk. The only major disagreement is the extent of public health interventions that will appropriately address the problem. The extent of the interventions dictates the financial impact to the shellfish industry. 
The present controls adopted by the ISSC recognize a risk at harvest and are intended to minimize any increase in risk as a result of post harvest growth. Although these controls pose significant fiscal challenges for the industry, states have imposed these controls. The FDA is proposing an approach requiring PHP, which would reduce the levels of Vv post harvest and further reduce the risk. While this approach seems plausible it can not be implemented without financial devastation to the industry (see Research Triangle Institute (RTI) report). The FDA announced in 2009, intentions to reformulate policy to require post harvest processing or equivalent controls. This FDA announcement exacerbated the controversy associated with Vv controls. The cooperative nature of the NSSP requires support from all participants. The announcement of FDA was unilateral and has alienated the industry and states. Since the announcement FDA has been reluctant to engage in discussions regarding Vv goals and strategies to achieve those goals. acceptable risk for at-risk consumers choosing to eat raw molluscan shellfish. For that reason the ISSC firmly supports the recommendations of GAO. Specific Comments to the GAO Report Page 2: “The shellfish safety guidelines also included goals for reducing the rate of illness for four reporting states” ISSC Comments: The goal of the ISSC Vv Management Plan was to reduce illnesses nationally. The four (4) states of California, Florida, Louisiana, and Texas were used to measure effectiveness. These states were chosen because of their history in reporting Vv cases. Page 24: “A senior FDA official told us that this motion is unlikely to be implemented in any meaningful way given limited state enforcement capacity.” ISSC Comments: The FDA is responsible for ensuring compliance. The FDA should not have concurred with ISSC adoption of time temperature controls if there were concerns regarding implementation and compliance. The ISSC expects that the 2011 focused efforts of FDA to evaluate State compliance will result in effective implementation. In addition to the contact named above, Stephen D. Secrist, Assistant Director; Leo G. Acosta, Analyst in Charge; Kevin Bray; Mark A. Braza; Allen T. Chan; Nancy L. Crothers; Barbara J. El Osta; Lorraine R. Ettaro; Mitchell B. Karpman; Anthony R. Padilla; Emmy L. Rhine; Anne O. Stevens; Kiki Theodoropoulos; and Nimish D. Verma made key contributions to this report. Also contributing to this report were Michael D. Derr, Katherine M. Raheb, and Jena Y. Sinkfield.
Vibrio vulnificus (V. vulnificus) is a bacterium that occurs naturally in the Gulf of Mexico. On average, since 2000, about 32 individuals a year in the United States have become ill from eating raw or undercooked oysters containing V. vulnificus, and about half have died. The Food and Drug Administration (FDA) is responsible for ensuring oyster safety and works with the Interstate Shellfish Sanitation Conference (ISSC), which includes representatives from FDA, states, and the shellfish industry to establish guidelines for sanitary control of the shellfish industry. GAO was asked to determine the extent to which FDA and the ISSC agree on the V. vulnificus illness reduction goal, use a credible approach to measure progress toward the illness rate reduction goal, have evaluated the effectiveness of their actions in reducing V. vulnificus illnesses, and whether the Gulf Coast oyster industry has adequate capacity to postharvest process oysters harvested April through October. GAO reviewed data and documents and interviewed officials in FDA, the ISSC, Florida, Louisiana, and Texas. FDA and the ISSC do not agree on a common V. vulnificus illness reduction goal. In October 2009, FDA announced its intention to change its approach to V. vulnificus illnesses from reducing them to largely eliminating them. To do so, FDA would require states to use postharvest processing methods, which include a mild heat treatment known as low temperature pasteurization. FDA's announced approach was a change from the 60 percent illness rate reduction goal established by the ISSC in 2001, with FDA concurrence. In a November 2009 letter to FDA, the ISSC expressed disappointment that FDA had not followed a 1984 memorandum of understanding that calls for FDA and the ISSC to consult on such matters. If FDA and the ISSC are not in agreement on the illness reduction goal and strategies to achieve it, it will be difficult for the Gulf Coast states to move forward to significantly reduce the number of consumption-related V. vulnificus illnesses. The approach FDA and the ISSC have been using to measure progress toward the previously agreed upon V. vulnificus illness rate reduction goal established in 2001 has limitations that undermine its credibility. For example, the ISSC continues to include California's results in its illness rate reduction calculation along with Florida, Louisiana, and Texas. Doing so overstates the effectiveness of consumer education and time and temperature controls--FDA's and the ISSC's primary strategies for reducing V. vulnificus illnesses--because California, unlike these other states, requires that all raw Gulf Coast oysters harvested during the summer and sold in the state be processed to reduce V. vulnificus to nondetectable levels, which has reduced V. vulnificus illnesses to nearly zero. FDA and the ISSC have taken few steps to evaluate the effectiveness of their consumer education efforts since 2004. Likewise, they have not directly evaluated the effectiveness of the time and temperature controls implemented in 2010, which call for harvesters to ensure that oysters are cooled to specific temperatures within certain times to reduce V. vulnificus growth. Although data are not available, our discussions with state and oyster industry officials suggest 100 percent compliance with the controls is highly unlikely. 
Moreover, our analysis shows--even assuming 80 percent compliance in the summer months-- it is unlikely that these controls will lead to the level of illness reduction estimated by a model developed by FDA. The Gulf Coast oyster industry does not have sufficient capacity to process all of its oysters intended for raw consumption that are harvested from April through October to reduce V. vulnificus to nondetectable levels, according to an FDA-commissioned report. The report concluded that it will take a minimum of 2 to 3 years to develop the infrastructure needed to process these oysters. However, the report has some limitations that call into question the completeness of its cost and timeline estimates. For example, the report's cost estimates did not include some construction costs and costs associated with purchasing land needed to expand existing processing facilities or build new ones. Without this information, the full cost of developing sufficient processing capacity will not be known. GAO recommends that FDA work with the ISSC to agree on an illness reduction goal, improve its approach to measuring progress in reducing V. vulnificus illnesses, regularly evaluate its illness reduction strategies, and address the limitations in the FDA-commissioned report. FDA and the ISSC generally agreed with our recommendations.
We conducted our audit work from November 2001 through February 2002 in accordance with U.S. generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. We briefed officials from the Department of Defense Purchase Card Program Management Office, Naval Supply Systems Command (NAVSUP), assistant secretaries of Navy for financial management (comptroller) and research development and acquisition, SPAWAR Systems Center, and NPWC on the details of our audit, including our objectives, scope, and methodology and our findings and conclusions. We referred instances of potentially fraudulent transactions that we identified during our work to our Office of Special Investigations for further investigation. Our control tests were based on stratified random probability samples of 50 SPAWAR Systems Center purchase card transactions and 94 NPWC transactions. We also reviewed a nonrepresentative selection of transactions using data mining intended to identify potentially fraudulent, improper, abusive, or otherwise questionable transactions. In total, we audited 161 SPAWAR Systems Center and 145 NPWC fiscal year 2001 transactions. Our work was not designed to identify, and therefore we did not determine, the extent of fraudulent, improper, or abusive transactions and related activities. Further details on our objectives, scope, and methodology are included in appendix II. In our follow-up audit, we found that both units had made some improvements in the overall control environment, primarily after the end of fiscal year 2001. However, the control environment at SPAWAR Systems Center continued to have significant weaknesses, while NPWC had made major strides towards a positive control environment. GAO’s Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999) state that, “A positive control environment is the foundation for all other standards. It provides discipline and structure as well as the climate which influences the quality of internal control.” Our previous work found that a weak internal control environment at SPAWAR Systems Center and NPWC contributed to internal control weaknesses and fraudulent, improper, and abusive or questionable activity. In July 2001, we testified that the specific factors that contributed to the lack of a positive control environment at these two units included a proliferation of cardholders, ineffective training of cardholders and certifying officers, and a lack of monitoring and oversight. The following sections provide an update on the status of these conditions as well as information on several additional factors that affected the overall control environment at these Navy units. Although both units have reduced the number of cardholders, balancing the business needs of the unit with the training, monitoring, and oversight needed for a substantial number of cardholders remains a key issue. In October 2001, NAVSUP issued an interim change to its existing purchase card instructions to establish minimum criteria that prospective purchase card holders must meet before a purchase card account (including convenience check accounts) can be established in the employee’s name. 
The interim change issued by NAVSUP also established a maximum “span of control” of 5 to 7 cardholders to each approving official and required that Navy activities establish local policies and procedures for approving and issuing purchase cards to activity personnel. The Navy’s span of control requirement reflects guidance issued by the Department of Defense Purchase Card Program Management Office on July 5, 2001, shortly before the Subcommittee hearing. The revised guidance stated that, generally, an approving official’s span of control—cardholders per approving official—should not exceed a ratio of 7 to 1. Neither of the two units increased the number of approving officials to meet the suggested ratio until well after the start of fiscal year 2002. Table 1 summarizes the progress made by both units. The data in table 1 show that from September 21, 2000, to January 21, 2002, SPAWAR Systems Center had a net reduction in the number of cardholders of 360 (31 percent) and NPWC, 107 (37 percent). In addition, in fiscal year 2002, SPAWAR Systems Center increased the number of approving officials to 203 and NPWC, to 43. As a result, the approving official ratio for SPAWAR Systems Center and NPWC is now in line with DOD’s criterion of no more than 7 cardholders per official. However, as of January 21, 2002, SPAWAR Systems Center still had 23 approving officials who were responsible for more than 7 cardholders and therefore did not comply with the DOD and Navy span of control requirements. SPAWAR Systems Center records show that it significantly reduced the number of cardholders, primarily through canceling cards of those that did not need them and through employee attrition. According to SPAWAR Systems Center officials, some SPAWAR Systems Center purchase cards were canceled because of misuse; however, we were unable to determine from SPAWAR Systems Center records how many of the cards were canceled for this reason. We previously reported that SPAWAR Systems Center had a significant span-of-control issue with one approving official responsible for certifying monthly purchase card statements for all of its cardholders. According to Citibank and SPAWAR Systems Center records, effective for the billing period ending January 21, 2002, SPAWAR Systems Center increased from 1 to 203 the number of approving officials responsible for certifying monthly summary invoices. This change reduced SPAWAR Systems Center’s average span of control to 4 cardholders to each approving official, which is in line with DOD and Navy guidelines. We did not perform any testing for fiscal year 2002 transactions to determine whether the approving officials were in place and performing effective reviews. SPAWAR Systems Center management told us that they are continuing to evaluate the number of cardholders and the impact any further cuts would have on management’s ability to support operations and keep employees working efficiently.
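A span-of-control check of the kind described above is straightforward to automate from cardholder-to-approving-official assignments. The short sketch below (in Python) computes the average ratio and flags any official responsible for more than 7 cardholders; the roster it uses is purely hypothetical and is not drawn from Citibank or unit records.

from collections import Counter

def span_of_control_report(assignments, max_ratio=7):
    # assignments maps each cardholder to his or her approving official
    counts = Counter(assignments.values())
    average = len(assignments) / len(counts) if counts else 0.0
    over_limit = {official: n for official, n in counts.items() if n > max_ratio}
    return average, over_limit

# Hypothetical roster: 20 cardholders spread unevenly across 4 approving officials
assignments = {f"cardholder_{i:02d}": ("A" if i < 10 else "B" if i < 14 else "C" if i < 18 else "D")
               for i in range(20)}
average, over_limit = span_of_control_report(assignments)
print(f"average span of control: {average:.1f}")           # 5.0 cardholders per official
print(f"officials over the 7-to-1 limit: {over_limit}")     # {'A': 10}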
NPWC reduced the number of its cardholders through employee attrition and by canceling the cards of individuals who no longer needed them, had not taken required training, or had misused the card. Specifically, on July 6, 2001, the agency program coordinator (APC) gave each business line manager an analysis of monthly purchase card usage data for each of the cardholders under his or her supervision. The business line managers were instructed to analyze cardholder monthly transaction volume and reduce the number of cardholders by eliminating those cardholders they believed no longer needed a purchase card. NPWC also recently increased its number of approving officials from 7 as of September 21, 2001, to 43 by January 21, 2002. This significant increase brought the ratio of cardholders to approving officials in line with DOD and Navy guidelines. Another key factor in minimizing the government’s financial exposure is assessing the monthly credit limits available to cardholders. The undersecretary of defense for acquisition and technology emphasized in an August 2001 memorandum to the directors of all defense agencies, among others, that not every cardholder needs to have the maximum transaction or monthly limit and that supervisors should set reasonable limits based on what each person needs to buy as part of his or her job. We concur with the undersecretary’s statements and continue to recommend that cardholder spending authority be limited as a way of minimizing the federal government’s financial exposure. As shown in table 2, total financial exposure, as evidenced by monthly credit limits for SPAWAR Systems Center and NPWC cardholders, has decreased substantially. SPAWAR Systems Center reduced the overall credit limits of its cardholders by about $29 million primarily by (1) eliminating nearly $10 million of credit assigned to each of two cardholders and (2) reducing the net number of cardholders by 360. As we previously reported, most SPAWAR Systems Center cardholders had a $25,000 credit limit, and no cardholder had a credit limit of less than $25,000. We continue to believe that a $25,000 minimum credit limit is more than most SPAWAR Systems Center cardholders need to perform their mission. This point is best demonstrated by the fact that even when we used SPAWAR Systems Center’s reduced number of cardholders, the average monthly purchase card bill in fiscal year 2001 would have been less than $5,000. As shown in table 2, Citibank’s records indicate that between September 21, 2000, and January 21, 2002, NPWC reduced its cardholder exposure from about $13.5 million to $12.1 million—a $1.4 million reduction. NPWC achieved this reduction primarily by reducing by 107 the number of individuals who had purchase cards and by reevaluating cardholders’ monthly credit limits. We previously reported that most NPWC cardholders were granted a monthly credit limit of $20,000. Currently, about 20 NPWC cardholders have a credit limit of less than $20,000, about 42 percent still have a $20,000 credit limit, and the remaining cardholders have higher credit limits to meet job needs. Further, the average monthly purchase card bill (using the reduced number of cardholders) in fiscal year 2001 for NPWC cardholders would have been about $11,500. On September 7, 2001, the NPWC agency program coordinator distributed spreadsheet analyses of individual cardholder actual monthly and average charges, along with suggested new monthly cardholder limits, to the respective cardholder’s business line managers. The agency program coordinator required the business line managers to respond to the agency program coordinator with new limits for cardholders by the close of business on September 21, 2001. At the exit meeting we held with NPWC officials, NPWC provided Citibank records documenting that NPWC further reduced its cardholder credit limits to $5.6 million in February 2002.
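The comparison underlying this point is simple arithmetic: total annual purchase card spending divided by the number of cardholders and by 12 months, set against the standard monthly credit limit. The figures in the sketch below (in Python) are hypothetical and serve only to illustrate the calculation, not to restate the units’ actual totals.

def average_monthly_bill(total_annual_spend, n_cardholders, months=12):
    # Average monthly spending per cardholder, a rough gauge of how much
    # monthly credit limit is actually needed
    return total_annual_spend / n_cardholders / months

# Hypothetical figures: roughly $47 million charged across about 800 cards in a year
avg_bill = average_monthly_bill(47_000_000, 800)
print(f"average monthly bill: ${avg_bill:,.0f}")   # about $4,900, versus a $25,000 standard limit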
In addition to the reductions in the number of cardholders and aggregate financial exposure, the dollar volume of transactions decreased significantly in fiscal year 2001 when compared to fiscal year 2000, as shown in table 3. The NPWC agency program coordinator attributed a portion of this decrease to increased controls over the use of purchase cards, resulting in a reduction in unnecessary and improper card usage. Other reasons were a reduction in the number of projects worked on during fiscal year 2001 and the use of more contracts for goods and services, which are paid by means other than the purchase card. The SPAWAR Systems Center senior military contracting official told us that SPAWAR Systems Center’s reduction in purchase card use is a result of a decrease in workload and an increase in concern over purchase card controls brought on as a result of our audit and the congressional hearing. While both SPAWAR Systems Center and NPWC have taken steps to implement our recommendations regarding cardholder training and proper documentation of training, SPAWAR Systems Center still needs to do more to make sure all cardholders receive required training and to document the training taken by cardholders. We previously reported that the lack of documented evidence of purchase card training contributed to a weak internal control environment at these two units. GAO’s internal control standards emphasize that effective management of an organization’s workforce—its human capital—is essential to achieving results and is an important part of internal control. Training is key to ensuring that the workforce has the skills necessary to achieve organizational goals. In accordance with NAVSUP Instruction 4200.94, all cardholders and approving officials must receive purchase card training. Specifically, NAVSUP 4200.94 requires that prior to the issuance of a purchase card, all prospective cardholders and approving officials must receive training regarding both Navy policies and procedures as well as local internal operating procedures. Once initial training is received, the Instruction requires all cardholders to receive refresher training every 2 years. Further, in response to our previous audit and the July 30, 2001, hearing, NAVSUP sent a message in August 2001 to all Navy units directing them to train all of their cardholders concerning the proper use of the purchase cards on or about September 12, 2001. SPAWAR Systems Center training records indicated that as of January 21, 2002, 146 cardholders either had not completed the NAVSUP-mandated training or had not produced a certificate evidencing completion of the training. In addition, 13 active cardholders had not satisfied the requirement to take refresher training every 2 years. SPAWAR Systems Center officials told us that they intended to suspend the accounts of cardholders who had not taken the required training; however, as of February 15, 2002, the accounts of only 5 cardholders had been suspended. NPWC has taken well-documented steps to provide cardholders and approving officials the necessary training and to assure itself that untrained personnel do not remain purchase card holders. As a result of our previous audit findings in this area, NPWC held mandatory cardholder training sessions in June 2001 and July 2001, which all cardholders and their supervisors attended. In addition, NPWC presented NAVSUP-prepared training for all cardholders and approving officials in September 2001. 
The mandatory NAVSUP training addressed the issues of receipt and acceptance, spending limits, accounting, unauthorized or personal use of the card, policies and procedures, improper transactions, NPWC internal procedures, other required training, the NAVSUP and Citibank Web sites, and our findings from the previous purchase card testimony and related report. All but 15 of NPWC’s cardholders and approving officials attended the mandatory NAVSUP training, and on October 26, 2001, NPWC canceled the 15 remaining cardholder accounts for noncompliance with the training requirements. Both SPAWAR Systems Center and NPWC have recently made some efforts to implement new policies directed at improving internal review and oversight activities, which, as we previously testified, were ineffective. We also testified that the Navy’s purchase card policies and procedures did not require that the results of internal reviews be documented or that corrective actions be monitored to help ensure that they are effectively implemented. While still relatively ineffective, this area has great potential to strengthen the control environment at these two Navy units. We also previously testified that, although the SPAWAR Headquarters Command inspector general (IG) reviewed purchase card transactions generated by Headquarter cardholders during fiscal year 2000 and prepared a draft report summarizing the results of this review, the final report had not been issued at the conclusion of our fieldwork for the July 30, 2001, testimony. The final report of this review was issued on July 19, 2001, and identified many of the internal control findings discussed in our prior review; however, the IG’s report did not identify the kind of abusive transactions we identified. Also, on August 13, 2001, the Command IG began a limited review of the 2 most recent months of purchase card activity for Headquarters cardholders. The summary findings, which were released in a report dated October 16, 2001, have many of the internal control findings discussed later in this statement and similarly point to the need for clear, comprehensive policies, procedures, and training to resolve many of the control weaknesses and instances of questionable transactions. The IG also reported that it found some “transactions that appeared to be either ‘excessive’ or may have been of questionable good judgment,” but did not provide examples of these potentially abusive transactions. The IG also reported that several cardholders had stated that they felt uncomfortable making purchases, but did not want to tell their supervisor “no” and suffer potentially adverse career consequences. At the July 30, 2001, hearing we reported that the Naval Audit Service had conducted an audit of the NPWC purchase card program for which a report had not been issued. The Naval Audit Service completed its audit in December 2000 and reviewed transactions primarily occurring from March 1999 through August 2000. The Naval Audit Service issued its report over 1 year later, on January 10, 2002. Some of the Naval Audit Service findings are of the same nature and significance as the findings reported in our previous testimony, although the Naval Audit Service report did not identify the improper or abusive transactions we discussed. 
The Naval Audit Service concluded that management of the purchase card program at NPWC was not sufficient to ensure the integrity of the command’s purchase card program and that NPWC’s internal operating procedures did not clearly define duties and responsibilities or adequately control the various processes involved in purchase card transactions. Further, the Naval Audit Service reported that maintenance and repair services were obtained on a “piece-meal” basis instead of being aggregated and performed as entire projects, which resulted in NPWC not taking advantage of its buying power to obtain discounts on its recurring purchases. Further, in August 2001, following the July 30, 2001, purchase card congressional hearing, NAVSUP directed all Navy units to review 12 months of purchase card transactions. In response to this requirement, both SPAWAR Systems Center and NPWC reviewed samples of transactions, although neither performed an in-depth analysis of the selected transactions. For example, SPAWAR Systems Center told us that it reviewed 16,393 of the 45,318 transactions for the 9-month period ended July 2001. According to SPAWAR Systems Center, its stand-down review identified 187 split purchases and 9 transactions that initially appeared questionable or suspicious. After completing their review, SPAWAR Systems Center officials concluded that only one of these nine transactions was not for a legitimate government purpose, because the cardholder in question accidentally used the purchase card instead of a personal credit card. However, we question whether the stand-down review was designed and performed to be a thorough and critical analysis of the nature and magnitude of the control weaknesses and the extent to which fraudulent, improper, or abusive transactions were occurring during the 9-month period reviewed. Our own statistical sample of 50 transactions from just the last 3 billing cycles of fiscal year 2001 found one potentially fraudulent and subsequently disputed purchase and a total of 11 abusive or improper transactions on the monthly statements for 9 cardholders. Furthermore, as detailed later, we found numerous examples of abusive and improper transactions occurring in the first nine billing cycles of fiscal year 2001. NPWC’s stand-down review subjected 9,099 transactions out of 50,850 for the 12-month period ended August 31, 2001, to a documentation review. The review identified several cases of potential improper use and 320 cases of potential split purchases. However, the primary finding related to the use of the card for prohibited acquisitions of “noncommonly used” hazardous materials. NPWC estimated that approximately 600 of the transactions reviewed violated the Navy’s prohibition against using the purchase card to acquire noncommonly used hazardous materials. Specifically, Navy purchase card policies and procedures require that prior to acquiring potentially hazardous materials, cardholders must first determine that a requested purchase meets the definition of a commonly used hazardous material and that the materials are carried on the unit’s Authorized Use List. If the requested purchase does not meet the “commonly used” definition, the hazardous materials are to be procured by other means that bring the hazardous material under the control of a Hazardous Substance Management System (HSMS). Compliance with these requirements would then help ensure the safe storage, use, and disposal of the hazardous materials. 
NPWC found that cardholders were using the purchase card to acquire noncommonly used hazardous materials such as bacterial control agents and toxic, corrosive solvents used to descale and deodorize sewage systems. Such hazardous material purchases were not being subjected to the required controls and, consequently, NPWC had no assurance that the approximately 600 reported purchases were stored, used, and disposed of in a safe and environmentally acceptable manner. To alleviate this problem, NPWC is working with the Fleet Industrial Supply Service to coordinate the maintenance and control of Navy hazardous materials. NPWC’s identification and proactive attitude towards resolving this matter again demonstrate a positive control environment. GAO’s internal control standards state that management plays a key role in demonstrating and maintaining an organization’s integrity and ethical values, “especially in setting and maintaining the organization’s ethical tone, providing guidance for proper behavior, removing temptations for unethical behavior, and providing discipline when appropriate.” At the time we began our follow-up review, the SPAWAR Systems Center commanding officer not only did not demonstrate a commitment to improving management controls but openly supported the status quo. Consequently, the lack of a positive control environment continued. In contrast, the commanding officer at NPWC continued to support a proactive attitude in addressing the weaknesses we identified and took immediate action to address any improper or prohibited uses of the purchase card. It is not surprising that, given these differences in the management tone at the two units, we continued to find numerous examples of potentially improper, abusive, and otherwise questionable use of the purchase card at SPAWAR Systems Center, while we found few such cases at NPWC. The former SPAWAR Systems Center commanding officer testified on July 30, 2001, that the purchase card program at SPAWAR Systems Center had effective management controls and an honest and trustworthy workforce. The commanding officer went on to incorrectly characterize our audit approach and findings by stating that there was not a pervasive and serious abuse and fraud problem at SPAWAR Systems Center and that over 99.98 percent of purchases made by cardholders were for legitimate government purposes. The commanding officer did not acknowledge that the serious weaknesses in SPAWAR Systems Center’s system of internal controls over the purchase card program left SPAWAR Systems Center vulnerable to the types of abusive and improper transactions that we found and that such abuses could occur without being detected. Upon his return to San Diego following the hearing, the commanding officer held an “all-hands” meeting at a SPAWAR Systems Center auditorium that cardholders, approving officials, and managers were particularly encouraged to attend “… to clarify the substantial differences between the perception of problems reported in the press and the reality of the situation.” At the meeting, the commanding officer showed a videotape of the entire congressional hearing. By denying that these weaknesses resulted in undetected misuse of purchase cards, the commanding officer effectively diminished the likelihood that substantive changes would be implemented or, if implemented, taken seriously. 
The underlying message of his testimony, his subsequent “all hands” meeting, and his meetings with us, was that the trust SPAWAR Systems Center management had in its staff was an acceptable substitute for a cost-effective system of internal controls. The commanding officer was relieved of duty in December 2001 for matters unrelated to the purchase card program. The admiral in charge of SPAWAR held a nonjudicial punishment hearing on December 8, 2001, and found that the commanding officer had violated two articles of the Uniform Code of Military Justice, including dereliction of duty and conduct unbecoming an officer. The admiral issued the commanding officer a Punitive Letter of Reprimand, relieved him of his command at SPAWAR Systems Center, and endorsed his request for retirement from the Navy. The new commanding officer at SPAWAR Systems Center now has an opportunity to set a “tone at the top” that reflects a true commitment to establishing a positive control environment. Based on our discussions with the commanding officer and some of the actions we have observed, we are encouraged by her commitment to ensure that an effective, well-controlled purchase card program is implemented at SPAWAR Systems Center. At the same time, we remain concerned that there will be significant cultural resistance to change in the internal control environment. For example, up to the time we completed our fieldwork in February 2002, some cardholders and managers continued to rationalize the questionable purchases we brought to their attention—including expensive laptop carrying cases, Lego robot kits, clothing, food, and designer day planners— as discussed later in this statement. Such an attitude perpetuates an overall environment that tacitly condones possibly fraudulent wasteful, abusive, or otherwise questionable spending of government funds. Basic internal controls over the purchase card program remained ineffective during the last quarter of fiscal year 2001 at the two units we reviewed. Based on our tests of statistical samples of purchase card transactions, we determined that the two key transaction-level controls that we tested were ineffective, rendering SPAWAR Systems Center and NPWC purchase card transactions vulnerable to fraudulent and abusive purchases and theft and misuse of government property. As shown in table 4, the specific controls that we tested were (1) independent, documented receipt and acceptance of goods and services and (2) independent, documented review and certification of monthly purchase card statements. In addition, we attempted to test whether the accountable items—easily pilferable or sensitive items—included in some of the transactions in our samples were recorded in the units’ property records to help prevent theft, loss, and misuse of government assets. However, we were unable to perform those tests because SPAWAR Systems Center had recently changed its policy and no longer maintains accountability over easily pilferable items such as personal digital assistants and digital cameras. Further, our statistical sample at NPWC did not identify any accountable property items. SPAWAR Systems Center did not have independent, documented evidence that they received and accepted items ordered and paid for with the purchase card, which is required by Navy policy. That is, they generally did not have a receipt for the acquired goods and services that was signed and dated by someone other than the cardholder. 
As a result, there is no documented evidence that the government received the items purchased or that those items were not lost, stolen, or misused. Based on our testing, we estimate that SPAWAR Systems Center did not have independent, documented evidence to confirm the receipt and acceptance of goods and services acquired with the purchase card for about 56 percent of its fourth quarter fiscal year 2001 transactions. We previously reported a 65 percent control failure rate for fiscal year 2000. NPWC improved its adherence to the internal control of documenting independent receipt and acceptance of items acquired with a purchase card, although its 16 percent failure rate in this control technique remained unacceptable. We previously testified that NPWC generally did not have documented independent receipt and acceptance for goods and services and reported a 47 percent control failure rate for fiscal year 2000. The improved results for NPWC are the result of management attention to this important control and increased training for cardholders. Throughout fiscal year 2001, SPAWAR Systems Center and NPWC still did not properly review and certify the monthly purchase card statements for payment. We previously reported that SPAWAR Systems Center and NPWC approving officials who certify the monthly purchase card statements for payment generally rely upon the silence of a cardholder to assume that all purchase card transactions listed on the monthly statements are valid government purchases. However, this process does not compensate for the fact that a cardholder might have failed to forward corrections or exceptions to the account statement in a timely manner or, even worse, may not have reviewed the statement. As a result of the breakdown of this control, for the fourth quarter of fiscal year 2001, SPAWAR Systems Center and NPWC were paying the monthly credit card bills without any independent review of the monthly cardholder statements prior to payment to verify that the purchases were for a valid, necessary government need. Under 31 U.S.C. 3325 and DOD’s Financial Management Regulation, disbursements are required to be made on the basis of a voucher certified by an authorized agency official. The certifying official is responsible for ensuring (1) the adequacy of supporting documentation, (2) the accuracy of payment calculations, and (3) the legality of the proposed payment under the appropriation or fund charged. The certification function is a preventive control that requires and provides the incentive for certifying officers to maintain proper controls over public funds. It also helps detect fraudulent and improper payments, including unsupported or prohibited transactions, split purchases, and duplicate payments. Further, section 933 of the National Defense Authorization Act for Fiscal Year 2000 requires the Secretary of Defense to prescribe regulations that ensure, among other things, that each purchase card holder and approving official is responsible for reconciling charges on a billing statement with receipts and other supporting documentation before certification of the monthly bill. We previously reported that NAVSUP policy is inconsistent with the purpose of certifying vouchers prior to payment and made recommendations to revise the policy appropriately. Navy agreed with our recommendations concerning the need to change this portion of the purchase card instruction. 
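To illustrate the reconciliation step that the certification statute and section 933 contemplate, the following is a minimal sketch, in Python, of a pre-certification check that matches each charge on a monthly statement against an independently signed receipt record and flags anything that is unsupported, not independently received, or over the cardholder’s single purchase limit. The data layout, field names, and records shown are hypothetical assumptions for illustration only; they do not represent an actual Navy, NAVSUP, or Citibank system or our audit procedures.

```python
# Illustrative sketch only: a pre-certification reconciliation check.
# Field names, records, and limits are hypothetical assumptions, not an
# actual Navy, NAVSUP, or Citibank data format.

SINGLE_PURCHASE_LIMIT = 2500.00  # micropurchase threshold (illustrative)

statement = [  # charges appearing on a cardholder's monthly statement
    {"id": "T001", "vendor": "Office Supply Co", "amount": 148.30},
    {"id": "T002", "vendor": "Hotel Catering", "amount": 2400.00},
    {"id": "T003", "vendor": "Electronics Inc", "amount": 2675.00},
]

receipts = [  # independently signed receipt and acceptance records
    {"transaction_id": "T001", "received_by": "J. Smith", "cardholder": "A. Jones"},
    {"transaction_id": "T002", "received_by": "A. Jones", "cardholder": "A. Jones"},
]

def reconcile(statement, receipts, limit=SINGLE_PURCHASE_LIMIT):
    """Return the exceptions an approving official should resolve before certifying."""
    receipts_by_id = {r["transaction_id"]: r for r in receipts}
    exceptions = []
    for charge in statement:
        receipt = receipts_by_id.get(charge["id"])
        if receipt is None:
            exceptions.append((charge["id"], "no receipt and acceptance record"))
        elif receipt["received_by"] == receipt["cardholder"]:
            exceptions.append((charge["id"], "receipt not independent of the cardholder"))
        if charge["amount"] > limit:
            exceptions.append((charge["id"], "exceeds the single purchase limit"))
    return exceptions

for tx_id, reason in reconcile(statement, receipts):
    print(f"Do not certify {tx_id}: {reason}")
```

In a process along these lines, certification would be withheld until each flagged charge is either supported by documentation or disputed, which is the preventive role the certification control is intended to serve.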
For the last quarter of fiscal year 2001, SPAWAR Systems Center continued to have only one approving official to certify for payment the monthly purchase card statements of almost 1,000 cardholders. This unacceptable span of control led us to conclude that all transactions selected as part of our statistical sample were not properly reviewed and approved by a certifying officer. NPWC also continued to inappropriately certify purchase card statements for payment before receiving cardholder assurance that the purchases were proper. Our review of purchase card transactions disclosed that no significant change in this process had taken place during the fourth quarter of fiscal year 2001, and we therefore identified a 100 percent failure rate for this control at SPAWAR Systems Center and NPWC. However, in keeping with its proactive attitude, instead of waiting for NAVSUP to issue its new purchase card payment certification procedures, the NPWC agency program coordinator issued local guidance in December 2001 that requires approving officials, prior to certifying their summary invoice for payment, to obtain notifications from cardholders that their statements do not include disputed items. The guidance also indicates that approving officials and cardholders should conduct ongoing reviews during the month of the transactions in their purchase card accounts using Citidirect online services. While this does not fully implement the recommendation that we made in our November 30, 2001 report, this is a positive interim step. Given the significant reduction in individual approving officials’ span of control this measure provides NPWC an opportunity to strengthen this control. We disagree with a change in SPAWAR Systems Center policy that eliminated the accountability of certain property items considered to be pilferable. Recording items in the property records that are easily converted to personal use and maintaining serial number and bar code control is an important step in ensuring accountability and financial control over such assets and, along with periodic inventory, in preventing theft or improper use of government property. We previously testified that most of the accountable items—easily pilferable or sensitive items—in our samples for fiscal year 2000 were not recorded in property records. On August 1, 2001, the Department of the Navy changed its definition for what constitutes pilferable property. Unlike the previous policy, which was prescriptive in identifying what was pilferable, the new policy provides commanding officers with latitude in determining what is and what is not pilferable. Specifically, the new policy defines pilferable to be an item— regardless of cost—that is portable, can be easily converted to personal use, is critical to the activity’s business/mission, and is hard to repair or replace. Citing the “hard to repair or replace” criteria in the new policy, on November 1, 2001, SPAWAR Systems Center determined that only computer systems and notebook/laptop computers would be considered pilferable items. Thus, based on our fiscal year 2000 and 2001 audit work, SPAWAR Systems Center did not maintain accountability over numerous sensitive and pilferable items, such as digital cameras and personal digital assistants (PDA), leaving them subject to possible theft, misuse, or transfer to personal use. 
SPAWAR Systems Center’s new commanding officer and executive director told us that they do not believe that it is cost beneficial to account for and track these assets, but instead rely on supervisory oversight and personal employee trust to provide the necessary accountability of these assets. The commanding officer and the executive director stated that SPAWAR Systems Center is a diversified organization in which its scientists and engineers are working on as many as 1,000 different projects at any one time, which would make it difficult to keep track of these lower cost items. We acknowledge the important mission that SPAWAR Systems Center serves, but we also believe that the diverse nature of its operations is one of the key reasons why SPAWAR Systems Center needs to maintain accountability of its pilferable items. As discussed later in this testimony, we believe that SPAWAR Systems Center’s lack of accountability over items that are pilferable contributed to several abusive and questionable purchases. Although NPWC also had the opportunity to redefine what constitutes pilferable property, NPWC did not institute a similar policy change. Unlike SPAWAR Systems Center, NPWC generally does not use the purchase card to buy property items that are pilferable or easily converted to personal use. As a result, our sample of fourth quarter fiscal year 2001 NPWC transactions did not include any accountable items. SPAWAR Systems Center officials stated that they have implemented a new Enterprise Resource Planning (ERP) system that is designed to address most of the weaknesses that we identified in our July 2001 testimony. Once effectively implemented, the ERP system would facilitate on-line review, reconciliation, and monitoring of credit card activity. The system would also result in reduced storage needs because ERP requires receipt and acceptance documentation to be scanned into a database storage container. However, our limited assessment of the control environment identified several weaknesses. Although the new system has the stated capability to address the weaknesses we identified in the purchase card program, until it is effectively implemented and individuals comply with purchase card policies and procedures, SPAWAR Systems Center has little assurance that the weaknesses we previously identified will be corrected or mitigated. For example, the implementation of the ERP system at the time of our review did not provide for an adequate separation of duties or proper certification of purchase card transactions for payment. Specifically, a systems administrator with high-level administrative access privileges on the system performed both cardholder and approving official duties. In addition, the administrator pushed transactions through the system as an approving official without the required cardholder reconciliation or any knowledge of the transactions. Further, the administrator, who performed approving official duties, did not review the transactions to determine if they complied with Navy policies and procedures. That responsibility remained with the existing approving official; however, as we previously testified about the manual process, we found no evidence that the approving official verified compliance. SPAWAR Systems Center officials stated that by the end of February 2002, the administrator should no longer have these duties because all of the newly designated approving officials will have completed the required ERP training. 
We have not verified this corrective action or whether the approving officials are properly performing their duties. In assessing the control environment, we attempted, but were unable, to obtain documentation such as (1) the DOD Information Technology Security Certification and Accreditation Process (DITSCAP) for the system and (2) formal procedures on granting and removing access to the ERP. First, SPAWAR Systems Center officials stated that the certification and accreditation for the ERP system was not complete and that it was currently operating under interim authority. The DITSCAP would give an indication as to whether SPAWAR Systems Center had established its information security requirements and whether the system implementation meets the established security requirements. Second, although SPAWAR Systems Center had an informal process for granting and removing system access, these procedures had not yet been formally documented. Establishing such formal control procedures helps ensure that authorized users have the appropriate access to perform their job duties. We identified numerous examples of improper, abusive, or questionable transactions at SPAWAR Systems Center during fiscal year 2001. Given the weaknesses in the overall internal control environment and ineffective specific internal controls, it is not surprising that SPAWAR Systems Center did not detect or prevent these types of transactions. In fact, most of the transactions that we identified as improper, abusive, or questionable at SPAWAR Systems Center were approved and represented to us as being an appropriate, proper use of the purchase card. In contrast, using the same data mining techniques at NPWC, the number and severity of the problems we identified were substantially less than at SPAWAR Systems Center. In addition, rather than dispute our findings on each transaction, NPWC showed a proactive response and not only concurred with our findings but immediately took action to prevent future improper or abusive transactions from occurring. As discussed in appendix II, our work was not designed to identify, and we cannot determine, the extent of fraudulent, improper, and abusive or otherwise questionable transactions. Further, our review of SPAWAR Systems Center and NPWC transactions for potentially fraudulent, improper, and abusive or otherwise questionable purchases was limited and not intended to represent the population of SPAWAR Systems Center and NPWC transactions. Specifically, we reviewed a total of 161 SPAWAR Systems Center and 145 NPWC fiscal year 2001 transactions and performed additional analysis of related activity at three specific vendors as discussed in appendix II. To test those transactions and related activity, we examined all available documentation supporting the transactions, and when necessary we interviewed NPWC and SPAWAR Systems Center staff. To put the number of transactions that we reviewed into perspective, during fiscal year 2001 SPAWAR Systems Center and NPWC processed a total of about 83,000 transactions. Thus, the potentially fraudulent, improper, and abusive or questionable transactions we identified relate to the 306 transactions and associated activity we reviewed. We cannot project the extent of potentially fraudulent, improper, or abusive transactions for SPAWAR Systems Center or NPWC to the entire population of fiscal year 2001 transactions. See appendix II for a more detailed discussion of our objectives, scope, and methodology. 
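As an illustration of how control failure rates such as those cited earlier can be derived from an attribute sample, and why the data-mined findings, by contrast, cannot be projected to the full population, the following is a minimal sketch of a point estimate and an approximate 95 percent confidence interval computed from a simple random sample. The sample size and failure count shown are hypothetical figures chosen only to mirror the general magnitude of the reported receipt-and-acceptance estimate; they are not our actual test data.

```python
# Illustrative sketch: estimating a control failure rate from an attribute sample.
# The sample size and failure count below are hypothetical, not actual audit data.
import math

def failure_rate_estimate(sample_size, failures, z=1.96):
    """Point estimate and approximate 95 percent (normal-approximation) interval."""
    p = failures / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical example: 100 randomly sampled transactions, 56 of which lacked
# independent, documented receipt and acceptance.
point, low, high = failure_rate_estimate(sample_size=100, failures=56)
print(f"Estimated failure rate: {point:.0%} (roughly {low:.0%} to {high:.0%})")
```

Because the transactions examined through data mining were selected judgmentally (for example, from questionable vendors) rather than drawn at random, no comparable projection can be made from those results.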
We considered potentially fraudulent purchases to include those made by cardholders that were unauthorized and intended for personal use. Some of these instances involved the use of compromised accounts, in which an actual Navy purchase card or an active account number was stolen and used to make a fraudulent purchase. Other cases involved vendors charging Navy purchase cards for unauthorized transactions. Both SPAWAR Systems Center and NPWC had policies and procedures that were designed to prevent the payment of fraudulent purchases; however, our tests showed that although both units made some improvements, particularly NPWC, they did not implement the controls as intended. For example, as discussed previously, controls were ineffective for independent verification of receipt and acceptance and proper review and certification of monthly statements prior to payment. Fraudulent activities must therefore be detected after the fact, during supervisor or internal reviews, and disputed charge procedures must be initiated to obtain a credit from Citibank. Table 5 shows examples of potentially fraudulent transactions that we identified at SPAWAR Systems Center. Using the same audit techniques, we did not find documented evidence of potentially fraudulent NPWC transactions for fiscal year 2001. However, as noted previously, our tests were not designed to identify all fraudulent transactions, and considering the control weaknesses identified at SPAWAR Systems Center and NPWC, and the substantial number of compromised accounts discussed later, fraudulent transactions may have occurred during fiscal year 2001 and not have been detected. The fact that all of the unauthorized transactions in table 5 were authorized for payment by SPAWAR Systems Center clearly demonstrates the lack of an effective review and monthly certification process. SPAWAR Systems Center officials told us that they were aware of all of these potentially fraudulent transactions and eventually received a credit from either the vendor or Citibank or reimbursement from the cardholder, but in some cases after many months. For example, the car rental transaction related to a SPAWAR Systems Center employee who stated that she had inadvertently used the purchase card rather than a personal credit card. However, it took the employee 5 months to reimburse the government for this personal and unauthorized charge. Three of the examples in table 5 relate to the 2,595 Navy purchase card compromised accounts discussed below. The card numbers used to make the internet purchases were not on the list of compromised accounts. These cardholders reported to Citibank that the transactions were unauthorized, and Citibank provided credits to their accounts for disputed amounts up to three months after SPAWAR Systems Center paid the bill. The $10,600 of potentially fraudulent charges represent numerous unauthorized charges, many of which were about $500 each, during fiscal year 2001 by a safety product vendor that SPAWAR Systems Center paid despite the fact that no goods were received. As of January 21, 2002, SPAWAR Systems Center had not received a credit from the bank or the vendor for about $3,100 of the unauthorized charges. In our July 2001 testimony, we identified about $12,000 in potentially fraudulent fiscal year 2000 transactions on the purchase card of a former NPWC employee. NPWC Command Evaluation staff researched the potentially fraudulent charges, and NPWC eventually disputed them and recovered the full amount from the bank. 
Our Office of Special Investigations conducted an investigation of the suspect employee to determine if these transactions were indeed fraudulent. This investigation identified the following:

• The purchases occurred primarily between December 20 and 26, 1999, and included an Amana range, Compaq computers, gift certificates, groceries, and clothes.

• Based on our research, most of the merchants noted that these were not phone orders and that someone presented the purchase card in question to make the purchases.

• The cardholder brought the January 2000 credit card statement, with the above charges on the bill, to her supervisor for his approval and signature. According to the supervisor, the cardholder told him that she needed the statement signed immediately because she was late in processing it. The supervisor signed the credit card statement without reviewing it.

• The cardholder claims to have disputed the charges on January 31, 2000. Citibank indicated that it did not receive the dispute documentation until August 23, 2000, and the bank did not credit the Navy for these charges until April 2001.

• Based on an examination of the handwriting specimens by the U.S. Secret Service Forensic Services Division, the fraudulent purchase receipts were probably signed by someone other than the cardholder, and all appear to have been signed by the same individual.

• The Amana range was bought with a gift card that was purchased in the name of the cardholder’s alleged ex-boyfriend’s mother.

• The cardholder left NPWC to work for the U.S. Pacific Fleet from June to November of 2000 and now works at the Pentagon. After leaving work on her last day at NPWC, the cardholder improperly used the NPWC purchase card—which should have been canceled—for a personal automobile rental that was initially paid by NPWC and subsequently reversed through a credit from Citibank. The cardholder was supposed to repay Citibank the $358 owed but has not yet done so.

• The cardholder also misused a government travel card by purchasing three airline tickets for personal use. The cardholder partially repaid the cost of the tickets but had a remaining balance of $379. The Bank of America has written off the balance of the cardholder’s account.

The facts of this case demonstrate a complete breakdown in internal controls, particularly in the area of proper review and certification of monthly statements. The individual who approved the payment to Citibank for these fraudulent charges told us that he signed off on the January 2000 statement without reviewing it to determine if the transactions were valid. It is unclear whether the credit NPWC ultimately received was the result of the Citibank investigation of the case or NPWC’s determining some time after payment of the bill that the charges were fraudulent. NPWC also did not properly cancel the purchase card account of this cardholder after the cardholder had moved on to another organization within the Navy. Further, NPWC paid the purchase card bill that included this cardholder’s personal automobile rental, a clear indication that the monthly review and certification of bills was not being done. Finally, as of February 6, 2002, no disciplinary actions had been taken against this cardholder. Our Office of Special Investigations referred this case back to the Naval Criminal Investigative Service for further investigation and, if warranted, prosecution.
We also followed up on the previously reported September 1999 compromise of up to 2,600 purchase card accounts assigned to Navy activities in the San Diego area. We reported that Navy investigators were able to identify only a partial list consisting of 681 compromised accounts. We recommended that the Navy act immediately to cancel all known active compromised accounts. In December 2001, Navy notified us that all 681 compromised accounts we identified in the July testimony were cancelled, including 22 active SPAWAR Systems Center accounts. However, no other action was taken by the Navy to identify or cancel the remaining nearly 2,000 accounts that were compromised in September 1999. Our investigators subsequently identified the source of the compromised accounts as the database of a Navy vendor, which provided NCIS with the names of its former employees who were possible suspects in the theft of data. In January 2002, the vendor provided our investigators with the entire list of the 2,595 compromised accounts. We provided this list to the Navy and recommended that it immediately cancel the remaining 1,914 compromised account numbers. We found that 78 SPAWAR Systems Center and 10 NPWC compromised accounts were active as of December 2001. As noted previously, 3 of the examples of potentially fraudulent SPAWAR Systems Center activity reported in table 5 involved these compromised accounts. As we reported in our previous testimony, as of January 2001, at least 30 of the nearly 2,600 compromised account numbers were used by 27 alleged suspects to make more than $27,000 in fraudulent transactions for pizza, jewelry, phone calls, tires, and flowers. However, with the lack of effective controls over independent receipt for goods and services and proper review and certification of purchase card statements for payment that we identified at the two units, it will be difficult, if not impossible, for the Navy—including SPAWAR Systems Center and NPWC—to identify fraudulent purchases as they occur, or to determine the extent of the fraudulent use of compromised accounts. On December 11, 2001, the NCIS case on the compromised Navy purchase card numbers was presented to the U.S. Attorney’s Office, Southern District of California, San Diego, for prosecution. The U.S. Attorney’s Office declined prosecution of the case due to the low known dollar loss of $28,734. The NCIS case was closed on December 20, 2001. The following are other cases of potential fraudulent activity. A fraud hotline call alerted NPWC to a case involving two NPWC employees, an air conditioning equipment mechanic—who was a purchase card holder—and his supervisor. The alleged fraud includes the element of collusion, which internal controls generally are not designed to prevent. However, adequate monitoring of purchase card transactions, along with the enforcement of controls—such as documentation of independent confirmation of receipt and acceptance and recording of accountable items in property records—will make detection easier. In this case, the cardholder allegedly made fraudulent purchase card acquisitions during the period of April 1999 through December 1999 to obtain electronic planners, leather organizers, a digital camera, a scanner/printer, and various cellular telephone accessories for himself and his supervisor. These items totaled more than $2,500. NPWC initiated administrative action and gave a notice of proposed removal to the cardholder on August 15, 2000, and to the supervisor on August 1, 2000. 
Both employees resigned after they had repaid the Navy nearly $6,000 but before formal removal. Criminal actions were not taken against the individuals.

SPAWAR Systems Center’s Command Evaluation is currently investigating purchases made by cardholders in one of SPAWAR Systems Center’s divisions. This is an ongoing investigation focused on transactions made during the period August 2000 through April 2001. Preliminary findings resulted in a request from Command Evaluation to the SPAWAR Systems Center agency program coordinator to suspend purchase card authority for all cardholders and approving officials in the affected division until the investigation is completed.

Our Office of Special Investigations is conducting a further investigation of about $164,000 in transactions during fiscal year 2001 between SPAWAR Systems Center and one of its contractors for potentially fraudulent activity. The SPAWAR Systems Center division responsible for these purchase card transactions is the same department that SPAWAR Systems Center’s Command Evaluation is currently reviewing, as discussed above. This case is discussed in more detail in the following section on improper purchases.

We identified transactions for SPAWAR Systems Center and NPWC that were improper, including some that involved the improper use of federal funds. The transactions we determined to be improper are those purchases intended for government use, but are not for a purpose that is permitted by law, regulation, or DOD policy. We also identified as improper numerous purchases made on the same day from the same vendor that appeared to circumvent cardholder single transaction limits. Federal Acquisition Regulation and NAVSUP Instruction 4200.94 guidelines prohibit splitting purchase requirements into more than one transaction to avoid the need to obtain competitive bids on purchases over the $2,500 micropurchase threshold or to circumvent higher single transaction limits for payments on deliverables under requirements contracts. We identified these improper transactions as part of our review of about 161 SPAWAR Systems Center and 145 NPWC fiscal year 2001 transactions and related activity. We identified most of these transactions as part of our data mining of transactions with questionable vendors, although several were identified as part of our statistical sample.

The Federal Acquisition Regulation, 48 C.F.R. 13.301(a), provides that the governmentwide commercial purchase card “may be used only for purchases that are otherwise authorized by law or regulations.” Therefore, a procurement using the purchase card is lawful only if it would be lawful using conventional procurement methods. Under 31 U.S.C. 1301(a), “[A]ppropriations shall only be applied to the objects for which the appropriations were made . . .” In the absence of specific statutory authority, appropriated funds may only be used to purchase items for official purposes, and may not be used to acquire items for the personal benefit of a government employee. As previously discussed, NPWC identified approximately 600 transactions that violated the Navy’s prohibition against using the purchase card to acquire noncommonly used hazardous materials. As shown in table 6, we found examples of purchases that were not authorized by law, regulation, or policy.

Food. We found a number of purchases of food at SPAWAR Headquarters, SPAWAR Systems Center, and NPWC that represent an improper use of federal funds.
Without statutory authority, appropriated funds may not be used to furnish meals or refreshments to employees within their normal duty stations. Free food and other refreshments normally cannot be justified as a necessary expense of an agency’s appropriation because these items are considered personal expenses that federal employees should pay for from their own salaries. In January 2000, the General Services Administration (GSA) amended the government travel regulations to permit agencies to provide light refreshments to employees attending conferences involving travel. In response to GSA’s action, DOD amended the Joint Travel Regulation (JTR) and Joint Federal Travel Regulation (JFTR) to permit similar light refreshments for DOD civilian employees and military members. In April 2001, DOD clarified the JTR/JFTR rule to permit light refreshments only when a majority of the attendees (51 percent or more) are in travel status. The following food purchases should not have been paid for with appropriated funds:

• Three instances in which NPWC purchased primarily meals and light refreshments for employee-related activities, including team meetings, at a cost of about $4,100. The supporting documentation NPWC initially provided to us showed these purchases to be the rental of rooms for meetings. However, after our further inquiry of the Admiral Kidd Catering Center, we found that a large portion of the purchases were related to food and refreshments, including luncheon buffets. Officials from the Admiral Kidd Catering Center indicated that the invoices for these events do not show the food purchases because they knew that the Navy is not allowed to pay for food at these conferences.

• Five instances in which SPAWAR Headquarters or Systems Center cardholders purchased primarily light refreshments for employee team meetings or training sessions when less than a majority of the attendees were on travel, at a total cost of about $1,000.

• One transaction in which a SPAWAR Headquarters program management office had a 2-day off-site meeting at a San Diego hotel for about 20 staff, and SPAWAR Headquarters provided all participants with lunch and refreshments. The cardholder provided us with documentation indicating that SPAWAR Headquarters spent $2,400 to rent a room at the hotel where the meeting was held. The assistant program manager told us that the $2,400 charge was just for the meeting room rental. However, we obtained documents directly from the hotel, signed by the assistant program manager, showing that SPAWAR Headquarters paid about $1,400 for lunch and refreshments for both days. Furthermore, by comparing the hotel’s copy of the event confirmation form with the copy of the same form provided by SPAWAR Headquarters, it appeared that the form had been altered to indicate that the $2,400 was only for rent. After we briefed SPAWAR Headquarters and Systems Center management on our findings, the SPAWAR Headquarters inspector general opened an investigation of this matter that is still ongoing.

Clothing. We identified several purchases of clothing by SPAWAR Systems Center employees that should not have been purchased with appropriated funds. According to 5 U.S.C.
7903, agencies are authorized to purchase protective clothing for employee use if the agency can show that (1) the item is special and not part of the ordinary furnishings that an employee is expected to supply, (2) the item is essential for the safe and successful accomplishment of the agency’s mission, not solely for the employee’s protection, and (3) the employee is engaged in hazardous duty. Further, according to a comptroller general decision dated March 6, 1984, clothing purchased pursuant to this statute is property of the U.S. government and must only be used for official government business. Thus, except for rare circumstances in which a clothing purchase meets stringent requirements, it is usually considered a personal item that should be purchased by the individual. For the transactions that we tested, we found that several SPAWAR Systems Center employees had purchased clothing, such as a lightweight hooded jacket, long pants, and a shirt that should have been purchased by the employees with their own money. One of the cardholders told us that he believed his purchases of clothing were appropriate because other SPAWAR Systems Center employees were also purchasing clothing. As a result of this statement, we expanded our analysis and found that during fiscal year 2001 SPAWAR Systems Center cardholders purchased about $4,400 worth of socks, gloves, parkas, jackets, hats, shirts, and sweatpants from REI and Cabela’s that appear to also be improper. Because we did not test each of these transactions to determine if they were adequately justified, we included the $4,400 as questionable clothing purchases in table 8. Luxury car rentals. We identified 34 fiscal year 2001 purchases totaling $7,028 in which NPWC could not support the representation that rentals of Lincoln Town Cars or similar luxury cars were for individuals authorized to obtain a luxury car. DOD policy provides that only four-star admirals and above (or equivalent) qualify to rent such luxury vehicles. Our analysis of NPWC’s fiscal year 2001 purchase card transactions for rentals of commercial vehicles disclosed 42 instances of rentals of luxury vehicles (e.g., Lincoln Town Cars and Cadillac DeVilles). NPWC cardholder documentation showed that only 8 of the 42 rentals were for four-star admirals. In the other 34 instances, cardholder documentation either disclosed that a rental of a Lincoln Town Car or similar vehicle was for a Navy captain or lower-ranking admiral, or the documentation was insufficient to determine who rented the automobile. As a result of its inappropriately renting the Lincolns and other luxury cars, we estimated that NPWC spent about $2,000 more than it would have if NPWC had rented an automobile that was consistent with DOD policy. Consistent with NPWC’s proactive approach, the day after we brought this issue to management’s attention, controls and procedures were put in place to resolve this issue. Because these purchases were at an excessive cost, they also fall under the definition of abusive transactions. Prepayment of goods and services. We also identified 75 SPAWAR Systems Center purchase card transactions, for about $164,000 with a telecommunications contractor, that appear to be advance payments for electrical engineering services. Section 3324 of title 31, United States Code, prohibits an agency from paying for goods or services before the government has received them (with limited exceptions). 
Further, Navy purchase card procedures prohibit advance payment for goods and services, except in cases such as subscriptions and post office box rentals. SPAWAR Systems Center project managers gave us several conflicting explanations of the nature of the arrangement with the contractor, first indicating that the charges were for time and materials and later stating that each purchase was a fixed-fee agreement. No documentation was provided to support either explanation. We were also told by SPAWAR Systems Center employees that the purchase card was used to expedite the procurement of goods and services from the contractor because the preparation, approval, and issuance of a delivery order was too time-consuming in certain circumstances. For all 75 transactions, we found that the contractor’s estimated costs were almost always equal to or close to the $2,500 micropurchase threshold. Because we found no documentation of independent receipt and acceptance of the services provided or any documentation that the work for these charges was performed, these charges are also potentially fraudulent. We therefore referred the SPAWAR Systems Center purchase card activity with this contractor to our Office of Special Investigations for further investigation.

Convenience checks. We found that SPAWAR Systems Center improperly used convenience checks in fiscal year 2001, which ultimately resulted in NAVSUP canceling the use of these checks at SPAWAR Systems Center in November 2001, after we made inquiries concerning the number of SPAWAR Systems Center convenience checks issued that exceeded the $2,500-per-check limit. Convenience checks are charged directly to the government purchase card account and are used to pay vendors and other government agencies that do not accept the purchase card. According to the SPAWAR Systems Center agency program coordinator, two Citibank convenience check accounts were established in December 1998, presumably before NAVSUP changed its policy allowing only one account per unit. The SPAWAR Systems Center head of supply and contracts canceled one of these accounts on November 15, 2001, after we made inquiries concerning SPAWAR Systems Center’s convenience check usage. We found that the two employees responsible for these two accounts had issued 187 checks during fiscal year 2001, 30 of which were in excess of the $2,500 limit for individual checks, for a total of over $347,000. The checks that exceeded the $2,500 limit were issued to pay for postage meter charges, various services to vendors who were sole source providers, and training. After we made inquiries to the DOD Purchase Card Program Office regarding the propriety of SPAWAR Systems Center’s writing convenience checks in excess of $2,500, NAVSUP canceled SPAWAR Systems Center’s convenience check privileges on November 20, 2001. We also believe the use of convenience checks for purchases over $2,500 is not economical because of the 1.25 percent fee charged per transaction. For example, SPAWAR Systems Center used convenience checks to make one purchase of $10,000 for postage, which resulted in a fee of $125.

Printing. In addition, we identified several instances in which SPAWAR Systems Center did not adhere to DOD’s policy to use the Defense Automated Printing Service (DAPS) to perform all printing jobs. Further, the Navy’s purchase card list of prohibited or special-approval items states that cardholders are prohibited from buying printing or duplication services from entities other than DAPS.
In two of the transactions that we audited, SPAWAR Systems Center paid about $3,800 to Kinko’s for printing manuals.

Sales tax. We identified eight instances of sales taxes paid on SPAWAR Systems Center purchases. Payment of sales tax for the purchase of goods and services for the government is not authorized by law. According to SPAWAR Systems Center employees, these sales tax payments generally occurred when the vendors did not know how to process a nontaxable transaction.

Our analysis of the population of fiscal year 2001 transactions made by one or more cardholders on the same day from the same vendor, which appeared to circumvent cardholder single transaction limits, identified about $7.5 million in SPAWAR Systems Center potential split purchases and nearly $3 million in NPWC potential split purchases. The Federal Acquisition Regulation and Navy purchase card policies and procedures prohibit splitting a purchase into more than one transaction to avoid the requirement to obtain competitive bids for purchases over the $2,500 micropurchase threshold or to avoid other established credit limits. Once items exceed the $2,500 micropurchase threshold, they are to be purchased in accordance with simplified acquisition procedures, which are more stringent than those for micropurchases. Our analysis of the population of fiscal year 2001 SPAWAR Systems Center and NPWC transactions identified a substantial number of potential split purchases. To determine whether these were, in fact, split purchases, we obtained and analyzed the supporting documentation for 30 potential split purchases at SPAWAR Systems Center and 20 potential split purchases at NPWC. We found that in many instances, cardholders made multiple purchases from the same vendor within a few minutes or a few hours for items such as computers, computer-related equipment, cell phone services, and small contracts that involved the same, sequential, or nearly sequential purchase order and vendor invoice numbers. Based on our analyses, we concluded that 13 of the 30 SPAWAR Systems Center and 10 of the 20 NPWC purchases that we examined were split into two or more transactions to avoid micropurchase thresholds. Table 7 provides several examples of cardholder purchases that we believe represent split purchases intended to circumvent the $2,500 micropurchase limit or other cardholder single transaction limit. We believe that, by circumventing the competitive requirements of the simplified acquisition procedures, SPAWAR Systems Center and NPWC in many instances may not be getting the best prices possible for the government. As a result, these split purchases are likely increasing the cost of government procurements using the purchase card and, thus, at least partially offsetting its benefits. (A simplified illustration of this same-day, same-vendor screening follows the definitions of abusive and questionable transactions below.)

We identified numerous examples of abusive and questionable transactions at SPAWAR Systems Center during fiscal year 2001. Several of the improper transactions for NPWC discussed previously are also abusive or questionable; however, we found no other abusive items related to NPWC in our statistical sample or data mining. We defined abusive transactions as those that were authorized, but the items purchased were at an excessive cost (e.g., “gold plated”) or for a questionable government need, or both. Questionable transactions are those that appear to be improper or abusive but for which there is insufficient documentation to conclude either.
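As a point of reference for the same-day, same-vendor screening for potential split purchases described above, the following is a minimal sketch of how such transactions can be grouped and flagged when each charge falls at or below the $2,500 micropurchase threshold but the charges together exceed it. The transaction records, cardholder identifiers, and field names are hypothetical assumptions for illustration; only the $2,500 threshold comes from the policies discussed in this statement.

```python
# Illustrative sketch: flagging potential split purchases.
# Records and field names are hypothetical assumptions; only the $2,500
# micropurchase threshold comes from the policies discussed in this statement.
from collections import defaultdict

MICROPURCHASE_THRESHOLD = 2500.00

transactions = [
    {"cardholder": "CH-14", "vendor": "Computer Depot", "date": "2001-05-07", "amount": 2450.00},
    {"cardholder": "CH-14", "vendor": "Computer Depot", "date": "2001-05-07", "amount": 2400.00},
    {"cardholder": "CH-22", "vendor": "Cell Services", "date": "2001-06-11", "amount": 1800.00},
]

def flag_potential_splits(transactions, threshold=MICROPURCHASE_THRESHOLD):
    """Group same-day, same-vendor charges by cardholder and flag groups whose
    individual charges stay at or under the threshold but together exceed it."""
    groups = defaultdict(list)
    for t in transactions:
        groups[(t["cardholder"], t["vendor"], t["date"])].append(t["amount"])
    flagged = []
    for key, amounts in groups.items():
        if len(amounts) > 1 and max(amounts) <= threshold and sum(amounts) > threshold:
            flagged.append((key, sum(amounts)))
    return flagged

for (cardholder, vendor, date), total in flag_potential_splits(transactions):
    print(f"Potential split: {cardholder} at {vendor} on {date}, combined ${total:,.2f}")
```

A screen of this kind only identifies candidates; as described above, each flagged group still has to be examined against purchase orders, invoices, and other supporting documentation before concluding that a purchase was in fact split.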
For all abusive or questionable items, we concluded that cardholders purchased items for which there was not a reasonable and/or documented justification. Many of the purchases we found to be abusive or questionable fall into categories described in GAO’s Guide for Evaluating and Testing Controls Over Sensitive Payments (GAO/AFMD-8.1.2, May 1993). The guide states: “Abuse is distinct from illegal acts (noncompliance). When abuse occurs, no law or regulation is violated. Rather, abuse occurs when the conduct of a government organization, program, activity, or function falls short of societal expectations of prudent behavior.” Table 8 shows the potentially abusive and questionable transactions we identified at SPAWAR Systems Center for fiscal year 2001. Further, several of these items fall into the category of pilferable items, which, as discussed previously, SPAWAR Systems Center no longer records in its property records and therefore does not maintain accountability over them. For example, the cell phones and headset are items that could easily be converted to personal use without detection as they are not subject to bar coding and periodic inventory. In addition, items that may have limited use on one project could be made available for use on other projects, if their existence and location were recorded in centralized property records. Such visibility could serve to avoid duplicative purchases as well as provide the control needed to help prevent misuse of government property. Room rental and refreshments. We identified meeting room rental and refreshments at Bally’s, a hotel and casino in Las Vegas, which is a questionable transaction. This charge was related to a trip for about 30 staff members from SPAWAR Headquarters. SPAWAR officials told us that the trip was an organizational meeting to work out the details of a planned merger of two program management working groups. According to SPAWAR Headquarters officials, the staff members who attended the organizational meeting spent the first morning of the 3-day trip at Nellis Air Force Base discussing issues related to an ongoing project involving a test and evaluation squadron. The cost of the transaction we reviewed was about $2,300, and we estimate the total cost of the trip was between $15,000 and $20,000. For the specific transaction we reviewed, we found that the same control weaknesses we reported earlier applied, including lack of independent receipt of goods and proper certification of the monthly bill. There was no documentation showing that this transaction had been authorized in advance or that management had fully considered the cost of this trip and potentially less costly alternatives. GAO’s Guide for Evaluating and Testing Controls Over Sensitive Payments notes the importance of the control environment and the need for effective controls related to sensitive payments. A trip for about 30 employees to a Las Vegas hotel and casino for 3 days at a significant cost to the government is clearly sensitive and should be subjected to a high level of scrutiny, with clear documentation and approval in advance of the event. We would expect to see authorization in advance of the procurement by someone at a higher level than the most senior individual involved in the event—in this case, a captain. We found no documented justification to indicate a valid need for this 3-day meeting to be held in Las Vegas nor did we find an evaluation of the cost-benefit of having the meeting in Las Vegas versus alternative sites. 
Thus, we question whether the entire cost of the trip was a prudent expenditure of government funds. We did not review the travel vouchers and related documentation for the other component costs of the trip such as airfare, rental cars, or hotel bills; however, in estimating the total cost of the trip, we reviewed available documentation related to travel card usage from Bank of America. Cell phone usage. We found significant breakdowns in controls at SPAWAR Systems Center over the use of cell phones and related services, resulting in abusive and wasteful expenditures of government resources. In addition, we found a proliferation of cell phone agreements, with the purchase card being used to purchase equipment and services from more than 40 different cell phone companies at a total cost of $341,000 for fiscal year 2001. According to SPAWAR Systems Center management, they have a master cell phone contract with AT&T Wireless. However, lack of management oversight and a large number of available purchase cards has resulted in individuals with purchase cards or their supervisors deciding who needs a cell phone, which cell phone company to use, and what type of calling plan to purchase. For all but one of the transactions that we audited, we did not find any evidence that the monthly cell phone bills had been independently reviewed to ensure the transactions were reasonable and for valid government purposes. In the large case we audited, we identified a $24,000 monthly bill for about 200 Nextel cell phones and related services that were acquired to provide a voice communication system for coordination and control among various groups during a demonstration and test of a military wide area relay network. The Nextel phones were selected for evaluation as an alternative not for their standard cellular phone-to-phone capability, but for their “group-talk” feature, which provides a wireless “walkie-talkie” like capability for preprogrammed work groups. One of the key control failures with this cell phone procurement was related to SPAWAR Systems Center’s handing out cell phones to project team members and government contractors without keeping an inventory of who had each cell phone. Contractors that used these government cell phones told us that SPAWAR Systems Center officials brought a box of 60 or 70 cell phones to a meeting and handed them out to contractors that were part of the team. The contractors told us that SPAWAR Systems Center provided them with no instructions on proper use of the cell phone. The approximately 200 cell phones were not physically controlled and SPAWAR Systems Center did not have a list of who had the cell phones. Based on further investigation, we found that these contractors were using the cell phones to call friends and family and to conduct other personal business. Based on our review of the bills for this Nextel account—which totaled about $74,000 during fiscal year 2001—we estimated that about $9,200 was spent on long distance and other local telephone calls, which was not the primary intended purpose of these cell phones. In addition to the Nextel contract, we also identified cell phone contracts with large monthly fees for phones that were either used infrequently or not at all. For example, we audited one account with five cell phones. The service for each phone included 500 minutes of airtime, and the basic service cost of each cell phone was $50 a month. 
For the 3 months of activity we audited, we found that three of the five phones had either no voice activity or very little. For example, one of these cell phones had only 2 minutes of calls during a month that we audited. This is the equivalent of the government paying $25 per minute for airtime. We identified a number of other abusive and questionable charges including the following. One cardholder purchased $775 of luggage including wallets, passport holders, backpacks, neck pouches, and other items. The cardholder told us that these items were used to carry or ship equipment to universities for outreach activities. At the end of the events, the individual told us that the items were given away. There is no documentation available showing the authorization and need to purchase this luggage for purposes of carrying or shipping equipment. This purchase appears abusive because a valid government need is neither apparent nor documented, particularly since the cardholder gave away government property that could easily be converted to personal use. As part of our data mining, we identified purchases of day planners from commercial vendors, including calendar refills along with designer leather holders purchased from Louis Vuitton. By law, government agencies are directed to purchase certain products, including day planners and calendars, from certified nonprofit agencies that employ people who are blind or severely disabled. This program is referred to as the Javits-Wagner-O’Day (JWOD) program, which is intended to provide employment opportunities for thousands of people with disabilities to earn good wages and move toward greater independence. In addition, DOD’s policy requires the use of JWOD sources, whether or not the procurement is made using a purchase card, unless the central JWOD agency specifically authorizes an exception. In this year’s audit, we found that SPAWAR Systems Center employees had purchased three Louis Vuitton calendar refills for $27 each, and we identified three purchases of Louis Vuitton calendar holders at a cost of $255 each in fiscal year 2000. The most expensive JWOD calendar holders— specifically designed for DOD—cost about $40. In addition, we identified about $33,000 in purchases from Franklin Covey of designer and high-cost leather briefcases, purses (totes), portfolios, day planners and refills, palm pilot cases, and wallets. For example, we found leather purses costing up to $195 each and portfolios costing up to $135 each. Many of these purchases are of a questionable government need and should be paid for by the individual. To the extent the day planners and calendar refills were proper government purchases, they were at an excessive cost and, as with the Louis Vuitton day planners, should have been purchased from a JWOD source at a substantially lower cost. Circumventing the JWOD requirements and purchasing these items from commercial vendors is not only an abuse and waste of taxpayer dollars, but shows particularly poor judgment and serious internal control weaknesses. We identified the purchase of three computer bags from SkyMall at a cost of $161 each, and another purchase of a computer bag at a store in Italy for almost $250. All three computer bags were purchased by employees who were traveling on SPAWAR Systems Center business. The cost of these computer bag purchases is excessive compared to other standard bags we found purchased for $25. 
In addition, the cardholder who purchased the SkyMall bags told us that one of the two bags, along with another bag purchased in a separate transaction, was given to non–SPAWAR Systems Center government employees working on the project. We identified the purchase of a Bose headset at a cost of $299. The cardholder told us that the headset was originally purchased for a project but had never been used on the project. The cardholder stated that he has used the headset to listen to music on official government travel aboard airplanes. We identified four Lego “Mindstorm” computer robot kits that were purchased at Toys R Us at a total cost of $800. The SPAWAR Systems Center employee who requested that these robots be purchased initially told us that they were purchased as a learning tool for new professionals and junior engineers to learn cooperative behavior between robots and to conduct robotic research. However, during our interview, this SPAWAR Systems Center employee stated that at the time of these purchases his division did not have any new professionals scheduled to rotate through his assignment. Within 6 weeks of purchasing the kits, the employee removed all four from SPAWAR Systems Center, brought two of them to a local elementary school where he mentors an after-school science club, and brought two to his home. We believe this purchase is abusive because there does not appear to be a valid government need for the purchase, and because the cardholder effectively gave away government property that could easily be converted to personal use. As part of the NAVSUP-mandated stand-down transaction review, SPAWAR Systems Center also reviewed the transactions for the Lego robot kits and initially questioned their propriety. However, contrary to our conclusion that these purchases were abusive, SPAWAR Systems Center ultimately considered the Lego kits to be a valid government purchase. In our November 30, 2001, report on the purchase card controls at SPAWAR Systems Center and NPWC, we recommended that action be taken to help ensure that cardholders adhere to applicable purchase card laws, regulations, internal control and accounting standards, and policies and procedures. Specifically, we recommended that the Commander, Naval Supply Systems Command, revise NAVSUP Instruction 4200.94 to include specific consequences for noncompliance with purchase card policies and procedures. DOD did not concur with that recommendation and stated that existing Navy policy clearly identifies consequences for fraud, abuse, and misuse. We continue to believe that the Navy needs to establish specific consequences for these purchase card problems because the Navy policy does not identify any specific consequences for failure to follow control requirements. Enforcement of the consequences is also critical. For example, only one of the cardholders referred to in this testimony or our July 30, 2001, testimony had formal disciplinary or administrative action—in the form of removal of the purchase card—taken against them. Thus, we reiterate our previous recommendation that the Navy enforce purchase card controls by establishing specific formal disciplinary and/or administrative consequences—such as withdrawal of cardholder status, reprimand, suspension from employment for several days, and, if necessary, firing.
Unless cardholders and approving officials are held accountable for following key internal controls, the Navy is likely to continue to experience the types of fraudulent, improper, and abusive and questionable transactions identified in our work. The weaknesses identified in the purchase card program at these two Navy units are emblematic of broader financial management and business process reform issues across DOD. The Comptroller General testified on March 6, 2002, before the Subcommittee on Readiness and Management Support, Senate Committee on Armed Services, on the major challenges facing the department in its business process transformation efforts. In light of the events of September 11 and the federal government’s short- and long-term budget challenges, it is more important than ever that DOD get the most from every dollar spent. As Secretary Rumsfeld has noted, billions of dollars of resources could be freed up for national defense priorities by eliminating waste and inefficiencies in existing DOD business processes. The cultural issues we identified at SPAWAR Systems Center—such as the failure to acknowledge significant control weaknesses in the purchase card program, the parochial approach to program management without regard to broader Navy and DOD initiatives, and the lack of consequences on a personal or organizational level for failure to adhere to controls—are a major impediment to the improvements that are needed to stop wasteful and abusive purchases and ensure that taxpayer dollars are spent where national priorities dictate. In response to requests from this Subcommittee and Senator Grassley, we have ongoing audits related to the purchase and travel card programs at the Army, Navy, and Air Force and plan to offer additional recommendations to strengthen the controls over these programs. For future contacts regarding this testimony, please contact Gregory D. Kutz at (202) 512-9095 or [email protected] or John J. Ryan at (202) 512-9587 or [email protected]. Individuals who made key contributions to this testimony include Beatrice Alff, Cindy Brown-Barnes, Bertram Berlin, Sharon Byrd, Lee Carroll, Douglas Delacruz, Francine DelVecchio, Stephen Donahue, Douglas Ferry, Kenneth Hill, Jeffrey Jacobson, Kristi Karls, John Kelly, Yola Lewis, Stephen Lipscomb, Scott McNulty, Sidney Schwartz, and Jenniffer Wilson. The Navy’s purchase card program is part of the Governmentwide Commercial Purchase Card Program, which was established to streamline federal agency acquisition processes by providing a low-cost, efficient vehicle for obtaining goods and services directly from vendors. According to GSA, DOD reported that it used purchase cards for more than 10.7 million transactions, valued at $6.1 billion, during fiscal year 2001. The Navy’s reported purchase card activity—MasterCards issued to civilian and military personnel—totaled about 2.8 million transactions, valued at $1.8 billion, during fiscal year 2001. This represented nearly 30 percent of DOD’s activity for fiscal year 2001. According to unaudited DOD data, SPAWAR Systems Center and NPWC made about $64 million in purchase card acquisitions during fiscal year 2001. Because these two units have cardholders located outside the San Diego area, we limited our testing to SPAWAR Systems Center and NPWC cardholders located in San Diego, California. Those cardholders accounted for about $50 million of SPAWAR Systems Center and NPWC’s $64 million in purchase card transactions.
SPAWAR Systems Center and NPWC are both working capital fund activities. SPAWAR Systems Center performs research, engineering, and technical support, and NPWC provides maintenance, construction, and operations support to Navy programs. Both of these Navy units have locations throughout the United States. Our review focused on the purchase card program at the San Diego units only. For SPAWAR Systems Center, this included SPAWAR Headquarters, which is located in San Diego, and SPAWAR Systems Center San Diego. Under the Federal Acquisition Streamlining Act of 1994 and the Defense Federal Acquisition Regulation Supplement guidelines, eligible purchases include (1) micropurchases (transactions up to $2,500, for which competitive bids are not needed); (2) purchases for training services up to $25,000; and (3) payment for items costing over $2,500 that are on the General Services Administration’s (GSA) preapproved schedule, including items on requirements contracts. The simplified acquisition threshold for such contract payments is $100,000 per transaction. Accordingly, cardholders may have single-transaction purchase limits of $2,500 or $25,000, and a few cardholders may have transaction limits of up to $100,000 or more. Under the GSA blanket contract, the Navy has contracted with Citibank for its purchase card services, while the Army and the Air Force have contracted with U.S. Bank. The Federal Acquisition Regulation, Part 13, “Simplified Acquisition Procedures,” establishes criteria for using purchase cards to place orders and make payments. U.S. Treasury regulations issued pursuant to provisions of law in 31 U.S.C. 3321, 3322, 3325, 3327, and 3335 govern purchase card payment certification, processing, and disbursement. DOD’s Purchase Card Joint Program Management Office, which is in the Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology, has established departmentwide policies and procedures governing the use of purchase cards. NAVSUP is responsible for the overall management of the Navy’s purchase card program and has published NAVSUP Instruction 4200.94, Department of the Navy Policies and Procedures for Implementing the Governmentwide Purchase Card Program. Under the NAVSUP Instruction, each Navy Command’s head contracting officer authorizes purchase card program coordinators in local Navy units to obtain purchase cards and establish credit limits. The program coordinators are responsible for administering the purchase card program within their designated span of control and serve as the communication link between Navy units and the purchase card issuing bank. The other key personnel in the purchase card program are the approving officials and the cardholders. If the program is operating effectively, the approving official ensures that all purchases made by the cardholders within his or her cognizance are appropriate and that the charges are accurate. The approving official is supposed to resolve all questionable purchases with the cardholder before certifying the bill for payment. In the event an unauthorized purchase is detected, the approving official is supposed to notify the agency program coordinator and other appropriate personnel within the command in accordance with the command procedures. After reviewing the monthly statement, the approving official is to certify the monthly invoice and send it to the Defense Finance and Accounting Service for payment.
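To make the purchase limit tiers described above easier to follow, the short sketch below shows one way the three purchase categories and their dollar ceilings could be represented and checked. This is a minimal, hypothetical illustration only; the category names, the within_limit function, and the validation logic are assumptions for illustration and do not represent any actual DOD or Navy system.

# Hypothetical illustration of the purchase limit tiers described above.
# Category names and the check itself are assumptions for illustration,
# not an actual DOD or Navy system.
MICROPURCHASE_LIMIT = 2_500        # competitive bids not needed at or below this amount
TRAINING_SERVICES_LIMIT = 25_000   # purchases for training services
CONTRACT_PAYMENT_LIMIT = 100_000   # payments for preapproved GSA schedule or requirements-contract items

def within_limit(amount: float, category: str) -> bool:
    """Return True if a single transaction is within the ceiling for its category."""
    limits = {
        "micropurchase": MICROPURCHASE_LIMIT,
        "training_services": TRAINING_SERVICES_LIMIT,
        "contract_payment": CONTRACT_PAYMENT_LIMIT,
    }
    # Unknown categories default to the most restrictive (micropurchase) ceiling.
    return amount <= limits.get(category, MICROPURCHASE_LIMIT)

# Example: a $3,000 transaction exceeds the micropurchase ceiling but falls
# within the ceiling for a payment against a preapproved GSA schedule item.
print(within_limit(3_000, "micropurchase"))     # False
print(within_limit(3_000, "contract_payment"))  # True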
A purchase card holder is a Navy employee who has been issued a purchase card. The purchase card bears the cardholder’s name and the account number that has been assigned to the individual. The cardholder is expected to safeguard the purchase card as if it were cash. When a supervisor requests that a staff member receive a purchase card, the agency program coordinator is to first provide training on purchase card policies and procedures and then establish a credit limit and issue a purchase card to the staff member. Purchase card holders are delegated limited contracting officer ordering responsibilities, but they do not negotiate or manage contracts. SPAWAR Systems Center and NPWC cardholders use purchase cards to order goods and services for their units as well as their customers. Cardholders may pick up items ordered directly from the vendor or request that items be shipped directly to end users (requesters). Upon receipt of items acquired by purchase cards, cardholders are to record the transaction in their purchase log and obtain documented independent confirmation from the end user, their supervisor, or another individual that the items have been received and accepted by the government. They are also to notify the property book officer of accountable items received so that these items can be recorded in the accountable property records. The purchase card payment process begins with receipt of the monthly purchase card billing statements. Section 933 of the National Defense Authorization Act for Fiscal Year 2000, Public Law 106-65, requires DOD to issue regulations that ensure that purchase card holders and each official with authority to authorize expenditures charged to the purchase card reconcile charges with receipts and other supporting documentation before paying the monthly purchase card statement. NAVSUP Instruction 4200.94 states that upon receipt of the individual cardholder statement, the cardholder has 5 days to reconcile the transactions appearing on the statement by verifying their accuracy against receipts and other supporting documentation and to notify the approving official in writing of any discrepancies in the statement. In addition, under the NAVSUP Instruction, before the credit card bill is paid, the approving official is responsible for (1) ensuring that all purchases made by the cardholders within his or her cognizance are appropriate and that the charges are accurate and (2) certifying the monthly summary statement in a timely manner for payment by the Defense Finance and Accounting Service (DFAS). The Instruction further states that within 5 days of receipt, the approving official must review and certify for payment the monthly billing statement, which is a summary invoice of all transactions of the cardholders under the approving official’s purview. The approving official is to presume that all transactions on the monthly statements are proper unless notified in writing by the purchase card holder. However, the presumption does not relieve the approving official from reviewing for blatantly improper purchase card transactions and taking the appropriate action prior to certifying the invoice for payment. In addition, the approving official is to forward disputed charge forms to the unit’s comptroller’s office for submission to Citibank for credit. Under the Navy’s contract, Citibank allows the Navy up to 60 days after the statement date to dispute invalid transactions and request a credit.
In our November 30, 2001, report we recommended that the Navy modify its payment certification policy to require (1) cardholders to notify approving officials prior to payment that purchase card statements have been reconciled to supporting documentation, (2) approving officials to certify monthly statements only after reviewing them for potentially fraudulent, improper, and abusive transactions, and (3) approving officials to verify, on a sample basis, supporting documentation for various cardholder transactions prior to certifying monthly statements for payment. DOD concurred with this recommendation and stated the Navy would modify its payment certification procedures; however, as of February 26, 2002, the Navy had not yet issued those changes to its procedures. Upon receipt of the certified monthly purchase card summary statement, a DFAS vendor payment clerk is to (1) review the statement and supporting documents to confirm that the prompt-payment certification form has been properly completed and (2) subject it to automated and manual validations. DFAS effectively serves as a payment processing service and relies on the approving official’s certification of the monthly payment as support to make the payment. The DFAS vendor payment system then batches all of the certified purchase card payments for that day and generates a tape for a single payment to Citibank by electronic funds transfer. Figure 1 illustrates the current design of the purchase card payment process for SPAWAR Systems Center and NPWC. We reviewed purchase card controls for two Navy units based in San Diego, SPAWAR Systems Center and NPWC, and assessed changes that these two units made to their control environment since we notified the units of the problems with their respective purchase card programs in early June 2001. In addition, we followed up on the status of fraud cases that we reported on in July 2001 and any other fraud cases we identified as part of this follow-up audit.
Specifically, our assessment of SPAWAR Systems Center and NPWC purchase card controls covered
• the overall management control environment, including (1) span of control issues related to the number of cardholders, (2) training for cardholders and accountable officers, (3) monitoring and audit of purchase card activity, and (4) management’s attitude in establishing the needed controls, or “tone at the top”;
• tests of statistical samples of key controls over fourth quarter fiscal year 2001 purchase card transactions, including (1) documentation of independent confirmation that items or services paid for with the purchase card were received and (2) proper certification of the monthly purchase card statement for payment;
• to the extent feasible, substantive tests of accountable items in our sample transactions to verify whether they were recorded in property records and whether they could be found;
• data mining of the universe of fiscal year 2001 transactions to identify any potentially fraudulent, improper, and abusive or questionable transactions;
• analysis and audit work related to invoices and other information obtained from three vendors—Cabela’s, REI, and Franklin Covey—from which, based on interviews with cardholders and our review of other transactions, we had reason to believe that SPAWAR Systems Center had made significant improper and abusive or questionable purchases during fiscal year 2001; and
• analysis of the universe of fourth-quarter fiscal year 2001 purchase card transactions to identify purchases that were split into two or more transactions to avoid micropurchase thresholds or other spending limits.
In addition, our Office of Special Investigations worked with DOD’s criminal investigative agencies, Citibank, and credit card industry representatives to identify known and potentially fraudulent purchase card scams. Our Office of Special Investigations also investigated potentially fraudulent or abusive purchase card transactions that we identified while analyzing SPAWAR Systems Center and NPWC fiscal year 2001 purchase card transactions. We used as our primary criteria applicable laws and regulations; our Standards for Internal Control in the Federal Government; and our Guide for Evaluating and Testing Controls Over Sensitive Payments. To assess the management control environment, we applied the fundamental concepts and standards in the GAO internal control standards to the practices followed by management in the four areas reviewed. To test controls, we used a two-step sampling process for purchase card transactions that were recorded by the Navy during the fourth quarter of fiscal year 2001. At SPAWAR Systems Center, we selected a stratified random probability sample of 50 purchase card transactions from a population of 7,267 transactions totaling $5,919,635. Because the majority of SPAWAR Systems Center transactions failed the control tests, we did not have to expand our sample size. At NPWC, we initially selected a sample of 50 purchase card transactions from a population of 11,021 transactions totaling $6,030,501. In light of NPWC’s improvements in the area of documenting independent receipt and acceptance, we increased our sample size of NPWC transactions to 94 to generate a more accurate assessment of the control failure rate at NPWC. We stratified each of the samples into two groups—transactions from vendors likely to represent purchases of computer equipment and other vendors.
With these statistically valid probability samples, each transaction in the population had a nonzero probability of being included, and that probability could be computed for any transaction. Each sample element was subsequently weighted in the analysis to account statistically for all the transactions in the population, including those that were not selected. Table 9 presents our test results on three key transaction-level controls and shows the confidence intervals for the estimates for the universes of fiscal year 2001 purchase card transactions made by SPAWAR Systems Center and NPWC. In addition to selecting statistical samples of SPAWAR Systems Center and NPWC transactions to test specific internal controls, we also made nonrepresentative selections of SPAWAR Systems Center and NPWC transactions based on data mining of fiscal year 2001 transactions. The purpose of the data mining procedures was twofold. Specifically, we conducted separate analyses of acquisitions that were (1) potentially fraudulent, improper, and abusive or otherwise questionable and (2) split into multiple transactions to circumvent either the micropurchase or cardholder transaction thresholds. Our data mining for potentially fraudulent, improper, and abusive or questionable transactions was limited to cardholders who worked in San Diego and covered 36,216 fiscal year 2001 transactions totaling about $26.1 million at SPAWAR Systems Center and 46,709 fiscal year 2001 transactions totaling about $23.9 million at NPWC. For this review, we scanned the two units’ San Diego-based activities for transactions with vendors likely to sell goods or services that (1) are on NAVSUP’s list of prohibited items, (2) are personal items, or (3) are otherwise questionable. Our expectation was that transactions with certain vendors were more likely to be fraudulent, improper, abusive, or questionable. Because of the large number of transactions that met these criteria, we did not look at all potential abuses of the purchase card. Rather, we made nonrepresentative selections of transactions based on transactions with the vendors who fit these criteria. For example, we reviewed, and in some cases made inquiries concerning, 162 transactions and other related transactions on the same monthly purchase card statement with vendors that sold such items as sporting goods, groceries, luggage, flowers, and clothing. While we identified some improper and potentially fraudulent and abusive transactions, our work was not designed to identify, and we cannot determine, the extent of fraudulent, improper, and abusive or questionable transactions. Our data mining also included nonrepresentative selections of acquisitions that SPAWAR Systems Center and NPWC entered into during the period June 22, 2001, through September 21, 2001, that were potentially split into multiple transactions to circumvent either the micropurchase competition requirements or cardholder single transaction thresholds. We limited our data mining to this period because senior SPAWAR Systems Center and NPWC officials acknowledged to us in early June 2001 that cardholders had made split transactions and that they would attempt to correct the problem. Therefore, to allow the two units an opportunity to resolve this issue, we limited our review to transactions that occurred after SPAWAR Systems Center and NPWC acknowledged a problem with splitting purchases.
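To illustrate the kind of screening described above for potentially split purchases, the sketch below groups same-day transactions by cardholder and vendor and flags groups whose combined value exceeds the $2,500 micropurchase threshold. This is a simplified, hypothetical sketch, not our actual data mining procedure; the file layout and column names (cardholder, vendor, date, amount) are assumptions, and flagged groups are only candidates for manual review, not evidence of abuse.

# Simplified, hypothetical sketch of screening for potentially split purchases.
# Assumes a CSV of transactions with hypothetical columns: cardholder, vendor,
# date, and amount. Flagged groups are candidates for review, not proof of abuse.
import csv
from collections import defaultdict

MICROPURCHASE_THRESHOLD = 2_500.00

def flag_potential_splits(path):
    """Group same-day purchases by cardholder and vendor and flag groups of two
    or more transactions whose combined amount exceeds the micropurchase threshold."""
    groups = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["cardholder"], row["vendor"], row["date"])
            groups[key].append(float(row["amount"]))

    flagged = []
    for (cardholder, vendor, date), amounts in groups.items():
        if len(amounts) >= 2 and sum(amounts) > MICROPURCHASE_THRESHOLD:
            flagged.append((cardholder, vendor, date, len(amounts), round(sum(amounts), 2)))
    return flagged

if __name__ == "__main__":
    for record in flag_potential_splits("transactions.csv"):
        print(record)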
We briefed DOD managers, including officials in DOD’s Purchase Card Joint Program Management Office, and Navy managers, including NAVSUP, SPAWAR Systems Center, and NPWC officials, on the details of our review, including our objectives, scope, and methodology and our findings and conclusions. Where appropriate, we incorporated their comments into this testimony. We conducted our audit work from November 2001 through February 2002 in accordance with U.S. generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency.
This testimony discusses GAO's follow-up on the audit of key internal controls over purchase card activity at two Navy units based in San Diego--the Space and Naval Warfare Systems Command (SPAWAR) Systems Center and the Navy Public Works Center (NPWC). A breakdown in internal controls over $68 million in purchase card transactions in fiscal year 2000 left these two units vulnerable to fraudulent, improper, and abusive purchases and to theft and misuse of government property. Although both units improved the overall control environment, including reducing the number of cardholders, increasing the number of approving officials, and decreasing purchase card usage, serious weaknesses persisted in three key control environment areas. First, SPAWAR Systems Center needs to ensure that all cardholders receive required training and that this training is documented. Second, SPAWAR Systems Center needs to more carefully implement internal review and oversight activities, which have been ineffective. Third, GAO identified a significant impairment of management "tone at the top" at SPAWAR Systems Center during the last quarter of fiscal year 2001. The two basic internal controls over the purchase card program that GAO tested remained ineffective during the last quarter of fiscal year 2001 at both units. These weaknesses contributed to additional fraudulent, improper, abusive, or otherwise questionable purchases. GAO also identified purchases by SPAWAR Systems Center cardholders that were either excessively expensive or for questionable government needs.
Anthrax is an acute infectious disease caused by the spore-forming bacterium called Bacillus anthracis. The bacterium is commonly found in the soil, and its spores can remain dormant for many years. Although anthrax can infect humans, it occurs most commonly in plant-eating animals. Human anthrax infections have usually resulted from occupational exposure to infected animals or contaminated animal products, such as wool, hides, or hair. Both human and animal anthrax infections are rare in the United States. Anthrax infection can take one of three forms: cutaneous, usually through a cut or an abrasion; gastrointestinal, usually by ingesting undercooked contaminated meat; or inhalational, by breathing airborne anthrax spores into the lungs. After the spores enter the body through any of these routes, they germinate into bacteria, which then multiply and secrete toxins that can produce local swelling and tissue death. The symptoms are different for each form and usually occur within 7 days of exposure. Depending on the extent of exposure and its form, a person can be exposed to Bacillus anthracis without developing an infection. There are several methods for detecting anthrax spores or the disease itself, for example, nasal swabs for exposure to spores, blood tests for infections, and wet swabs for environmental contamination. CDC does not recommend the use of the nasal swab test to determine whether an individual should be treated, primarily because a negative result (no spores detected) does not exclude the possibility of exposure. Confirmation of anthrax infection or the presence of anthrax spores can require more than one type of test. The disease can be treated with a variety of antimicrobial medications and is not contagious. With proper treatment, fatalities are rare for cutaneous anthrax. For gastrointestinal anthrax, between 25 and 60 percent of cases have resulted in death. For inhalational anthrax, the fatality rate before the 2001 incidents had been approximately 75 percent, even with appropriate antimicrobial medications. An anthrax vaccine is available, but it is indicated for use in individuals at high risk of exposure to anthrax spores, such as laboratory personnel who work with Bacillus anthracis. Because so few instances of inhalational anthrax have occurred, scientific understanding about the number of spores needed to cause infection is still evolving. Before the 2001 incidents, it was estimated that a person would need to inhale thousands of spores to develop an infection. However, based on some of the cases that occurred during the anthrax incidents, experts now believe that the number of spores needed to cause inhalational anthrax could be fewer than that, depending on a person’s health and the nature of the spores. In the existing model for response to a public health emergency of any type, including a bioterrorist attack, the initial response is generally a local responsibility. This local response can involve multiple jurisdictions in a region, with states providing additional support as needed. Having the necessary resources immediately available at the local level to respond to an emergency can minimize the magnitude of the event and the cost of remediation. In the case of a covert release of a biological agent such as anthrax, it can be days before exposed people start exhibiting signs and symptoms of the disease. 
The model anticipates that exposed individuals would seek out local clinicians, such as private physicians or medical staff in hospital emergency departments or public clinics. Clinicians would report any illness patterns or diagnostic clues that might indicate an unusual infectious disease outbreak to their state or local health departments. Local and state health departments would collect and monitor data, such as reports from clinicians, for disease trends and evidence of an outbreak. Environmental and clinical samples would be collected for laboratorians to test for possible exposures and identification of illnesses. Epidemiologists in the health departments would use the disease surveillance systems to provide for the ongoing collection, analysis, and dissemination of data to identify unusual patterns of disease. Public health officials would provide needed information to the clinical community, other responders, and the public and would implement control measures to prevent additional cases from occurring. The federal government can also become involved, as requested, by providing assistance with testing of samples and epidemiologic investigations, providing advice on treatment protocols and other technical information, and coordinating a national response. As early as 1998, CDC had begun its planning efforts to enhance its capacity to respond effectively to bioterrorism. CDC said it was responsible for providing national leadership in the public health and medical communities in a concerted effort to detect, diagnose, respond to, and prevent illnesses that occur as a result of bioterrorism. In its strategic preparedness and response plan, CDC anticipated that it would need to collaborate with local and state public health partners and other federal agencies in order to strengthen components of the public health infrastructure. As part of this collaboration, CDC initiated a cooperative agreement program in 1999 to enhance state and local bioterrorism preparedness. CDC’s planning efforts identified the importance of coordination with the Department of Justice, including the FBI and the National Domestic Preparedness Office. In addition, CDC said that there was ongoing coordination with the Office of Emergency Preparedness within HHS, FDA, NIH, DOD, the Federal Emergency Management Agency (FEMA), and many other partners, including academic institutions and professional organizations. At the time of the anthrax incidents, some of these collaborative efforts were in the planning stage, some were in the form of working groups, and others were limited in scope to areas such as laboratory preparedness, training, or new vaccine research. CDC was also working to make improvements in various aspects of preparedness and prevention, detection and surveillance, and communication and coordination. At the time of the anthrax incidents, CDC was working on creating diagnostic and epidemiologic performance standards for local and state health departments. In collaboration with NIH and DOD, CDC was encouraging research for the development of new vaccines, antitoxins, and innovative drugs. In addition, CDC had developed a repository of pharmaceuticals and other supplies through the Strategic National Stockpile. CDC was developing educational materials and providing terrorism-related training to epidemiologists, laboratory workers, emergency responders, emergency department personnel, and other front-line health care providers and health and safety personnel. 
Through cooperative agreements, CDC was also working to upgrade the surveillance systems of the local and state health departments and investing in the Health Alert Network (HAN) and Epidemic Information Exchange (Epi-X) communication systems. In October 2001, an employee of American Media Inc. (AMI) in Florida was diagnosed with inhalational anthrax, the first case in the United States in over two decades. By the end of November 2001, 21 more people had contracted the disease, and 5 people, including the original victim, had died as a result. Although the FBI confirmed the existence of only four letters containing anthrax spores, by December 2001 the Environmental Protection Agency (EPA) had confirmed that over 60 sites, about one third of which were U.S. postal facilities, had been contaminated with anthrax spores. The cases of inhalational anthrax in Florida, the first epicenter, were thought to have resulted from proximity to opened letters containing anthrax spores, which were never found. (See table 1.) The initial cases of anthrax detected in New York, the second epicenter, were all cutaneous and were also thought to have been associated with opened anthrax letters. The cases detected initially in New Jersey, the third epicenter, were cutaneous and were in postal workers who presumably had not been exposed to opened anthrax letters. Unlike the incidents at other epicenters, which began when cases of anthrax were detected, the incident on Capitol Hill, the fourth epicenter, began with the opening of a letter containing anthrax spores and resulting exposure. The discovery of inhalational anthrax in a postal worker in the Washington, D.C., regional area, the fifth epicenter, revealed that even individuals who had been exposed only to sealed anthrax letters could contract the inhalational form of the disease. Subsequent inhalational cases in Washington, D.C., New Jersey, New York, and Connecticut, the sixth epicenter, underscored that finding. (For a list of key events in the history of the anthrax incidents and the public health response to the incidents, see app. I.) Although the anthrax incidents were limited to six epicenters on the East Coast, the incidents had national implications. Because mail processed at contaminated postal facilities could be cross-contaminated and end up anywhere in the country, the localized incidents generated concern about white powders found in locations beyond the epicenters and created a demand throughout the nation for public health resources at the local, state, and federal levels. Local and state public health officials across the epicenters emphasized the benefits of their planning efforts for promoting a rapid and coordinated response, stressed the importance of effective communication throughout the incidents, and reported that their response capacity was strained and the response would have been difficult to sustain if the incidents had been more extensive. Local and state public health officials were challenged to coordinate their responses to the anthrax incidents across a wide range of public and private entities, often across more than one local jurisdiction. Officials reported that anticipating local needs in emergency response plans, making those plans operational with formal contracts and agreements, and having experience with other public emergencies or large events improved their ability to mount a rapid and coordinated response. 
When pieces of this planning process were missing, had not been operationalized, or had not been tested by experience, coordination of the local response was often more difficult. Local and state public health officials reported that they had typically planned for coordination of their emergency response but had not fully anticipated the extent to which they would have to coordinate with a wide range of both public and private entities involved in the response to the anthrax incidents, both locally and in other jurisdictions. Among others, public health departments had to coordinate their responses with those of local and federal law enforcement, emergency responders, the postal community, environmental agencies, and clinicians. Most response plans anticipated the need for public health officials to coordinate with law enforcement and emergency response officials, both within their community and across jurisdictions. In one epicenter, for example, a regional organization of local governments had developed planning guidance that outlined collaborative networks between the public health and emergency response communities needed to strengthen the region’s response to an event such as the anthrax incidents. In contrast, the need to link the public health response with the responses of other public entities affected by the anthrax incidents, such as environmental agencies, military response teams, and the U.S. Postal Service, was less likely to have been anticipated in local response plans. During the response, standard practices for clinical and environmental testing and use of proper protective clothing and equipment needed to be coordinated among public health officials, postal officials, police, firefighters, environmental specialists, and teams from DOD. However, officials reported that in some cases personnel from environmental and military groups were meeting with public health officials for the first time as the response unfolded. When the need for consistency in testing procedures and standards for protective clothing and equipment had not been anticipated, officials sometimes had difficulty agreeing on which procedures and standards to follow. In addition, some plans had not anticipated the need to forge quick relationships between public health departments and local groups affected by the incidents but not expressly mentioned in the plans. During the anthrax incidents, the absence of such a measure proved to be a particular problem for postal officials and postal union representatives. In part due to this absence of proactive plans, coordination between public health and postal officials on many of the details of the response was problematic, and there were difficulties communicating critical information, such as decisions on how and when to provide prophylactic, or preventive, treatment to postal workers. The need for coordination between public health and private groups affected by the emergency—such as the hospital community—was also not always fully anticipated in local response plans. Public health officials in several areas had to work with local hospitals and other facilities to set up screening and postexposure prophylaxis clinics rapidly, sometimes in less than 24 hours. 
In this time they had to identify an appropriate site location, design patient flow plans, outline staff needs and responsibilities (medical, pharmacy, counseling, administrative, and facilities operation components), and obtain medications (including dealing with the logistics of breaking down and repackaging bulk medications). Few locations had formally addressed all of these issues before the anthrax incidents, but those that had addressed at least some of them reported being able to respond more rapidly. Officials relied on a variety of formal agreements, such as memoranda of understanding and legal contracts, to address the needs identified in their planning documents. These needs included coordination across disciplines and jurisdictions, access to scientific information, and human resources support. Local officials reported that putting agreements and contracts into place to address these needs strengthened their preparedness both by solidifying links with their public and private partners and by helping them identify weaknesses that could be addressed prior to an emergency. When systems had not been put into place to support plans, coordination of response efforts was more difficult. Formal agreements had often been put into place to support coordination among officials within communities and across jurisdictions, but some aspects of plans that were important for coordinating the response had not yet been made operational. For example, one official reported having arranged to link surveillance and environmental health personnel with law enforcement officials during criminal investigations in the event of an anthrax attack. Another official had already established agreements with local counterparts to provide access to prophylaxis. Officials reported that when formal contacts between officials had not been established, coordination with counterparts in their community and other jurisdictions during the incidents often relied on personal relationships. While some public health departments reported having systems in place to ensure ready access to the scientific information needed to make decisions and provide information to the media and the public, many reported that they did not. Officials reported that planning ahead and then taking the necessary steps to compile available scientific information— including what was known about anthrax, procedures for testing exposure to anthrax, treatment protocols, and standards for the types of protective clothing and equipment that are appropriate for first responders—were important for responding rapidly and reducing confusion across the parties involved in the response. Officials stated that during the response they relied on existing mutual aid agreements or contracts that gave them access to staff for screening and mass care clinics, allowed the state to pull local epidemiologists to support the state response, and addressed licensure issues for staff brought in from other states. However, these agreements were not always in place, or only partially covered the needs of the situation, and some officials had to spend time dealing with issues that could have been addressed before the event. For example, an official in one epicenter reported that because a state of emergency had not been declared in the jurisdiction, there was no system to pay for food for staff who were working 24-hour shifts in prophylaxis clinics. 
Several officials in other localities reported that systems had not been put into place to authorize payment for overtime work in both public health departments and laboratories. In addition, one health department received offers of volunteer help from many physicians, pharmacists, nurses, epidemiologists, and other concerned citizens. However, it could not use the volunteers because it did not have a volunteer management system to train providers and verify credentials. Experience with drills and responding to public health emergencies helped officials identify weaknesses in their plans. These officials stated that drills ranging from tabletop to full-scale exercises were useful for testing coordination and response capacities both locally and regionally. Public health officials also reported that their experience in dealing with hoax letters and false alarms proved useful, particularly in supporting coordination with the law enforcement community. In major metropolitan areas, experience with large events, such as political conventions, forced local public health departments to develop their emergency response plans and put the necessary agreements in place to support those plans. Experience with public health emergencies—including natural disasters and outbreaks of infectious disease such as West Nile virus—also allowed officials to work on coordinating their responses across multiple sites, test their surveillance systems, and establish links with other public and private entities. Where previous experience had not allowed officials to identify and address shortcomings of their plans, the anthrax incidents tended to uncover weaknesses. For example, one local public health official reported that although the agency had planned how to set up a prophylaxis clinic it had not actually exercised getting people through the testing and prophylaxis process. During the anthrax response, it took significantly longer than the agency had anticipated to obtain test results from overwhelmed laboratories. This official said that if the agency had known how long it was going to take to get laboratory results, it would have provided the first doses of prophylaxis for a longer duration to take into account the additional time required to obtain test results. Another official reported that the agency’s experience with setting up a prophylaxis clinic during the anthrax response taught the agency how to select more appropriate sites for mass vaccination or prophylaxis clinics in emergency situations. Experience also revealed shortcomings in regional coordination. Several officials noted that although some plans for coordination across jurisdictions were in place, they had not been exercised, and so the relationships to support coordination had not been formed or tested. Local officials identified communication among responders and with the public during the anthrax incidents as a challenge, both in terms of having the necessary communication channels and in terms of making the necessary information available for distribution. Good communication can minimize an emergency, improve response, and reassure the public. Officials reported that although communication among local responders was generally effective, there were problems in communicating with some hospitals and physicians. They also reported that dealing with the media and communicating messages to the public were also challenging. 
Communication among local and state response agencies was generally perceived to be effective and helped keep agency officials informed and the public health response coordinated. Channels of communication between public health agencies and other responders—including law enforcement and emergency management agencies, hazardous material units, and neighboring state public health agencies—were already in existence at the time of the anthrax incidents. Regular conference calls, which were initiated during the incidents, were used to distribute information, raise issues, and answer questions. In addition to telephone calls, local and state public health offices relied on fax machines and the Internet to send and receive information during the incidents. Most local health departments, however, noted that they did not have backup communication systems that could be used in case everyday systems became unavailable. In addition, public health workers did not generally have cell phones, pagers, or laptop computers, which could provide the means to keep working if it became necessary to vacate a building during a crisis. In one epicenter, when an agency had to evacuate its quarters during the incidents and workers could not be at their desks, many of its communication systems (in addition to the information stored in the office in electronic formats) became unavailable. Several local agencies that did not have backup systems available at the time of the anthrax incidents told us they have concluded that it is important to invest in such systems to be prepared for any future public health emergencies. Local response agencies generally got the information they requested from other local agencies. For example, in one epicenter, police and fire departments were given specific protocols for handling suspicious samples and triaging them for the laboratory. However, there were instances in which they did not get needed information. For example, a local emergency response official stated that the local fire department did not know what protective equipment (such as masks and gloves) firefighters should wear when responding to a suspected anthrax incident. The fire department turned to the local health department for answers, but the health department took weeks to release the protocol. State and local officials reported difficulty providing needed information to some hospitals and physicians in a timely way, and members of the medical community expressed concern about the timeliness of the information they received. Physicians recognized that they lacked experience with anthrax and were particularly concerned about missing a diagnosis because of its high fatality rate. They expected to be given rapid and specific instructions from public health officials about how to recognize and treat people who had been exposed. They wanted guidelines, for example, on how to diagnose inhalational anthrax and how to advise individuals who worked in post offices. Hospitals in one epicenter reported receiving daily influxes of people with flulike symptoms. Because these hospitals were seeking guidance on how to distinguish between influenza and anthrax symptoms, the hospital association in the area initiated daily conference calls with concerned clinicians. The purpose of these calls was to collect questions to ask other organizations, such as CDC, to coordinate consistent answers to questions from the public, and to share information about clinical approaches. 
Some of the ways in which local public health agencies tried to communicate with hospitals and physicians were regarded as relatively effective by the agencies, but no method worked well for all targeted recipients. Health departments used various means to make relevant materials available to hospitals and physicians, including sending faxes or e-mail messages, posting relevant information on their Web sites, distributing CD-ROMs, and setting up hotlines. In one state, which had no confirmed anthrax infections but numerous false alarms, the state public health department faxed critical information to hospitals throughout the state. Officials in the department reported that while this system was useful in disseminating information it was insufficient because it did not provide a means of receiving information from the hospitals. E-mail worked well for institutions, but it was an ineffective way of communicating with physicians, especially those who did not have a hospital-based practice. Several local public health officials told us that many private physicians did not have e-mail or Web access. Because electronic messages were not a feasible way of communicating with many clinicians, there was no way to get timely information about anthrax to them. Some primary care physicians were difficult to reach by any mass communication method or even individually because public health officials sometimes did not have up-to-date rosters of their telephone numbers. Officials in one state said they realized during the incidents that they did not have a way to send information directly to dermatologists, a group of specialists who were especially important for detecting the cutaneous form of anthrax infection. Because localities were unable to reach all physicians directly, government agencies relied on physicians and associations who did receive the information to serve as conduits. However, government and association officials agreed that this method did not provide complete coverage of all physicians. Local officials reported that the criminal investigation of the anthrax incidents sometimes hindered their ability to obtain information they needed to conduct their public health response. For example, public health officials in one epicenter said that they were unable to get certain information from the FBI because the local public health officials lacked security clearances. They said that if they had received more detailed information earlier about the nature of the anthrax spores in the envelopes, it might have affected how their agencies were responding. In addition, a laboratory director in one of the epicenters reported that the criminal investigation led to constraints on his ability to communicate laboratory results to clinicians. Just as information was not provided to government agencies because of law enforcement considerations, officials stated that criminal aspects of the incidents complicated the distribution of information to the public. Officials expressed concern about the necessity of withholding some information from the public. One official reported that communication with the public was constrained when the situation became a criminal investigation. She was concerned that information the public needed to understand its risk was no longer being provided. Officials in one epicenter told us that they were concerned that constraints on the ability of local public health departments to communicate could lead to a loss of credibility. 
More generally, officials reported that fear in the community could have been reduced if they had been able to release more information to the media and the public. Local and state officials reported that although they were generally successful in persuading people to seek treatment, they encountered difficulties in providing needed information to the media and local public during the anthrax incidents. Because the incidents were taking place in many locations, local communications were complicated by the public’s exposure to information about other localities and from the national media. Local and state officials realized that they needed to use the media to disseminate information to the public and that they needed to be responsive to the media so that the information the media were providing was accurate. Public health and other government officials in the epicenters held regular press conferences to keep the public informed about local developments, made officials available to respond to media requests, and developed informational materials so that the media and the public could be better informed. Several officials stated that the media helped in publicizing sources of information such as hotlines and specific information such as details about who should seek treatment and where to go for it. However, media analysts have also noted that the media were sometimes responsible for providing incorrect information. For example, one official said that when the media reported that nasal swabbing was the test for anthrax, individuals sought unnecessary nasal swab testing from emergency rooms, physicians, and the health department, and thereby diverted medical and laboratory resources from medical care that was required elsewhere. Communication with the public was further complicated by the evolving nature of the incidents and the local public’s exposure to information from other localities and the national media. Comparisons of actions taken by officials at different points in time and in different areas caused the public to question the consistency and fairness of actions taken in their locale. For example, the affected public in some epicenters wondered why they were being given doxycycline for prophylaxis instead of ciprofloxacin, which had been heralded in the media as the drug of choice for the prevention of inhalational anthrax and used earlier in other epicenters. CDC’s initial recommendation for ciprofloxacin was made because ciprofloxacin was judged to be most likely to be effective against any naturally occurring strain of anthrax and had already been approved by FDA for use in postexposure prophylaxis for inhalational anthrax. However, when it was determined that doxycycline was equally effective against the strain of anthrax in the letters and following FDA’s announcement that doxycycline was approved for inhalational anthrax, the recommendation was changed. This change was made because of doxycycline’s lower risk for side effects and lower cost and because of concerns that strains of bacteria resistant to ciprofloxacin could emerge if tens of thousands of people were taking it. In epicenters where prophylaxis was initiated after the recommendation had changed, officials followed the new recommendation and gave doxycycline to affected people. Local officials were challenged to explain the switch and address concerns raised by affected groups about apparently differential treatment. 
One local official described the importance of explaining that the switch was also taking place even in locations that had started with ciprofloxacin. Elements of the local and state public health response systems—including the public health department and laboratory workforce as well as laboratories—were strained by the anthrax incidents to an extent that many local and state officials told us that they might not have been able to manage if the crisis had lasted longer. The anthrax incidents required extended hours for many public health workers investigating the incidents, as well as the assignment of new tasks, including the staffing of hotlines, to some workers. Aside from problems of workforce capacity, some clinical laboratories were not prepared in terms of equipment, supplies, or available laboratory protocols to test for anthrax, and most of them were unprepared for and overwhelmed by the large number of environmental samples they received for testing. The systems experienced these stresses in spite of assistance from CDC and DOD, and temporary transfers of local, and in some cases regional, resources. During the anthrax incidents, the workload increased greatly at local and state health departments and laboratories and across the country. The departments heightened their disease surveillance, investigated false alarms and hoaxes as well as potential threats, tested large numbers of samples, and performed other duties such as answering calls on telephone hotlines that were set up to respond to questions from the public. Health departments across the nation received thousands of such calls. For example, officials at one location told us that they received 25,000 calls over a 2-week period during the crisis. Nine states—Colorado, Connecticut, Louisiana, Maryland, Montana, North Dakota, Tennessee, Wisconsin, and Wyoming—reported to CDC that during the week of October 21 to 27, 2001, they received a total of 2,817 bioterrorism-related calls. These nine states also reported that during that week they conducted approximately 25 investigations per state and had from 8 to 30 state personnel engaged full-time in the responses in each state. Some local and state health departments had to borrow workers from other parts of their agencies or from outside of their agencies, such as from CDC and DOD, to meet the greater demands for surveillance, investigation, laboratory testing, and other duties related to the incidents. Several agencies realized that they lacked staff in particular specialties, such as environmental epidemiology. Some state public health departments did not have enough epidemiologists to investigate the suspected cases in their localities and had to borrow staff from other programs. Health workers were pulled from other jobs to work in the field or to staff the telephone hotlines. Staff borrowed from other parts of the agency were sometimes unable to fulfill their traditional public health duties, such as working on prevention of sexually transmitted diseases, and some routine work was delayed. In spite of the borrowing, staff at some agencies worked long hours over a number of weeks. In some cases, state laboratories had to borrow staff from various parts of their health department because laboratory workers were overwhelmed and the laboratories required staffing for 24 hours a day, 7 days a week. In some locations, CDC provided epidemiologists and laboratorians to help fill gaps in staff. 
Some borrowed workers had to be trained for the specific tasks required by the incidents while the incidents were ongoing, and some had to be trained or cross-trained in two fields, requiring additional time from other staff and resources from the department. Finding sufficient numbers of people who were appropriately trained or could be efficiently trained to staff the telephone hotlines effectively was also a challenge. Local officials reported that even when sufficient staff were found, calls were not always handled effectively, especially when the caller needed mental health services. Many officials we interviewed were concerned about their ability to deal with demand on staff in future crises. Since the anthrax incidents, some states have sent members of their staff for additional training. Some officials emphasized that surge capacity should be flexible to ensure preparedness for various types of future bioterrorism incidents. In addition to overwhelming the laboratory workforce, the large influx of samples strained the physical capacity of the laboratories. Public health laboratories around the country tested thousands of white powders and other environmental samples as well as clinical samples. According to CDC, during the anthrax incidents, laboratories within the Laboratory Response Network tested more than 120,000 samples, the bulk of which were environmental samples. Officials from one state told us that its laboratories did not have the capacity to handle the volume of work they received. Some local and state public health laboratories could not analyze anthrax samples because of limitations of equipment, supplies, or laboratory protocols. For example, in some states there were a limited number of biological safety cabinets, which were needed to prevent inhalation of anthrax spores by laboratory workers during the testing of samples. Some laboratories did not have the chemicals needed to conduct the appropriate tests. In some states, none of the state laboratories could conduct an essential diagnostic test for anthrax, the polymerase chain reaction test. In another state, only one of three state laboratories could perform this test. Some state and local laboratories were not prepared to take the safety precautions required to test samples for anthrax. Local laboratories were even less capable of doing anthrax testing. Samples for confirmatory testing were sent to CDC or to DOD’s USAMRIID. In addition to performing confirmatory testing, DOD also provided other laboratory support to state and local officials. For example, the samples from one epicenter were sent to DOD, and the department sent mobile laboratories to two other epicenters to assist with testing samples. Moreover, although some laboratories were relatively well prepared to test clinical samples, they were not expecting the hundreds of environmental samples they received and did not have protocols prepared for testing them. It was the volume of these environmental samples, rather than the volume of the clinical samples, that overwhelmed the laboratories. Among the environmental samples, there were white powder samples that arrived without any assessment by law enforcement as to the level of threat they posed.
At least one state laboratory developed protocols so that law enforcement personnel could triage samples, thereby increasing the likelihood that only those samples with a relatively high threat level would be forwarded to the laboratory for further testing. Even where protocols for testing these samples were available, it was a time-consuming and unfamiliar task for the laboratory to label them, track their progress, and ensure that their results were reported to the appropriate authority. CDC led the federal public health response to the anthrax incidents, and the experience showed aspects of federal preparedness that could be improved. During the anthrax incidents, CDC was designated to act on behalf of HHS in providing national leadership in the public health and medical communities. As the lead agency in the federal public health response, CDC had to not only provide public health expertise but also manage the public health response efforts across epicenters and among other federal agencies. While local and state officials reported that CDC’s support of their responses to the rapidly unfolding anthrax incidents at the local and state levels was generally effective, CDC acknowledged that it was not fully prepared for the challenge of coordinating the public health response across the federal agencies. CDC experienced difficulty serving as the focal point for communicating critical information during the response. In addition to straining CDC’s resources, the anthrax incidents highlighted shortcomings in the clinical tools available for responding to anthrax, such as vaccines and drugs, and a lack of training for clinicians on how to recognize and respond to anthrax. CDC effectively responded to heavy resource demands from state and local officials to support the local responses. CDC reported that its support activities included surveillance; clinical, epidemiologic, and environmental investigation; laboratory work; communications; coordination with law enforcement; medical management; administration of prophylaxis; monitoring of adverse events; and decontamination. As new epicenters became involved, CDC dispatched additional agency staff to assist local and state health departments and other groups playing a role in the response efforts, eventually deploying more than 350 employees to the six epicenters. In addition, because even the perception of danger required a public health response, CDC also provided assistance as requested in localities beyond the epicenters. From October 8 to 31, 2001, CDC’s emergency response center received 8,860 telephone inquiries from all 50 states, the District of Columbia, Puerto Rico, Guam, and 22 foreign countries. CDC’s callers included health care workers, local and state health departments, the public, and police, fire, and emergency departments; their requests included information about anthrax vaccines, bioterrorism prevention, and the use of personal protective equipment. Thus CDC not only provided resources to the epicenters but also had to coordinate local efforts nationwide. Local public health offices required varying levels of assistance from CDC. For example, in one epicenter local officials looked to CDC to lead the epidemiologic investigation and relied primarily on CDC staff. In contrast, local officials in another epicenter led the local disease outbreak investigation and control effort, and CDC staff supplemented a large local team.
In most of the epicenters, the team sent by CDC included Epidemic Intelligence Service (EIS) officers, who are specially trained epidemiologists, to help with the investigation. The team’s epidemiologic investigation used the traditional two-pronged approach in which it completely investigated each confirmed case or exposure and conducted intensive surveillance to identify any other anthrax cases or exposures. Laboratory testing proved to be an important tool in the epidemiologic investigation, and the CDC team also included laboratorians, who assisted with laboratory testing. In one epicenter, CDC also sent one of its anthrax experts to provide guidance and assist the local and state officials. In addition to playing its traditional role of supporting local and state public health departments, CDC also was confronted with the challenge of coordinating the public health activities of multiple federal agencies involved in the response, a task for which it acknowledged it was not wholly prepared. CDC described having to create an ad hoc emergency response center in an auditorium from which to manage the federal public health response, which involved numerous agencies. These included FDA, which, among other activities, provided guidance on treatment and addressed drug and blood safety issues. In addition, NIH provided scientific expertise on anthrax. CDC also coordinated with federal agencies working on the environmental and law enforcement aspects of the response efforts. DOD was responsible for testing all of the anthrax letters that were recovered and was involved in the transportation and testing of environmental samples as well as the cleanup of contaminated buildings. EPA was in charge of the cleanup of contaminated sites. FEMA assisted the President’s Office of Homeland Security in establishing and supporting an emergency support team. The FBI led the criminal investigation. Although CDC’s planning efforts prior to the anthrax incidents had identified the importance of coordination with other federal agencies for an effective response to bioterrorism, and CDC had developed some working groups among federal agencies, CDC sometimes had to adjust its response as events unfolded to facilitate coordination on more practical issues such as conducting simultaneous investigations in the field. For example, CDC told us that in one epicenter both CDC and the FBI, which needed to collect samples for the forensic investigation, identified the need to gain a better understanding of one another’s work. During the incidents, CDC provided a liaison to the FBI, and the agencies worked together to collect laboratory samples. Since the anthrax incidents, CDC has held joint training with the FBI to discuss what they learned from their experience that could facilitate working together in the future. CDC has made several efforts to improve coordination since the anthrax incidents, including major structural changes within the agency, creation of a permanent emergency operations center (EOC), and increased collaborative efforts with others within and outside of HHS. Officials point to the creation of the Office of Terrorism Preparedness and Emergency Response, which is part of the Office of the Director, as a major change.
The primary functions of this office are to provide strategic direction for CDC to support terrorism preparedness and response efforts, secure and position resources to support activities, and ensure that systems are in place to monitor performance and manage accountability. The office manages the cooperative agreement program to enhance local and state preparedness and jointly manages the Strategic National Stockpile with the Department of Homeland Security. The office also manages the EOC, which was created to promote quicker and better-coordinated responses to public health emergencies across the country and around the globe. The EOC is staffed 24 hours a day, 7 days a week, and the staff includes officials from FEMA, DOD, and other agencies. CDC also created a permanent position of CDC liaison to the FBI to increase collaboration with that agency. CDC served as the focal point for information flow during the anthrax incidents, but experienced some difficulty in fulfilling that role. In addition to the varied responsibilities involved in leading the public health response, the agency concurrently had to collect and analyze the large amount of incoming information on the anthrax incidents, assemble and analyze the available scientific information on anthrax, and produce guidance and other information based on its analyses for dissemination to officials, other responders, the media, and the public. CDC officials reported that the agency had difficulty producing and disseminating this guidance rapidly as well as difficulty conveying information to the media and the public. CDC officials acknowledged that the agency was not always able to produce guidance as quickly as it would have liked. When the incidents began, it did not have a nationwide list of outside experts on anthrax, and it had not compiled all of the relevant scientific literature on anthrax. Consequently, CDC had to do time-consuming research to gather background information to inform its decisions, which slowed the development of its guidance. CDC has since compiled background information and lists of experts not only for anthrax but also for the other biological agents identified as having the greatest potential for adverse public health impact with mass casualties in a terrorist attack, and it has made the background information available on its Web site. CDC officials reported that CDC also had difficulty compiling the information it received during the incidents. Although CDC’s role as focal point for information was a familiar one, the magnitude of information it received was unusual. CDC received a tremendous amount of information via e-mail, phone, fax, and news media reports from such sources as the agencies and organizations in the epicenters of the incidents, public health departments not in the epicenters, other federal agencies, and international public health organizations. CDC also received information from its staff in the field, but encountered some problems in those communications. Agency officials have said there were communication problems between epidemiologic staff in the field and at headquarters, which CDC attempted to address by holding “mission briefings” through its emergency response center; however, these briefings were not conducted regularly.
CDC’s efforts to manage all of this incoming information, and the associated internal communication problems, were complicated by its concurrent responsibility for coordinating the day-to-day activities involved in the federal public health response to the unfolding incidents. According to CDC, both clinical and environmental guidance was developed during the incidents by working groups of six to eight employees who were subject matter experts. Keeping up with the influx of new information that was being acquired daily proved to be a challenge for these working groups. CDC officials told us that no group at CDC was responsible for collecting and analyzing all of the data that were coming in and that few people at CDC had time to read their e-mail messages during the incidents. Since the incidents, CDC has established teams of scientists from inside and outside CDC whose only role is to review and analyze information during a crisis; CDC does not intend for these teams to be involved in day-to-day response operations. As the working groups incorporated new information into their analyses, the guidance they were producing changed accordingly. For example, as the epidemiologic investigation expanded, CDC had to revise its assessment of the risk of developing inhalational anthrax from letters containing anthrax spores. Early on, CDC was acting on the theory that there was little risk of contracting inhalational anthrax from sealed letters. The incidents in the Washington, D.C., regional area, the fifth epicenter, represented a turning point in the epidemiologic investigation. The discovery of inhalational anthrax in a postal worker who presumably had been in contact only with sealed anthrax letters required CDC to revise its assessment. From this point on, CDC presumed that any exposure would put an individual at risk and changed its recommendation regarding who should get prophylaxis accordingly. CDC began to recommend prophylaxis for all individuals who had been in contact with sealed as well as unsealed anthrax letters, whereas earlier the agency had not been recommending such treatment unless an individual had been exposed to an opened letter. Initially, CDC relied on the HAN communication system and its Morbidity and Mortality Weekly Report (MMWR) publication to disseminate its guidance and other information; however, during the incidents there were difficulties with both of these methods. At the time of the incidents, all state health departments were connected to the HAN system. However, only 13 states were connected to all of their local health jurisdictions, and therefore HAN messages could not reach many local areas. Some states were satisfied with the information they received via HAN, but others claimed they did not get much information from HAN and what they did get was incomplete. During the incidents, CDC expanded its list of HAN recipients to include additional organizations, including medical associations. MMWR is issued on a weekly basis, and so the information in the latest issue was not always completely up-to-date for incidents that were unfolding by the hour. For example, information published in MMWR on October 26, 2001, contained the notice that the information was current as of October 24, 2001. In addition to these structural barriers to getting information out quickly to those who needed it, CDC’s internal process of clearing information before issuance through HAN or MMWR was time-consuming.
CDC has since changed its clearing process so that information can get out faster. The agency also made a number of other changes during the incidents to address some of the difficulties it encountered in providing information to the public health departments and clinicians. These included bringing in professionals from other communication departments in CDC to help get information out quickly, issuing press releases twice a day, and holding telebriefings. Since the incidents, CDC has taken actions to expand its communication capacity, including developing an emergency communication plan, increasing the number of health experts on staff, and establishing a pressroom, in which the Director of CDC gives press briefings on public health efforts. In addition, it has developed, and posted to its Web site, information to assist local and state health officials in detecting and treating individuals infected with agents considered likely to be used in a bioterrorist attack. During the anthrax incidents, the media and the public looked to CDC as the source for health-related information, but CDC was not always able to successfully convey the information that it had. Media analysts and other commentators have asserted that although CDC officials were the most authoritative spokespersons, they were not initially the most visible. In an October 2001 nationwide poll, respondents indicated that they considered the Director of CDC and the U.S. Surgeon General to be better sources of reliable information about the outbreak of disease caused by bioterrorism than other federal officials mentioned in the survey. Another problem CDC encountered in its efforts to communicate messages to the public was difficulty in conveying the uncertainty associated with the messages, that is, the caveat that although the messages were based on the best available information, they were subject to change when new facts became known. As a bioterrorist event unfolds and new information is learned, recommendations about who is at risk and how people should be treated may change, and the public needs to be prepared for the possibility that such changes may occur. Local officials and academics have criticized CDC’s communication of uncertainty during the anthrax incidents. CDC officials have acknowledged that they were unsuccessful in clearly communicating their degree of uncertainty as knowledge was evolving during the incidents. For example, although there were internal disagreements at CDC over the appropriate length of prophylaxis, this uncertainty was not effectively conveyed to the public. Consequently, in December 2001, when many people were finishing the 60-day antimicrobial regimen called for in CDC’s guidance, the public questioned CDC’s announcement that patients might want to consider an additional 40 days of antimicrobials. Since the incidents, CDC officials have acknowledged the necessity of expressing uncertainty in terms the public can understand and appending appropriate caveats to the agency’s statements. The anthrax incidents highlighted some of the strengths of the federal public health response capacity, while also revealing some of its limitations. CDC’s experience with epidemiologic investigations was drawn on extensively and effectively, and the Laboratory Response Network played an important role. Not all the clinical tools that were needed to identify, treat, and prevent anthrax infection were available, and those that were available had shortcomings.
Although CDC’s bioterrorism preparedness training program for clinicians had begun at the time of the incidents, most clinicians had not yet been trained to recognize and report anthrax infection. CDC’s skills in disease investigation were heavily relied on during the anthrax incidents. CDC teams worked with local and state public health departments and law enforcement to determine what happened with each case. CDC’s EIS was an important component of the agency’s response. The availability of trained epidemiologists enabled CDC to send a number of them to each epicenter to provide temporary staff to help investigate the nature and extent of the local incident. CDC reported that because of the number of epicenters and calls for assistance from other localities, its staff, both at headquarters and in the field, were spread thin. The level of assistance provided by CDC depended on the needs of the local public health departments and therefore varied considerably by location. For example, while CDC epidemiologists augmented the staff of some local and state health departments who would have been severely overtaxed without CDC’s help, the agency characterized its role in one epicenter as supplementary to that epicenter’s team of epidemiologists. The Laboratory Response Network proved to be an asset, and some state and local officials told us they were satisfied with the laboratory response during the anthrax incidents. At that time, CDC laboratories, like many of the laboratories in the network, were inundated with samples and operated 24 hours a day to help epidemiologists determine exposure and risk by testing samples to confirm cases. From October 2001 to December 2001, the network laboratories processed more than 120,000 samples for Bacillus anthracis. Public health laboratories other than those at CDC tested 69 percent of these samples, DOD laboratories tested 25 percent, and CDC laboratories tested 6 percent. In addition to testing samples at its laboratories, DOD also assisted the epicenters by providing personnel for laboratories in the epicenters and at CDC and operating portable laboratories to support local investigations. In addition to testing samples, CDC laboratories distributed chemicals needed for testing samples to network laboratories and developed a new testing method that permitted better diagnostics from biopsy samples. CDC used the network to send information to state bioterrorism response coordinators in local and state laboratories. State laboratories also communicated with each other and with CDC by using the network. However, there were signs of strain in the Laboratory Response Network. USAMRIID officials told us that USAMRIID, as well as other military and civilian laboratories, was set up to process clinical samples and was unprepared to process the volume and types of environmental samples that it received. They noted that many of the procedures for obtaining environmental samples from objects, such as keyboards and telephones, had never been standardized. Officials reported that they spent a great deal of time developing and validating these procedures as the incidents unfolded. In addition, DOD laboratory officials told us that they had to process overflow samples from overwhelmed laboratories at CDC and in the epicenters. DOD officials expressed concern about dependence on DOD laboratory resources for civilian emergencies, noting that in wartime DOD’s laboratories are needed to support military operations.
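The testing workload split reported above is given only in percentages. As a rough illustration, and assuming the reported total of more than 120,000 samples as the base (so the figures below are approximate and should be read as lower bounds), the shares correspond to sample counts of roughly:

$$
\begin{aligned}
\text{Non-CDC public health laboratories:}\quad & 0.69 \times 120{,}000 \approx 82{,}800 \text{ samples}\\
\text{DOD laboratories:}\quad & 0.25 \times 120{,}000 \approx 30{,}000 \text{ samples}\\
\text{CDC laboratories:}\quad & 0.06 \times 120{,}000 \approx 7{,}200 \text{ samples}
\end{aligned}
$$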
The Strategic National Stockpile was also an asset in CDC’s response efforts. The anthrax incidents underscored the benefits of having a system in place to transport antimicrobials and vaccines quickly to areas that need them during emergencies. The Strategic National Stockpile program delivered antimicrobial medications for postexposure prophylaxis and provided for the transportation of anthrax vaccine, clinical and environmental samples, and CDC personnel, including epidemiologists, laboratory scientists, pathologists, and special teams of researchers. Not all of the clinical tools that physicians needed to identify, treat, and prevent anthrax infection were available, and those that were had shortcomings. Clinicians did not suspect anthrax and had difficulty diagnosing it promptly because of their inexperience with the disease and because of the nonspecific nature of its presenting symptoms. Cutaneous anthrax can be confused with cellulitis or a spider bite. Inhalational anthrax is difficult to distinguish from other respiratory illnesses, such as pneumonia or influenza. Routine laboratory and radiological testing did not always clearly signal anthrax infection, and, even after physicians did suspect it, the laboratory tests needed to confirm it were time-consuming and laborious and required that samples be sent to specialized laboratories. Diagnostic tests that are more accurate and can yield results more quickly are in development. Treatment for anthrax infection was available, but it was not effective in almost half of the inhalational cases. Both inhalational and cutaneous anthrax, once diagnosed, were treated with a combination of intravenous antimicrobial medications. All of the patients with cutaneous anthrax recovered, but 5 of the 11 patients with inhalational anthrax did not. The drugs worked by killing the bacteria that develop from anthrax spores after the spores germinate in the body. However, anthrax bacteria produce toxins, and no treatments were available that could destroy these toxins. For this reason, the antimicrobial drugs used to treat inhalational anthrax were ineffective in those patients in whom the bacteria had already produced too much toxin by the time treatment was initiated. CDC is working with other agencies within HHS, such as NIH, and other federal agencies, including DOD, to support the development of new treatments for anthrax and other potential agents of bioterrorism. Methods of prophylaxis for people exposed to anthrax spores were available and apparently effective, but there were several difficulties with these methods. There was uncertainty about how to assess exposure to determine who should be given prophylaxis; initially only one drug had been approved for prophylaxis, and it was approved only for prophylaxis of inhalational anthrax; the optimal length of prophylaxis for those thought to have been exposed to anthrax spores was unknown; prophylactic drugs had to be taken for months and had side effects; and the anthrax vaccine required more than one dose, had not been approved for postexposure prophylaxis, and was in short supply. Nasal swabs and blood tests were used early in the investigation to assess exposure, but these were not reliable methods. When there was uncertainty about who was exposed or how great their risk from exposure was, prophylaxis was sometimes recommended for all workers in a facility with some contamination, regardless of how close to the contamination the workers had been.
This prophylaxis often started with an initial supply of medication while test results were awaited. For example, some people were given a 10-day supply of drugs and asked to return within 10 days to learn whether they needed to continue taking the drugs. Initially, CDC, with advice from NIH, recommended prophylaxis for 60 days. The drugs had side effects, and the rate of compliance with the regimen was typically about 40 percent. Since the incidents, federal agencies have been developing and evaluating tools for detecting anthrax spores. Such tools could enable field workers to make better initial assessments of exposure at particular locations to determine who should get prophylaxis. CDC is working with other federal agencies to support the development of new methods of prophylaxis for anthrax and other potential agents of bioterrorism. HHS reported that at the time of the anthrax incidents no system or data collection instruments existed for monitoring the nearly 10,000 people who were receiving prophylaxis, and thus it did not have a way to collect information on compliance with, adverse events from, or effectiveness of prophylaxis. CDC attempted to collect this information retrospectively, but acknowledged that this method is not optimal. To improve preparedness for future incidents, CDC and FDA have created a post-event surveillance working group that is responsible for developing a system capable of collecting this kind of data. During the anthrax incidents, it became apparent that few clinicians had been trained to recognize anthrax infections. In November 2000, CDC had created a national training plan for bioterrorism preparedness and response. The plan outlined training required to implement the agency’s Bioterrorism Event Response Operational Plan and strategies for training public health and medical professionals in collaboration with partners (chiefly public health organizations and professional groups such as the American Medical Association). At the time of the anthrax incidents, CDC had been implementing the plan for less than a year, and relatively few people had been trained: CDC reports that by October 2001 about 12,000 physicians, nurses, and other medical professionals had completed the programs. However, CDC estimated that during the incidents more than one million medical professionals participated in its anthrax-related training programs via satellite, Web, video, and phone. In addition to CDC’s training programs, which continue to be available, CDC collaborates with professional organizations, such as the American Medical Association and the American Nurses Association, to provide training for their members, and other federal agencies present training programs on bioterrorism (for example, AHRQ) or fund training programs on bioterrorism (for example, the Health Resources and Services Administration). The anthrax incidents of 2001 required an unprecedented public health response. The specific nature of the incidents and the nature of the response varied across the epicenters and other localities around the country. In each epicenter, local officials had to coordinate responses that were a combination of local, state, and federal efforts.
In addition, local public health officials in the epicenters were challenged to mount an intensive response that included identifying and treating people already infected with anthrax as well as people who had been exposed and could become infected, identifying contaminated areas and preventing additional people from being exposed, processing thousands of samples suspected of containing anthrax, and responding to thousands of calls from concerned members of their communities. The public health response to the anthrax incidents both demonstrated the benefit of public health preparedness measures already in place or under way at the local, state, and federal levels and emphasized the need to reinforce or expand on those measures. The specific strengths and weaknesses of the public health response identified by local and state public health officials varied. Nonetheless, public health officials from all locations identified general lessons learned for public health preparedness. The lessons identified fall into three general categories: the benefits of planning and experience; the importance of effective communication, both among those involved in the response efforts and with the general public; and the critical importance of a strong public health infrastructure to serve as the foundation from which response efforts can be mounted for bioterrorism or other public health emergencies. CDC was instrumental in supporting local and state efforts throughout the anthrax incidents, for example, by sending epidemic investigators into the field and providing laboratory expertise. DOD resources and expertise were also required to support several epicenters. CDC was challenged with the unfamiliar task of coordinating the extensive federal public health response efforts. Before the incidents began, CDC officials had recognized that the agency was not fully prepared to coordinate a major public health response effort and indeed had identified areas that needed improvement in testimony before Congress on the day before it confirmed the first case of inhalational anthrax in Florida. CDC officials have acknowledged that the agency did not perform as well as it would have liked during the incidents. The agency has taken steps to improve future performance, including creating the Office of Terrorism Preparedness and Emergency Response within the Office of the Director, building and staffing an emergency operations center, enhancing the agency’s communication infrastructure, and developing and maintaining databases of information and expertise on the biological agents the federal government considers most likely to be used in a terrorist attack. We obtained comments on our draft report from DOD and HHS. (See apps. II and III.) DOD highlighted that lessons learned from its support of the public health response could aid in the development of expanded capabilities within the civilian sector to improve the nation’s public health preparedness. DOD emphasized its capabilities that were vital to the success of the public health response, including environmental assessment, transportation of contaminated articles, laboratory testing, and cleanup of contaminated locations. The environmental cleanup was beyond the scope of this report. HHS found the report to be informative and provided additional examples of actions taken to enhance national preparedness for bioterrorism and other public health emergencies. 
These examples included the establishment of the Office of Public Health Emergency Preparedness; the accelerated acquisition of antimicrobial drugs for the Strategic National Stockpile; and the expansion of basic and targeted research and upgrading of research facilities focused on the pathogens most likely to be used as bioterrorism agents. DOD and HHS also made technical comments, which we incorporated where appropriate. We are sending copies of this report to the Secretary of DOD, the Secretary of HHS, and other interested officials. We will also provide copies to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7119. Another contact and key contributors are listed in appendix IV.

Events Occurring on That Date / Events Determined Retrospectively to Have Occurred on That Date (in italics)

Terrorist attack on World Trade Center and Pentagon prompts heightened epidemiologic surveillance activities in some areas. In New York (NY), two NBC employees, a New York Post employee, and the child of an ABC employee, and in New Jersey (NJ), two U.S. Postal Service (USPS) employees, one from the West Trenton postal facility and one from the Hamilton postal facility, seek medical attention for skin conditions. In Florida, an American Media Inc. (AMI) employee is admitted to the hospital with a respiratory condition. The Centers for Disease Control and Prevention (CDC) issues a Health Alert Network (HAN) alert regarding preparedness for bioterrorism, acknowledging the public’s concern about smallpox and anthrax and providing information about preventive measures. In Florida, a second AMI employee is admitted to the hospital, with a diagnosis of meningitis. CDC and the Florida Department of Health announce confirmation of a case of inhalational anthrax. The infected person is an AMI employee, and the cause of the infection is unknown. In Florida, an AMI employee becomes the first anthrax victim to die. In Florida, the AMI building is closed after anthrax spores are found. In Florida, prophylaxis of AMI employees begins. Because the source of the AMI employee’s anthrax exposure is believed to have been a letter, USPS begins nationwide employee education on signs of anthrax exposure and procedures for handling mail to avoid anthrax infection. In NY, the New York City Department of Health (NYCDOH) announces the confirmation of a case of cutaneous anthrax in an NBC employee. USPS says that it will offer gloves and masks to all employees who handle mail. On Capitol Hill, an employee opens a letter addressed to Senator Daschle thought to contain anthrax spores. People thought to be in the vicinity of the letter when it was opened are treated with ciprofloxacin, at the time the only drug approved for postexposure prophylaxis for anthrax. In Florida, CDC confirms a second case of inhalational anthrax in an AMI employee. In NY, NYCDOH announces a second case of cutaneous anthrax, in a child of an ABC employee. In the Washington, D.C., regional area (DC), USPS reports that although it believes that the Daschle letter, which was processed at the Brentwood postal facility, was extremely well sealed and that there was a minute chance that anthrax spores escaped into the facility, it is testing the facility for anthrax contamination; quick tests are negative; other tests are sent to the laboratory.
In NJ, laboratory testing confirms cutaneous anthrax in two USPS employees, one from the West Trenton postal facility and one from the Hamilton postal facility. In NY, NYCDOH announces a third case of cutaneous anthrax, in a CBS employee. In Florida, USPS closes two postal facilities contaminated with anthrax spores for cleaning. In a telebriefing, the Director of CDC provides information about anthrax, including risk of exposure, availability of vaccines and antimicrobial medications, screening tests, symptoms, and what to do with suspicious mail, and also explains CDC’s role in the investigation. CDC broadcasts part one of a live satellite and Web broadcast on anthrax for clinicians. FDA announces that it has approved doxycycline for postexposure prophylaxis for anthrax. In DC, a USPS employee who works at the Brentwood postal facility seeks medical attention. In DC, a USPS employee who works at both the Brentwood postal facility and a Maryland postal facility is admitted to a hospital with suspected inhalational anthrax. In NJ, the Hamilton and West Trenton postal facilities are closed, and the New Jersey Department of Health and Senior Services recommends that all USPS employees from both facilities receive prophylaxis. In NJ, laboratory testing confirms cutaneous anthrax in a second USPS employee who works at the Hamilton postal facility. In NY, NYCDOH announces a fourth case of cutaneous anthrax, in a New York Post employee. In DC, a third USPS employee who works at the Brentwood postal facility is admitted to a hospital with a respiratory condition. In DC, the USPS employee who worked at the Brentwood and Maryland postal facilities and was admitted to the hospital on 10/19/01 is confirmed to have inhalational anthrax. In DC, the Brentwood and Maryland postal facilities are closed, and evaluation and prophylaxis of employees begin. In DC, a USPS employee who worked at the Brentwood postal facility and who initially sought medical attention on 10/18/01 is admitted to a hospital with suspected inhalational anthrax and becomes the second anthrax victim to die. In DC, a fourth USPS employee who worked at the Brentwood postal facility seeks medical attention at a hospital. His chest X-ray is initially determined to be normal, and he is discharged. In DC, the USPS employee who worked at the Brentwood postal facility and who sought medical attention on 10/21/01 and was discharged is admitted to the hospital with suspected inhalational anthrax, and becomes the third anthrax victim to die. In DC, the USPS employee who was admitted to the hospital on 10/20/01 is confirmed to have inhalational anthrax. In DC, prophylaxis is expanded to include all employees and visitors to nonpublic areas at the Brentwood postal facility. CDC rebroadcasts part one of the live satellite and Web broadcast on anthrax for clinicians. In NY, USPS begins giving prophylaxis to employees at six New York City postal facilities where contaminated letters may have been processed. In DC, a State Department mail facility employee is called back to the hospital for admission; a test taken the previous day is positive for inhalational anthrax. In NY, NYCDOH announces a fifth case of cutaneous anthrax, in a second NBC employee. CDC initiates daily telebriefings to provide updates on the anthrax incidents. In NY, NYCDOH announces the sixth case of cutaneous anthrax, in a second New York Post employee.
In NJ, laboratory testing confirms inhalational anthrax in a USPS Hamilton employee who was admitted to a hospital with suspected inhalational anthrax on 10/19/01. In NY, preliminary tests indicate anthrax in a hospital employee who was admitted with suspected inhalational anthrax on 10/28/01. The hospital where she works is temporarily closed, and NYCDOH recommends prophylaxis for hospital employees and visitors. In NJ, laboratory testing confirms cutaneous anthrax in a woman who receives mail directly from the Hamilton facility. The woman originally sought medical attention on 10/18/01 and was admitted to the hospital on 10/22/01 for a skin condition. In NJ, laboratory testing confirms a second case of inhalational anthrax, in a USPS Hamilton employee who initially sought medical attention on 10/16/01 and was admitted to the hospital on 10/18/01 with a respiratory condition. In NY, the hospital employee becomes the fourth anthrax victim to die. CDC broadcasts part two of the live satellite and Web broadcast on anthrax for clinicians. In NY, NYCDOH announces the seventh case of cutaneous anthrax, in a third New York Post employee. In Connecticut, an elderly woman, who was admitted to the hospital for dehydration on 11/16/01, becomes the fifth anthrax victim to die. The Connecticut Department of Public Health, in consultation with CDC, begins prophylaxis for USPS employees working in the Seymour and Wallingford postal facilities. CDC expands the options for those on prophylaxis to include extending the duration of drug therapy and adding the anthrax vaccine. As of September 30, 2003, the source of exposure had not been confirmed.

In addition to the contact named above, Robert Copeland, Charles Davenport, Donald Keller, Nkeruka Okonmah, and Roseanne Price made key contributions to this report.

Infectious Diseases: Gaps Remain in Surveillance Capabilities of State and Local Agencies. GAO-03-1176T. Washington, D.C.: September 24, 2003.
Hospital Preparedness: Most Urban Hospitals Have Emergency Plans but Lack Certain Capacities for Bioterrorism Response. GAO-03-924. Washington, D.C.: August 6, 2003.
Severe Acute Respiratory Syndrome: Established Infectious Disease Control Measures Helped Contain Spread, but a Large-Scale Resurgence May Pose Challenges. GAO-03-1058T. Washington, D.C.: July 30, 2003.
Capitol Hill Anthrax Incident: EPA’s Cleanup Was Successful; Opportunities Exist to Enhance Contract Oversight. GAO-03-686. Washington, D.C.: June 4, 2003.
Bioterrorism: Information Technology Strategy Could Strengthen Federal Agencies’ Abilities to Respond to Public Health Emergencies. GAO-03-139. Washington, D.C.: May 30, 2003.
U.S. Postal Service: Issues Associated with Anthrax Testing at the Wallingford Facility. GAO-03-787T. Washington, D.C.: May 19, 2003.
SARS Outbreak: Improvements to Public Health Capacity Are Needed for Responding to Bioterrorism and Emerging Infectious Diseases. GAO-03-769T. Washington, D.C.: May 7, 2003.
Smallpox Vaccination: Implementation of National Program Faces Challenges. GAO-03-578. Washington, D.C.: April 30, 2003.
Infectious Disease Outbreaks: Bioterrorism Preparedness Efforts Have Improved Public Health Response Capacity, but Gaps Remain. GAO-03-654T. Washington, D.C.: April 9, 2003.
Bioterrorism: Preparedness Varied across State and Local Jurisdictions. GAO-03-373. Washington, D.C.: April 7, 2003.
U.S. Postal Service: Better Guidance Is Needed to Improve Communication Should Anthrax Contamination Occur in the Future. GAO-03-316. Washington, D.C.: April 7, 2003.
Hospital Emergency Departments: Crowded Conditions Vary among Hospitals and Communities. GAO-03-460. Washington, D.C.: March 14, 2003.
Homeland Security: New Department Could Improve Coordination, but Transferring Control of Certain Public Health Programs Raises Concerns. GAO-02-954T. Washington, D.C.: July 16, 2002.
Homeland Security: New Department Could Improve Biomedical R&D Coordination but May Disrupt Dual-Purpose Efforts. GAO-02-924T. Washington, D.C.: July 9, 2002.
Homeland Security: New Department Could Improve Coordination but May Complicate Priority Setting. GAO-02-893T. Washington, D.C.: June 28, 2002.
Homeland Security: New Department Could Improve Coordination but May Complicate Public Health Priority Setting. GAO-02-883T. Washington, D.C.: June 25, 2002.
Bioterrorism: The Centers for Disease Control and Prevention’s Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001.
Bioterrorism: Review of Public Health Preparedness Programs. GAO-02-149T. Washington, D.C.: October 10, 2001.
Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 9, 2001.
Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001.
Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001.
West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000.
Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 14, 1999.
Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999.
In the fall of 2001, letters containing anthrax spores were mailed to news media personnel and congressional officials, leading to the first cases of anthrax infection related to an intentional release of anthrax in the United States. Outbreaks of anthrax infection were concentrated in six locations, or epicenters, in the country. An examination of the public health response to the anthrax incidents provides an important opportunity to apply lessons learned from that experience to enhance the nation's preparedness for bioterrorism. Because of its interest in bioterrorism preparedness, Congress asked GAO to review the public health response to the anthrax incidents. Specifically, GAO determined (1) what was learned from the experience that could help improve public health preparedness at the local and state levels and (2) what was learned that could help improve public health preparedness at the federal level and what steps have been taken to make those improvements. Local and state public health officials in the epicenters of the anthrax incidents identified strengths in their responses as well as areas for improvement. These officials said that although their preexisting planning efforts, exercises, and previous experience in responding to emergencies had helped promote a rapid and coordinated response, problems arose because they had not fully anticipated the extent of coordination needed among responders and they did not have all the necessary agreements in place to put the plans into operation rapidly. Officials also reported that communication among response agencies was generally effective but public health officials had difficulty reaching clinicians to provide them with guidance. In addition, local and state officials reported that the capacity of the public health workforce and clinical laboratories was strained and that their responses would have been difficult to sustain if the incidents had been more extensive. Officials identified three general lessons for public health preparedness: the benefits of planning and experience; the importance of effective communication, both among responders and with the general public; and the importance of a strong public health infrastructure to serve as the foundation for responses to bioterrorism or other public health emergencies. The experience of responding to the anthrax incidents showed aspects of federal preparedness that could be improved. The Centers for Disease Control and Prevention (CDC) was challenged to both meet heavy resource demands from local and state officials and coordinate the federal public health response in the face of the rapidly unfolding incidents. CDC has said that it was effective in its more traditional capacity of supporting local response efforts but was not fully prepared to manage the federal public health response. CDC experienced difficulty in managing the voluminous amount of information coming into the agency and in communicating with public health officials, the media, and the public. In addition to straining CDC's resources, the anthrax incidents highlighted both shortcomings in the clinical tools available for responding to anthrax, such as vaccines and drugs, and a lack of training for clinicians in how to recognize and respond to anthrax. CDC has taken steps to implement some improvements. 
These include creating the Office of Terrorism Preparedness and Emergency Response within the Office of the Director, creating an emergency operations center, enhancing the agency's communication infrastructure, and developing databases of information and expertise on the biological agents considered likely to be used in a terrorist attack. CDC has also been working with other federal agencies and private organizations to develop better clinical tools and increase training for medical care professionals.
Millions of adolescents in this country work to earn spending money, gain responsibility and independence, help their parents financially, or enhance their educational experience. Although these children work in all different industries, those working in agriculture as migrant or seasonal workers (those constantly on the move to stay employed or those who are only able to find intermittent employment) or whose parents work as migrant and seasonal workers may face economic, social, and educational challenges that distinguish them from children working in other industries. Over the years, commissions, farmworker advocates, and policymakers have commented on the conditions of hired agricultural workers. Although the exact number of workers in agriculture is difficult to estimate, the Commission on Agricultural Workers in 1992 reported that the United States had about 2.5 million hired agricultural workers. Other sources report that the majority of hired agricultural workers work in producing crops, such as fruits and vegetables, and in horticulture. Even though defining agriculture is difficult, it is generally acknowledged to be a high-hazard industry; in 1995, the incidence rate (the number of injuries and illnesses for every 100 workers) for agriculture was 9.7, higher than private industry’s in general (8.1), and third in severity behind manufacturing (11.6) and construction (10.6). Many federal and state agencies are responsible for enforcing laws that protect workers—including children—in agriculture. The Department of Labor’s Wage and Hour Division (WHD) is responsible for enforcing the Fair Labor Standards Act (FLSA), the federal law that establishes child labor and other labor standards (for example, the minimum wage) governing employers engaged in interstate commerce. WHD is also responsible for enforcing the Migrant and Seasonal Agricultural Worker Protection Act (MSPA), which governs housing, transportation, and other work conditions for agricultural workers. In addition, state labor departments are responsible for enforcing their own child labor and other laws that apply to children and others working in agriculture. Labor’s Occupational Safety and Health Administration (OSHA)—along with its state counterparts—is generally responsible for enforcing safety and health standards for workers of all ages in all industries, although in 1997 Labor transferred some of OSHA’s authority over agricultural employers’ provision of temporary housing and field sanitation to WHD. The Environmental Protection Agency (EPA) and state agencies, under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), are responsible for protecting agricultural workers from pesticide exposure. EPA’s Worker Protection Standard, enforced by state agencies under the guidance of EPA, provides for various risk-reduction practices that cover all pesticide handlers and workers involved in cultivating and harvesting crops. This standard requires employers to follow instructions on pesticide labels that specify periods of restricted entry into fields after pesticides have been applied and the use of personal protective equipment by pesticide handlers when applying pesticides or for workers who must enter treated fields before the restricted entry time has expired. Employers must also provide other services, such as basic training on pesticide hazards, information about pesticides that have been applied, and emergency assistance for treating a worker’s illness or injury.
All agricultural employers, regardless of the size of their establishment, are required to provide these protective measures to their agricultural workers. Children are not distinguished from other workers. The standard largely excludes others who may be living on the farm premises who are not workers (such as family members of farm owners) or children of hired farmworkers who may be in the fields with their parents while the parents are working. Labor and the Department of Education also oversee billions of dollars in federal aid that helps educationally and economically disadvantaged children—including migrant and seasonal children in agriculture. While the Department of Agriculture (USDA) has no enforcement authority over agricultural employers for labor or safety and health laws that affect children or other workers, it does oversee the collection of information about selected farm characteristics such as cultivated acreage and dollar sales. In addition, the National Institute for Occupational Safety and Health (NIOSH), of the Department of Health and Human Services (HHS), conducts independent research on work place safety and health issues. As we and others have noted in the past, federal wage and safety and health protections are typically less stringent for agricultural workers—of all ages—compared with those for workers in other industries; in general, agricultural workers also receive lower hourly wages than workers in many other industries. FLSA exempts small agricultural employers (defined as those employers who did not use more than 500 days of agricultural labor, which equals about seven full-time workers, in any calendar quarter of the preceding calendar year) from paying the minimum wage to their employees. In addition, agricultural employers of all sizes are not required by FLSA to pay their workers overtime. Agricultural employers are also exempt from most safety and health standards enforced by OSHA, and OSHA is prohibited by an appropriations rider from conducting inspections on certain small agricultural employers (those who employ 10 or fewer workers and provide no temporary housing for those workers), even if it receives a complaint about unsatisfactory working conditions from a worker or if a worker is fatally injured. In other industries, an OSHA inspector must respond to a complaint and investigate work place fatalities. Several recent initiatives specifically address conditions affecting children. Executive Order 13045, for example, created a high-level task force composed of the Secretaries of Agriculture, Education, and Labor and the Administrator of EPA, among others. The task force is responsible for recommending actions to the President to reduce risks to children. In addition, in documents prepared in compliance with the Government Performance and Results Act of 1993 (the Results Act), Labor’s WHD introduced a 5-year enforcement effort targeted toward employers producing particular agricultural commodities with an emphasis on detecting violations of child labor law. EPA highlighted as a guiding principle its efforts to identify and assess environmental health risks, such as pesticides, that may affect children disproportionately and pledged to develop six centers to do such research.
Finally, Education reported that its goal is to help all children meet challenging academic standards to prepare them for responsible citizenship and further learning—as measured through improved high school attendance and graduation rates—particularly for those students at the greatest risk of school failure, such as children in migrant and seasonal agriculture. In 1998, the President also announced a national Child Labor Initiative to fight abusive child labor and enhance educational opportunities for children working in agriculture as migrant and seasonal workers. In response to this initiative, Labor’s WHD requested an additional $4 million in its fiscal year 1999 budget to increase the enforcement resources dedicated to detecting child labor violations in agriculture. Labor has also requested $5 million to develop a pilot program that would provide educational alternatives for migrant and seasonal agricultural child workers so they would stay in school. In its budget request, Education sought an additional $50 million for its Migrant Education Program (MEP) that would allow it to serve 70,000 to 100,000 more migrant children. We were asked to (1) determine, given the data available, the extent and prevalence of children (defined as anyone under 18) working in agriculture, including their injuries and fatalities; (2) describe and analyze the federal legislative protections and those in selected states for children working in agriculture; (3) assess the enforcement of these laws as they apply to children working in agriculture; and (4) identify federal educational assistance programs and describe how they address the needs of children in migrant and seasonal agriculture, focusing on those aged 14 to 17. On March 20, 1998, we provided preliminary results of this work (see GAO/HEHS-98-112R). We conducted our work in accordance with generally accepted government auditing standards between October 1997 and May 1998. To determine the prevalence of child labor in agriculture and the conditions under which these children work, we obtained and evaluated data from a variety of sources, reviewed the methodologies used to collect these data, and interviewed officials responsible for collecting these data. We explored several data sources, both public and private, to determine an estimate of the number of children employed in agriculture and the hazards they face. We reviewed information and databases from the Departments of Labor, Agriculture, Commerce, HHS, and other government agencies, such as the Consumer Product Safety Commission. For example, the Bureau of Labor Statistics in the Department of Labor is responsible for several main sources of data, including the Current Population Survey (CPS), the Survey of Occupational Injuries and Illnesses, and the Census of Fatal Occupational Injuries. Labor’s Office of the Assistant Secretary for Policy is responsible for the National Agricultural Workers Survey (NAWS). HHS’ NIOSH, a federal agency that conducts independent research on working conditions, sponsors the National Traumatic Occupational Fatalities Surveillance System and the National Electronic Injury Surveillance System, major sources of occupational fatality and injury data. We also reviewed information from private entities, such as the National Safety Council, the Association of Farmworker Opportunity Programs, the National Bureau of Economic Research, the National Farm Medicine Center, and various university studies. 
Although some of these sources had helpful information, we did not use all of them because of their methodological constraints or coverage limitations. For example, some estimates defined children as anyone younger than 22 years old. In other cases, the methodologies used for developing the estimates were based on so many assumptions that the reliability of the estimates was questionable. We decided to focus on those nationally representative data that provide broad coverage of work experience by age, including CPS, NAWS, Census of Fatal Occupational Injuries, Survey of Occupational Injuries and Illnesses, National Traumatic Occupational Fatalities Surveillance System, and National Electronic Injury Surveillance System. We reviewed previously published data, extracted data from public use files, and obtained special computer runs from the responsible agencies for key data used in this report. We extracted relevant data within the constraints of sample size and privacy considerations. NAWS has been conducted by Labor’s Office of the Assistant Secretary for Policy for about a decade. During that time the survey has evolved, making several major changes in the survey’s subject matter. The primary use of NAWS data is for describing the employment and economic situation of hired farmworkers, and not, according to Labor analysts, for estimating national totals of farmworkers or their dependents. Any estimates of this population must be derived by applying NAWS proportions to independent estimates of total farmworkers such as the estimate developed by the Commission on Agricultural Workers in 1992. We obtained from Labor a preliminary public use file of data from the survey’s inception in 1988 through 1996. Because of privacy considerations, the NAWS public use file did not contain all survey data available; it excluded personal identifiers and other information that could compromise confidentiality. Because the NAWS database has more complete information than we had in the public use file, we also requested several special tabulations of NAWS data from Labor. These tabulations help to complete the picture of the situation of hired farmworkers and their families, but often the data were too sparse to use. Although other relevant variables could be explored from NAWS, in many cases (such as ethnicity of the child or season of work), the subsamples were too small for drawing reliable inferences. For example, in one of its data collection cycles (winter), NAWS collected data from only 72 farmworkers under 18. NAWS has information about hours worked by young farmworkers during the winter data collection cycle from only 65 interviewees. When delineated by ethnicity, no category has as many as 50 cases, the minimum recommended by NAWS analysts as a basis for computations. Such a distinction could be important because foreign-born hired workers make up less than half of all young farmworkers overall but constitute three-quarters of the young farmworkers interviewed during the (combined) fall and winter data collection cycles, and the vast majority of foreign-born hired farmworkers were not enrolled in school. To describe and analyze the legislative protections for children in agriculture at the federal level and in three states—California, Florida, and Vermont—we obtained pertinent laws and reviewed key provisions covering children and others working in agriculture. 
During on-site interviews with federal and state enforcement officials in Washington, D.C.; California; and Florida and in telephone interviews with Vermont officials, we discussed the coverage of these laws and their application to children and others working in agriculture compared with those working in other industries. We reviewed the legislative history of FLSA, interviewed grower and labor representatives for their perspectives on the treatment of agricultural workers under the law, and discussed potential implications of any changes to the law. We also interviewed growers and their representatives, as well as farmworker advocates, for their views on the extent of child labor used in agriculture. We obtained additional information about protections at the state level for children working in agriculture and assessed how these laws were enforced. We selected three states—California, Florida, and Vermont—to discuss in detail states’ views on child labor in agriculture, what their laws provide, and how particular local conditions and challenges affect the enforcement of state laws. We used several criteria for choosing these states. First, we reviewed state laws to determine which state laws covered children working in agriculture. We omitted those states (such as Texas) in which the laws did not cover children because selecting such a state would not have been useful. We then reviewed USDA data to identify those states ranked highest in the number of hired farmworkers and farms and interviewed farmworker advocates for their opinions on where problems with child labor in agriculture were most severe. Using these criteria, we identified California and Florida as key agricultural states as well as states that many believed faced several challenges in detecting illegal child labor in agriculture. We selected another state—Vermont—to provide a contrast in laws and experiences with those of Florida and California. A large percentage of acreage in Vermont is farmed, and Vermont relies heavily on agriculture but has few reported hired workers. To assess the enforcement of these laws as they apply to children working in agriculture, we obtained and reviewed established policies and procedures for federal and state enforcement agencies for conducting inspections and obtained and reviewed historical enforcement statistics from federal and state agencies responsible for enforcing child labor and other safety and health laws in the agricultural industry. Through interviews with enforcement officials in Washington, D.C.; California; Florida; and Vermont, we identified issues that could affect their ability to detect illegal child labor in agriculture. To identify and describe how federal educational assistance programs address the needs of school-aged (ages 6 through 17) children working in agriculture or whose parents work in agriculture, we conducted a literature review and interviewed education and program officials to understand the academic challenges facing these children. We identified the main federal programs that provide direct assistance to these children and determined the level of program information available about the population served. 
For the two largest programs serving migrant and seasonal workers aged 14 through 17—Education’s MEP and Labor’s Migrant and Seasonal Farmworker Program (MSFWP)—we obtained and reviewed historical program data; interviewed program officials in Washington, D.C.; California; Florida; and Vermont; and reviewed key program operations to be considered when assessing the type and availability of program data and outcome measures. Although several major sources of data provide nationally representative estimates of the number of children working in agriculture, each has limitations that could result in undercounting. Data are also limited concerning aspects of such children’s working conditions and the frequency of their work-related injuries and illnesses. Nonetheless, available data indicate that children working in agriculture have more severe injuries and a disproportionate share of fatalities compared with children working in other industries. Two nationally representative sources of data on agricultural employment are CPS and NAWS. These surveys use different sampling techniques and cover different groups of workers, but both provide national estimates of children working in agriculture (see table 2.1). Estimates derived from CPS show that, on average, about 155,000 15- to 17-year-olds worked in agriculture in 1997. Most of these workers (about 116,000) were wage and salary workers (that is, hired farmworkers); about 24,000 were self-employed and 15,000 were unpaid family workers. Annual averages between 1992 and 1997 generally showed little change in the overall number of these workers. A second CPS estimate shows that in the past few years, about 300,000 of all 15- to 17-year-olds who worked at some point during the year (hired workers, self-employed, and family members) reported that they held an agricultural job the longest. This estimate comes from a yearly collection of work experience data and is distinguished from the point estimates mentioned above because it represents work experience for an entire year. The number who work at any time during the year is much higher than the number who work in any given week. CPS has limitations that probably underestimate the total number of children working in agriculture. For example, CPS collects labor force information only on individuals 15 and older; it does not collect information on workers 14 years old or younger. In addition, because CPS is a household survey that relies on address lists and for which most of the interviewing is done by telephone, certain groups are harder to interview. These could include migrants, those not living in established residences, those without ready access to telephones, and foreign-born or non-English-speaking individuals—conditions that apply to many farmworkers. The Department of Labor’s NAWS is an agricultural payroll-based survey conducted since 1988. Recent NAWS estimates indicate that, on average, about 128,500 14- to 17-year-old hired farmworkers were working in crop production from 1993 to 1996. These children make up about 7 percent of all hired farmworkers working on crops. Because of the small sample size, NAWS trend data must be interpreted carefully; however, these data show a slight increase in the number of child farmworkers from an earlier period, when about 5 percent of hired crop workers were 14 to 17 years old (about 91,000). About 70 percent of these young farmworkers are male. Moreover, NAWS data indicate that older children are more likely to work than younger children. 
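The NAWS figures above illustrate the estimation approach described earlier: the survey yields proportions, which must be applied to an independent estimate of the total hired crop workforce to produce a head count. A minimal sketch of that arithmetic follows; the workforce base shown is an assumption inferred from the report’s own numbers (128,500 is described as about 7 percent of all hired crop workers), not an official Labor estimate.

```
# Sketch of the proportion-times-base estimation approach.  The base
# figure is an assumption inferred from the numbers in this report,
# included only to show the arithmetic.
naws_share_14_to_17 = 0.07          # NAWS share of hired crop workers aged 14-17
assumed_crop_workforce = 1_835_000  # implied base, for illustration only

estimated_children = naws_share_14_to_17 * assumed_crop_workforce
print(f"Estimated 14- to 17-year-old hired crop workers: {estimated_children:,.0f}")
# About 128,450, matching the report's figure of roughly 128,500.
```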
Farmworkers interviewed for NAWS indicated that while few of their children under age 14 work, about 8 percent of their children aged 14 and 15 work, and 17 percent of their children aged 16 and 17 work, mostly at farm jobs. NAWS data also show a growing proportion of workers between 14 and 17 years old working away from their parents as unaccompanied minors. Recent NAWS estimates show them to total about 3 percent of all hired farmworkers (about 47,000) but more than a third of all 14- to 17-year-old farmworkers. This trend is consistent with the experiences of enforcement officials and farmworker advocates, who noted an increase in young men entering the country illegally without their parents to do agricultural work. Though NAWS collects detailed information about certain agricultural workers, it also has limitations. For example, NAWS focuses solely on hired crop farmworkers; thus, it includes no agricultural workers who are self-employed or unpaid family workers or those hired farmworkers working with livestock. In addition, NAWS interviews only workers 14 years of age and older. Furthermore, NAWS has an extremely complex sampling design and small sample sizes, which may lead to imprecise estimates for some individual variables such as school enrollment or employment levels for different ethnic groups of workers for different data collection cycles. As a result, NAWS data also may underreport the total number of children working in agriculture. Data documenting the hours children work and the kinds of activities they do are limited. Both CPS and NAWS collect some information about the work of children employed in agriculture; nonetheless, this information has the same limitations as the overall employment estimates. Available data, however, show that children work a substantial amount of time and their work is seasonal, physically demanding, and primarily in vegetable crops. CPS data show that about half of young agricultural workers work more than 3 months during the year, and NAWS data indicate that, on average, agricultural workers aged 14 to 17 work about 31 hours per week. Some NAWS data can be separated into three broad ethnicity categories: U.S.-born Hispanics, U.S.-born non-Hispanics, and those born outside of the United States. As a result, NAWS identifies that young foreign-born workers work somewhat longer hours than U.S.-born workers—35 hours compared with 27 hours. Neither CPS nor NAWS, however, provides information about the time of day this work takes place, so determining when these hours were worked (for instance, during school hours, early morning, or evenings) is impossible. CPS data show that children’s work is mainly seasonal, with large increases in employment during the summer months. NAWS data confirm this pattern. NAWS has three data collection cycles during the year: fall, winter, and spring/summer. NAWS data indicate that nearly twice as many young agricultural workers work in the spring/summer cycle as in the fall cycle; few work in the winter. Because the NAWS spring/summer data collection cycle extends from mid-May to the end of July, however, it is an imprecise measure of summer jobs because it includes the end of the school year. These data indicate that children are working during the seasons when school is in session. Some data are available on the general duties children perform, but these data are based on a small number of respondents and only general categories of work. 
According to NAWS, a substantial portion—about 40 percent—of young agricultural workers aged 14 to 17 work at harvesting tasks, which are generally considered to be some of the most physically demanding in crop work. According to Labor officials, harvesting tasks are activities associated with harvesting the crops, such as bending, stooping, or climbing ladders to pick crops, or carrying buckets of picked crops to transporting vehicles. No nationally representative estimates exist, however, on specific tasks children perform (such as driving tractors) for determining whether children are doing certain tasks before they are legally allowed to do so. NAWS also provides limited data on which crops children work, but these data are also based on a small number of respondents. According to NAWS, about 40 percent of the young agricultural workers work on vegetables and about 20 percent work on fruits and nuts. Agriculture is a hazardous industry, with one of the highest rates of injuries, fatalities, and lost workdays for employees generally. Available data indicate that although the relative number of injuries of children working in agriculture is not as high as that for those working in other industries, the severity tends to be greater and these children have a disproportionate number of fatalities. Although a number of data sources document injuries and illnesses to children working in agriculture, methodological constraints result in estimates that may understate injuries to and fatalities of these children. For 1992 through 1995, Bureau of Labor Statistics (BLS) data show that between 400 and 600 workers under 18 suffered work-related injuries each year while working in agriculture. In addition, recent estimates from NIOSH show that the estimated injury rate for 14- to 17-year-old workers in agriculture was 4.3 per 100 full-time-equivalent workers—less than the rate of 5.8 for 14- to 17-year-old workers in all industries. Fractures and dislocations, however, were more common in agriculture (14 percent) than in other industries (3 percent), which indicates that agricultural injuries tend to be more severe than those in other industries. Available data show that children working in agriculture account for about 25 percent of all fatalities of children working in all industries. BLS data show that between 1992 and 1996, 59 children under 18 died while working as hired agricultural workers. CPS data, however, show that 15- to 17-year-olds working as hired agricultural workers make up only 4 percent of all 15- to 17-year-old hired workers. BLS data indicate that many of these fatalities involved transportation incidents, often overturned vehicles. In addition, NIOSH reported recently that work-related deaths of children aged 16 and 17 working in agriculture accounted for about 30 percent of all work-related deaths in this age group between 1980 and 1989 (in cases for which industry information was known). Children’s exposure to pesticides also poses serious concerns. EPA, a major source of national data on children’s exposure to pesticides, is required to collect data on occupational and nonoccupational exposure to pesticides. According to EPA, between 1985 and 1992, over 750 cases of occupational exposure occurred involving children under 18, which accounted for about 4 percent of all reported cases. Our review of the past several years of records from the California and Florida pesticide incident monitoring systems, from which EPA’s data derive, shows that 1 percent or less of such exposure involved individuals under 18.
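The disproportion noted above can be made concrete with simple arithmetic: children in agriculture account for roughly 25 percent of work-related fatalities among working children but, per CPS, only about 4 percent of 15- to 17-year-old hired workers. The sketch below computes a crude overrepresentation ratio; it is illustrative only, not a formal relative-risk estimate.

```
# Crude overrepresentation index using the shares cited in this report.
share_of_child_work_fatalities = 0.25  # agriculture's share of working-child fatalities
share_of_hired_child_workers = 0.04    # agriculture's share of 15- to 17-year-old hired workers

overrepresentation = share_of_child_work_fatalities / share_of_hired_child_workers
print(f"Agricultural child workers are roughly {overrepresentation:.0f}x "
      "overrepresented among work-related child fatalities.")
# Prints roughly 6x.
```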
These databases are limited, however, and officials agreed that they may not capture all exposure, especially exposure to children. For example, the EPA database neither includes data from all states nor differentiates between exposure occurring on a farm and exposure occurring in some other location. Although the data provide indications of the hazards that agricultural work poses for children, the figures most likely understate the problem because of the difficulty of relating an injury, illness, or fatality to the work place. First, employers self-report much of the data on occupational injuries, so whether employers always report events accurately is unknown. Accuracy may be especially affected if an injury or fatality involves transient or undocumented workers or if an employer or child is not covered by applicable workers’ compensation, child labor, or safety and health laws. Second, health practitioners may have difficulty determining whether an injury to a young child is occupationally related. This is especially true of chronic injuries or illnesses from sustained exposure to pesticides. Several concerns have been raised about whether health care professionals are adequately trained to recognize the effects of pesticide exposure on children or know the appropriate questions to ask to determine whether the exposure is work related. Third, children commonly work with their hired farmworker parents on evenings or weekends but are not considered to be official employees. As a result, their injuries, illnesses, or fatalities are probably not reflected in available data. Labor and NIOSH are leading efforts to improve knowledge about farmworkers’ working conditions. These efforts should improve the overall level of information about farmworkers in general and about children’s agricultural injuries in particular, and they could lay the groundwork for nationwide programs to improve data collection and prevent such injuries. Labor’s proposed fiscal year 1999 budget includes an increase of about $800,000 in funding for NAWS. According to Labor officials, this increase was requested as a part of the President’s Child Labor Initiative and seeks to expand NAWS coverage for all agricultural crop workers, including children. The funding, if provided, would be used to double the sample size from about 2,500 interviews per year to 5,000 and refine the sampling procedure to allow easier computation of confidence intervals. Under this funding, Labor may also undertake other activities specific to obtaining detailed information about children’s work experience—such as expanding the survey to include workers under 14 or including a proportionately greater number of workers under 18 to allow for greater reliability of key data variables. Labor and NIOSH are also implementing an interagency agreement under which NIOSH will provide funding for an expanded survey that will yield additional safety and health data. Several groups have noted the need for a better understanding of the magnitude and scope of children’s agricultural injuries, improved targeted research and prevention efforts, and an assessment of the progress made over time. In the mid-1990s, representatives from a variety of public and private, academic and industrial, medical, and educational organizations formed the National Committee for Childhood Agricultural Injury Prevention. Through consensus, the Committee refined and prioritized recommendations for action.
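The planned doubling of the NAWS sample described above bears directly on the precision of estimates such as the 7-percent share of hired crop workers aged 14 to 17. The sketch below uses the simple-random-sample formula for a proportion’s margin of error to show roughly how precision scales with sample size; because NAWS uses a complex sampling design, the figures are only an illustration, not actual NAWS confidence intervals.

```
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a simple random sample proportion.

    NAWS uses a complex design, so this formula is only a rough
    illustration of how sample size affects precision.
    """
    return z * math.sqrt(p * (1 - p) / n)

p = 0.07  # e.g., the estimated share of hired crop workers aged 14-17
for n in (2_500, 5_000):
    print(f"n = {n}: about +/- {margin_of_error(p, n):.3f}")
# Doubling the sample from 2,500 to 5,000 interviews narrows the margin
# by a factor of 1/sqrt(2): roughly +/-0.010 versus +/-0.007.
```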
Working with NIOSH, the Committee produced a National Action Plan that specified 13 objectives and 43 recommended action steps for meeting those objectives. Among those recommendations was that the Congress designate NIOSH to lead an effort to establish and maintain a national system for preventing children’s agricultural injury. The National Action Plan recommended a systematic approach, including research, education, program interventions, and public policy. Subsequently, the Congress allocated $5 million in 1996 to NIOSH to support an initiative to prevent children’s agricultural injuries. This effort is envisioned as a 5-year initiative with annual funding of $5 million. The NIOSH initiative seeks to address critical data needs, such as surveillance of agriculture-related injuries, health implications of pesticide exposure, and consequences of farm injuries. This initiative will also establish an infrastructure to make better data available for developing and improving prevention efforts and encourage the use of effective prevention strategies by the private and public sectors. As part of the initiative, NIOSH is conducting or supporting research in the following areas: migrant and seasonal worker injury surveillance, risk-factor research, outcomes research, intervention strategies, migrant workers’ health, pesticide exposure in children, ergonomics, farm children’s attitudes and behaviors, and evaluation of safety and health educational programs. These research projects are limited in scope but should improve knowledge about promising strategies and may lead to improved data collection, more effective interventions, and better injury prevention programs nationwide. FLSA and state laws provide less protection for children working in agriculture than they do for children working in other industries; therefore, children may work in agriculture in settings that would be illegal in other industries. Nonetheless, FLSA’s current provisions are more protective now than when the law was first passed 60 years ago and reflect the dynamic changes that are transforming U.S. agriculture and the increased national emphasis on the safety, health, and academic achievement of children. The Congress enacted FLSA in 1938 to provide protections for children and others working in all industries. The need to impose restrictions on child labor in agriculture was recognized by President Roosevelt, who sent a message to the Congress urging it to pass legislation to, among other things, protect against “the evil of child labor” in factories and on farms. Nonetheless, 60 years after FLSA was passed, although it covers children working in both agriculture and other industries, children working in agriculture are legally permitted to work at younger ages, in more hazardous occupations, and for longer periods of time than children working in other industries. For example, a 13-year-old may not, under federal law, be employed to perform clerical work in an office but may be employed to pick strawberries in a field. A 16-year-old may not operate a power saw in a shop or a forklift in a warehouse but may operate either on a farm. Finally, under current law, a 14-year-old hired to work in a retail establishment may work only between the hours of 7 a.m. and 7 p.m. (9 p.m. in the summer) and may not work more than 18 hours in a school week or 3 hours in a school day; the same child may work an unlimited number of hours picking grapes as long as he or she is not working during school hours.
As shown in table 3.1, in agriculture, children as young as 12 years old may work in any nonhazardous occupation with their parents’ written consent or if working on a farm that employs their parent as long as the work is done outside of school hours. On small farms, children even younger than 12 may work with their parents’ written consent. In nonagricultural industries, the youngest age at which a child may work is 14 (outside of school hours) and, even then, only in specified allowable occupations. In agriculture, children who work on a farm owned by their families may work at any age. In other industries, children under 16 employed by their parents may perform any work as long as it is not in mining or manufacturing and has not been declared hazardous by the Secretary of Labor. As indicated in table 3.1, children as young as 16 may work in agriculture in any capacity, including in some occupations declared hazardous by the Secretary of Labor, such as operating certain tractors, cotton pickers, hay balers, power post drivers, trenchers or other earth-moving equipment, forklifts, or power-driven saws; or driving a bus, truck, or automobile. In nonagricultural industries, children generally may not perform such tasks until age 18. Furthermore, in agriculture, parents do not have to adhere to hazardous occupation requirements, which means a parent may allow his or her 7-year-old to operate a power saw or drive a tractor, although a parent would not be able to allow his or her 7-year-old to operate a similar machine in a nonagricultural setting. Table 3.1 illustrates that a child under 16 may generally work in agriculture for an unlimited number of hours as long as the child is not working during school hours. Conversely, in other industries, a 14- or 15-year-old child may work only a limited number of hours, both when school is in session and when it is not. Children who work for their families—in any industry—may work an unlimited number of hours. Thirty-four states have laws that provide some protections for children working in agriculture. State laws play an important role in supplementing FLSA’s protections because they may apply to those employers not covered under FLSA. Moreover, if an employer is covered under FLSA and the state laws, the more stringent provision applies. In other words, in a state with a law with provisions that are more protective than FLSA’s, the state provision would apply. Much like FLSA, however, state laws generally provide less protection to children working in agriculture than to children working in other industries. With the exception of states such as Florida (whose child labor law generally applies equally to children working in all industries), in general, state protections provided to children working in agriculture are less stringent than those for children working in other industries. Sixteen states have no protections at all for children working in agriculture, and over half of the 34 states that do have protections for children working in agriculture allow them to work more hours per day or per week than children in other industries. For example, California allows exemptions for employers operating agricultural packing plants to employ 16- and 17-year-olds during any day when school is not in session for up to 10 hours per day during peak harvest seasons.
Although these exemptions are only to be granted if they do not materially affect the safety or welfare of the children and are needed by the employer to prevent undue hardship, California labor officials said that, after an initial inspection of the employer, they generally grant all requests for such exemptions. In addition, most of these states allow children in agriculture to work in hazardous occupations at younger ages than children in other industries. Compared with FLSA, about two-thirds of the 34 states allow children to work at about the same ages (12 to 14), although several allow younger children to work in agriculture. For example, Vermont has no lower age limitation for children working in agriculture outside of school hours. Most of these states are more protective than FLSA in that they limit the number of hours a child under 16 may work in agriculture. For example, California prohibits a child under 16 from working more than 18 hours a week while school is in session. Florida prohibits a child under 16 from working more than 15 hours a week when school is in session and does not even allow 16- or 17-year-olds to work during school hours as allowed by FLSA and other states. Moreover, in California, all children who wish to work in any industry (including agriculture) must be issued a work permit verifying their age and specifying the hours they are permitted to work. If the hours specified on the permit are more restrictive than those allowed by California law, the employer is held to the terms of the permit. Over 70 percent of these 34 states provide the same protection as FLSA, or less, for agricultural child workers regarding the occupations they may perform. Several, however, have more stringent protections. For example, Florida prohibits anyone under 18 from operating or helping operate a tractor of a certain size; any trencher or earth-moving equipment; any harvesting, planting, or plowing machine; or any moving machinery (FLSA allows 16-year-old agricultural workers to operate this type of equipment). California has also instituted what it calls an agricultural “zone of danger” provision that prohibits children under 12 from working or accompanying an employed parent near unprotected water hazards, unprotected chemicals, or moving equipment. FLSA as originally enacted imposed few restrictions on the use of child labor in agriculture, probably reflecting the conditions existing in U.S. agriculture at the time. Since FLSA’s original passage, however, the Congress has repeatedly revised FLSA to strengthen protections for children working in agriculture regarding their ages, working hours, and the types of occupations they may perform. These changes accompanied dynamic changes in the U.S. agricultural industry and increased public concern for children’s safety and education. FLSA as originally enacted only prohibited children from working in agriculture during the hours they were legally required to attend school, although it provided many additional protections for children working in other industries. Several conditions existing at that time may explain why children working in agriculture were treated differently from children working in other industries: Significance of small farm production: When FLSA was passed, small and family farmers formed an important part of the U.S. agricultural industry.
Given the industry’s seasonality and instability and the interest in preserving its economic viability, restricting the use of labor, especially child labor, may have placed undue economic and other hardships on these farmers. Benefits to children of agricultural work: When FLSA was enacted, agriculture may have been considered to provide a beneficial work environment for children. In addition, because agriculture had lower levels of mechanization and use of pesticides than it does today and was performed out of doors, it may have provided a safer alternative for children than other industries. In fact, one view expressed at the time was that work on the farm was free from the moral turpitude of city sweatshops and that farm labor taught children valuable lessons and skills. Little national emphasis on academic achievement: Few compulsory education attendance requirements existed during the 1930s, and children were expected to find work at the earliest age possible. Because the use of child labor in many industries was common and accepted, children were likely to stop attending school at 14 or 15 to work or take over the family farm. Since the 1930s, several dynamic changes have taken place not only in the U.S. agricultural industry, but also in the emphasis our nation has placed on children’s health, safety, and academic achievement. During this same period, the Congress has amended FLSA on several occasions and has provided children working in agriculture with additional protections. These changes, which addressed limiting the hours and ages that children may work and the type of work that they may perform, reflected the changing views about the industry and the focus on children’s safety and academic achievement. The legislative changes include the following: Prohibition against work during school hours (1949): FLSA was expanded to prohibit children from working during school hours. Before this, children working in agriculture were only excluded from coverage “while not legally required to attend school.” In other words, children not legally required to attend school could work at any time. Under the 1949 amendment, these children, though not required to attend school under state law, were still prohibited from working as agricultural employees during school hours. Prohibition on work in hazardous occupations (1966): FLSA was expanded to prohibit children under 16 from working in various hazardous agricultural occupations. Before this amendment, children of any age could perform any agricultural occupation. This change most likely reflected the growing awareness that the agricultural industry was becoming more mechanized and was increasing the use of pesticides, which posed possibly greater dangers to young children. Prohibition on the employment of young children (1974): FLSA was expanded to prohibit the employment of children under 12 (except if working on the family farm or on a small farm with parental consent). The law also prohibited the employment of children aged 12 or 13 unless a parent consented or the employment was on the same farm on which a parent worked. Following are changes in the agricultural industry and in the importance of children’s safety and academic achievement: Decline of the small farmer: The agricultural sector as a percentage of total U.S. economic activity dropped from 27 percent in 1930 to 16 percent in 1990. Meanwhile, the number of farms declined from over 6 million in 1930 to about 2 million in 1992.
As the number of farms declined, the relative size of farms (in market value of agricultural products sold in 1982 dollars) increased substantially; the market value of a farm in 1930 was less than $10,000, but by 1992 it was over $80,000. In 1995, 6 percent of farms accounted for almost 60 percent of production. In addition, over half of the hired workforce works on farms employing more than 10 workers. Today, rural communities obtain less of their income from farms, and farmers are leasing out their acreage to large agricultural producers to reduce costs and increase production efficiencies. Enhanced national focus on safety and health of children: During these years, scientific and technological innovations have led to greater use of machinery and pesticides to protect and preserve agricultural commodities. As a result, the country became more aware of and concerned with workers’ health in general and the special needs of children. Pesticides’ effect on children has prompted much study and concern. Researchers continue to identify relationships between health problems and occupational exposure to pesticides or farm work, such as the physically demanding tasks that hired children perform, for example, kneeling or bending for long periods. Partly because of these dangers, some agricultural producers have policies not to hire anyone under 18. Greater national emphasis on children’s academic achievement: The nation has placed great importance on children’s academic achievement by establishing compulsory education requirements and seeking to improve school attendance rates; graduation rates; and reading, math, and science skills. Educators and policymakers have realized that ensuring a skilled labor force requires better preparing children for an increasingly competitive global marketplace. In addition, researchers have found that children’s working more than 20 hours a week adversely affects their educational achievement; however, according to NAWS, hired agricultural workers aged 14 to 17 work over 30 hours a week when they work. Weaknesses in current enforcement and data collection procedures limit enforcement agencies’ ability to detect all illegal child labor in agriculture. The characteristics of the agricultural industry and its workforce pose several challenges to enforcement agencies for effectively detecting violations of child labor laws. However, resources devoted to agriculture by federal and selected state enforcement agencies have declined in the past 5 years, as has the number of cases of detected agricultural child labor violations. In addition, WHD and the states lack procedures necessary for detecting illegal child labor in agriculture, and enforcement agencies are not following established coordination procedures for facilitating detection of illegal child labor in agriculture. Moreover, enforcement databases lack information on children’s involvement in many violations, and data limitations may affect WHD’s ability to assess its progress in reducing illegal child labor in agriculture.
The number of recorded inspections in agriculture by WHD, OSHA, EPA, and the states in our review has generally declined in the past 5 years, resulting in fewer opportunities to find potential child labor violations. The number of WHD inspections of the agricultural industry declined from about 5,400 in fiscal year 1993 to 3,500 in fiscal year 1997. Although these inspections accounted for about 14 percent of all inspections in the 5-year time period, the percentage of annual inspections devoted to agriculture declined. In the same period, the percentage of direct enforcement hours devoted to enforcement of child labor law by WHD in all industries remained at about 8 percent for most of the period; in fiscal year 1997, however, it was less than 6 percent. Although the decline in agricultural inspections must be viewed in light of declines in WHD enforcement resources over the decade and new responsibilities assigned to WHD, the decline in agricultural inspections was greater than the relative decline in the number of inspectors, for example. Inspections devoted to agriculture declined similarly in California, a state we reviewed. In addition, Florida and Vermont, the other states we reviewed, did not track agricultural inspections or devoted relatively few resources to agriculture in the past 5 years. OSHA and EPA, which are responsible for enforcing safety and health laws and regulations for agricultural workers, have also devoted declining resources to agriculture in the past 5 years. Although these agencies have no responsibility for detecting child labor violations, farmworker advocates have said the presence—or absence—of other enforcement agencies in agriculture affects the number of violations of all labor laws, including child labor. In addition, because enforcement agencies have established procedures calling for referrals of potential violations of respective laws, OSHA or its state counterpart, if it detected a potential child labor violation during one of its agricultural inspections, could refer the violation to WHD or the state enforcement agency. In the past 5 years, OSHA and its state counterparts conducted less than 3 percent of all their inspections in agriculture, and, while the total number of inspections OSHA conducted in all industries declined by almost 11 percent, the number conducted in agriculture declined by almost half. States, with guidance and funding from EPA, have also reduced the number of inspections conducted in agriculture—from about 11,000 in fiscal year 1993 to 7,000 in fiscal year 1997, accounting for about 15 percent of the federally funded inspections by states during this period. According to WHD officials, one of the first things inspectors do in every inspection is determine whether children are present, which means that WHD looks for violations of child labor law in every inspection. According to our review, however, WHD and state enforcement agencies detected few cases of child labor violations in the past 5 years, and the number of cases has generally declined during this period. In fact, WHD detected agricultural child labor violations in less than 1 percent of all agricultural inspections conducted between fiscal years 1993 and 1997. Recently, the Secretary of Labor said that it was difficult to know whether the decline in the number of recorded child labor violations was due to WHD’s reduced enforcement activity or a reflection of actual conditions.
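The inspection counts cited above imply similar proportional declines across agencies. A small sketch of the percent-change arithmetic follows; it uses only the figures given in the passage and makes no assumption about the corresponding decline in inspector staffing, which the passage does not quantify.

```
# Percent declines implied by the fiscal year 1993 and 1997 inspection
# counts cited above.  Figures are taken from the passage.
declines = {
    "WHD agricultural inspections": (5_400, 3_500),
    "EPA-funded state pesticide inspections": (11_000, 7_000),
}

for label, (start, end) in declines.items():
    pct = (start - end) / start * 100
    print(f"{label}: down about {pct:.0f}%")
# WHD agricultural inspections fell roughly 35 percent and EPA-funded
# state inspections roughly 36 percent over the period.
```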
As shown in table 4.2, WHD detected agricultural child labor violations in only 14 cases (involving 22 children) in fiscal year 1997 under FLSA, which was a decline from the 54 cases (involving 146 children) in fiscal year 1993. Texas had the most WHD cases (44), followed by New Mexico (24), Florida (14), California (12), and Georgia (11). The other states had six or fewer cases each. Most violations involved children too young to work, and about 40 percent of the violations involved children working in vegetable commodities (such as onions, tomatoes, and peppers); another 30 percent involved children working in grain production; 10 percent involved children working in berry planting or harvesting. Similar to the federal experience, states in our review also reported declining cases of child labor violations in agriculture (California) or reported few or no violations (Florida and Vermont). (For more information on these states’ experiences with child labor in agriculture, see app. I.) In addition, according to WHD officials, when they target enforcement activities to detect violations of child labor laws, they see little evidence of violations; the work these children are doing is often within the confines of FLSA. If FLSA’s nonagricultural protections for child labor were applied to agriculture, the number of violations found would increase, WHD officials said. Nonetheless, as a result of WHD’s first few salad bowl enforcement activities in Texas, New Mexico, and Louisiana, WHD found about 40 children working illegally in the fields, more than found in all of fiscal year 1997. The special challenges presented by the agricultural industry, its dangerous nature as evinced by high injury and fatality rates for its child workers, and its relatively larger number of child workers compared with other industries indicate an important role for enforcement agencies in detecting violations. The limited resources enforcement agencies have devoted to agricultural inspections in general and the decline in such resources in the past few years mean that inspectors must be as efficient as possible when in the field if they are to detect illegal child labor in agriculture. In addition, documented procedures must provide clear guidance to inspectors so they know what to do to detect violations. WHD and the states we reviewed, however, lack documented procedures for use during agricultural inspections to determine whether a child is too young to be working or whether a child is, in fact, working—key conditions required for demonstrating that a violation has occurred. In addition, even though WHD has established coordination procedures with other federal and state enforcement agencies for conducting joint inspections, referring potential child labor cases to appropriate agencies, and exchanging information to facilitate enforcement efforts, these procedures are not being followed routinely. Finally, it is not clear whether the criteria WHD uses for determining where and when to conduct inspections reflect the potential presence of children. The two key conditions required for inspectors to document child labor violations are that the child (1) is underage and (2) works for the employer.
Although federal and selected state enforcement agencies have general procedures that inspectors must follow for all inspections to detect and document child labor violations, these procedures do not account for the special conditions facing labor law enforcement in agriculture and therefore may be insufficient to detect illegal agricultural child labor. WHD’s Field Operations Handbook specifies policies and procedures for inspectors to follow for all inspections. For documenting child labor violations under FLSA, the handbook requires inspectors to independently verify a child’s age through a birth certificate, passport, or some other valid document to determine if the child is old enough to be working or performing a certain task. The states in our review have similar requirements. Both federal and state enforcement officials said, however, that the lack of this kind of documentation or the use of fraudulent documentation is common for children working in agriculture. In many cases, inspectors cannot find adequate documentation to independently verify a child’s age. Neither WHD nor the states we reviewed had documented procedures for instructing inspectors in handling this situation, although WHD officials said they had verbally conveyed to inspectors the importance of conducting other activities (such as interviewing workers or teachers) to independently verify a child’s age. Given the constrained and declining resources allocated to agriculture, inspectors may not be able to perform these additional activities, especially since they are not specified in official agency documentation. In these cases, inspectors would not be able to cite an employer for an FLSA child labor violation. If an employer does not have a child’s age on file as required by FLSA, an inspector may cite the employer for a record-keeping violation, which carries a maximum initial civil monetary penalty of $275. Enforcement action may end at that point if the inspector cannot independently verify the child’s age. The lack of documented procedures for additional activities for verifying the child’s age suggests that at least, in some cases, inspectors would detect a record-keeping violation rather than a child labor violation. The second condition required for inspectors to document a child labor violation is that the child is working. Children working under their parents’ payroll number or helping out before or after school and on weekends are common work arrangements in this industry. Therefore, documents such as payroll records may not reflect children’s work. These are the types of records, however, that enforcement guidance requires inspectors to examine for initially determining whether children are working. Neither WHD nor these states have issued formal documented procedures for instructing inspectors in situations in which they sense children are working at the work site such as interviewing workers off site. WHD officials said they have trained inspectors and issued informal guidance in the past to inspectors about what activities to perform during agricultural inspections to address these problems. Our review of this guidance failed to identify any such specific instruction to inspectors for detecting illegal child labor or actions inspectors should take when available information fails to identify a child’s work history. 
WHD officials said, however, that some inspectors have used videotapes to document children working, and, under the salad bowl initiative, inspectors have used still photographs to document children working. Without documented, official procedures, however, and given the scarce resources allocated to agriculture and the low incidence of detected child labor violations, little assurance exists that all inspectors are taking photographs, interviewing workers, or doing other activities necessary for systematically and consistently documenting violations. Inspectors may not be detecting violations because procedures WHD has established for facilitating coordination with other federal and state enforcement agencies are not always being followed. The patchwork of laws, the many federal and state agencies involved, the limited resources each agency has devoted to agriculture, and the characteristics of this industry make coordination and cooperation vital for detecting illegal child labor in agriculture. WHD has acknowledged the role of coordination in helping to identify child labor and other violations by establishing agreements with the state labor enforcement agencies, OSHA and its state counterparts, and the Department of Justice’s Immigration and Naturalization Service, among others. These agreements establish an understanding that cooperative efforts are to be taken to ensure that the employment conditions of agricultural workers, including children, fully comply with federal and state statutes. Among other things, the agreements call for (1) referring complaints of or suspected violations of applicable statutes, when appropriate, to the agency with jurisdiction; (2) conducting joint investigations of employers when appropriate; or (3) exchanging records and information, including information on which employers have been cited or subject to remedial or punitive sanctions. According to our interviews with federal and state officials and a review of available data, agencies’ actions fell short of the agreements’ requirements, and, in many cases, no controls were in place to alert WHD that procedures were not being followed. Enforcement officials we interviewed generally could not recall any specific cases they had referred or that had been referred to them regarding illegal child labor in agriculture. In addition, databases maintained by WHD and other enforcement agencies collect little information on referrals given or received, although at least one agreement specifies that agencies establish systems to monitor and track referrals. Moreover, several officials told us that if they were not legally responsible for looking for children during agricultural inspections, they would probably not even recognize a potential child labor case, so they would most likely not refer it. Furthermore, enforcement officials also disagreed about who has jurisdiction over particular cases or certain employers. WHD officials could not identify whether a procedure existed for determining how to handle case referrals, and at least one state enforcement agency told us that WHD hesitated to take such referrals. At least some of the agreements between WHD and state labor agencies call for joint inspections to present a unified front to employers and take advantage of the varying strengths of federal and state laws. Other than as a part of California’s Targeted Industries Partnership Program (TIPP), however, officials believe few, if any, joint inspections have been conducted by WHD and the states.
In addition, databases do not consistently identify whether inspections are conducted jointly. Even TIPP is not as unified an effort as it used to be, according to California’s state Commissioner of Labor. TIPP calls for federal and state agencies to develop their inspection agendas together and provide staff from federal and state agencies for all inspections. What actually happens, however, is that each agency develops its inspection agenda for the year and agrees to do joint inspections when possible. A comparison of the number of TIPP inspections conducted (from California’s database) with the number of inspections conducted by California’s WHD (from WHD’s database) showed that WHD performed fewer inspections than California did, even though both agencies are supposed to be involved in all TIPP inspections. A general lack of communication and exchange of information exists among WHD and OSHA, EPA, and selected state enforcement agencies. Even in TIPP, which is a key example of federal-state cooperation, individuals from both federal and state enforcement agencies involved told us of difficulties exchanging information and coordinating enforcement agendas. In addition, neither California nor Florida labor officials had been involved in any of WHD’s decisionmaking about which employers to target or when or how to conduct the inspections under the salad bowl initiative. In June 1998, WHD held a half-day stakeholders meeting in Washington, D.C., to discuss enforcement priorities for the next several years. According to several state representatives present, although this event was a positive step toward enhancing communication with the states, it appeared that WHD had already decided on its priorities because little time was allotted for state input and feedback. Inspections may not be conducted where and when children are most likely to be working, possibly resulting in the detection of fewer child labor violations. According to WHD officials, WHD targets its agricultural inspections to employers with a history of low-wage payments, those who use imported workers, or those with excessive injury rates but not necessarily to those who are suspected of employing children. In addition, WHD officials acknowledged that finding children requires inspectors to be in the fields early in the morning or on weekends, but it is not clear how many of WHD’s agricultural inspections in the past 5 years have been conducted at those times. Because staffing decisions are made by local WHD offices, it is also unknown whether WHD’s bilingual staff (over 25 percent of WHD staff, according to WHD) are involved in agricultural inspections to help communicate with workers. For example, although the salad bowl initiative has a major emphasis on identifying illegal child labor, these commodities were not chosen because of the likely presence of children, and the commodities covered under this initiative (cucumbers, lettuce, onions, garlic, and tomatoes) may not be the only ones on which children work. WHD officials said the criteria for selecting the employers and work places that produce these commodities were the low-wage nature of the work, compliance history of these types of employers, and widespread production of these commodities; therefore, many WHD local offices could be involved in the initiative. However, according to reports officials have heard, cucumber producers do typically employ families.
Prior enforcement data indicate that children also work on other commodities, such as peppers and grains, and farmworker advocates said that children probably also work in berries. Therefore, the salad bowl inspections may not be targeting the major employers of child workers. In addition, officials did not know the number of WHD's bilingual inspectors involved in the salad bowl inspections, so whether inspectors will be able to talk to workers to identify potential violations is unclear. Because the enforcement databases used by WHD and other enforcement agencies do not provide information on children's involvement in particular violations, the extent to which children are involved in other labor law violations in agriculture is underreported. FLSA requires employers to have the age of their child workers on file. If employers do not do this, they may be cited for a record-keeping violation under FLSA, the only record-keeping violation under FLSA that carries a civil monetary penalty. The information recorded on the violation would not, however, identify it as a violation involving a child. Because of the characteristics of this industry and its workforce and the lack of documented procedures for inspectors to conduct additional activities to independently verify children's ages, inspectors sometimes use this particular provision to cite employers who cannot be cited for child labor violations because of the missing records. WHD, however, cannot identify the number of record-keeping violations involving children each year. In addition, WHD's penalty database (a separate financial database that tracks all penalties assessed and collected) does not identify specific violations; therefore, WHD cannot determine the amount of penalties assessed or collected resulting from record-keeping violations involving children. The lack of information about FLSA violations involving children indicates that WHD is underestimating the amount of child labor activity that violates FLSA. Children's involvement in violations of laws other than FLSA's child labor provisions is unknown because enforcement agencies, such as WHD, OSHA, and EPA, have traditionally had a narrow view of what constitutes illegal child labor. WHD is the only federal enforcement agency required to collect any age information on individuals involved in violations, but it does so only when inspectors believe a potential child labor violation may exist. Because other agencies do not have to identify child labor violations, they do not collect age information on individuals involved in violations. These practices may obscure violations of other labor or safety and health laws involving children. If these agencies' data systems could identify the extent to which violations of these other laws involve children, enforcement efforts could be targeted to those employers or areas that systematically exploit working children. Citations issued by WHD to employers for not paying their workers the minimum wage may involve a substantial number of children. Such minimum-wage violations can involve children when employer records reflect wages paid only to a parent even though the work was actually performed not only by the parent but also by a child. In addition, unaccompanied minors (who, NAWS reported, made up a significant portion of hired farmworkers between 14 and 17 years of age) may be especially susceptible to being paid less than the minimum wage because of their youth and lack of adult supervision and protection.
NAWS reported that about 8 percent of 14- to 17-year-old hired farmworkers reported they do not receive the minimum wage. Although WHD found over 350 minimum-wage violations for agricultural employers in fiscal year 1997, WHD has no data on the number of citations, if any, issued for minimum-wage violations that involved children. If such data were available, it would not only better reflect children’s involvement in all violations, but it might also reveal the extent to which agricultural employers may purposely exploit families and their children in this way or systematically prey on younger workers. In addition, other types of labor law violations most likely involve children. Last year, WHD found more than 900 MSPA violations (MSPA requires employers to provide promised wages, adequate housing conditions, and safe transportation). OSHA found over 175 violations of employers not providing hired farmworkers adequate housing conditions, but the extent to which the violations involve children under 18 is unknown. NAWS reported that over a quarter of farmworkers had children living with them and about a tenth of farmworkers interviewed said that at least occasionally they took children 5 years of age or younger to the fields with them when they worked. Moreover, NAWS data show that about 9 percent of young agricultural workers help to apply or otherwise work with pesticides, but they are less likely to have been trained in pesticide handling than older workers. Finally, EPA rarely collects specific information on the type of violations under the use provision of the Worker Protection Standard, although state agencies may collect such data; children’s involvement in these violations, however, is not captured. The Secretary of Labor has stated that reducing illegal child labor in agriculture is a major agency priority, and, under the Results Act, WHD has committed to a 5-year effort to reduce such labor. WHD collects a considerable amount of information on its enforcement activities regarding child labor. It has several different databases in operation; however, several inconsistencies, omissions, and other limitations in these databases may affect the usefulness of the data for program oversight. WHD has committed, through the Results Act, to developing new databases by the year 2002, but it is unclear whether these new databases will address these limitations. For example, data from WHD’s enforcement database were internally inconsistent or were not comparable with other databases, which affects their usefulness for program evaluation. Data for historic yearly inspections conducted, violations detected, and children involved changed in different data runs provided by WHD. Officials said this was to be expected because the system is updated continuously. In another instance, a case that WHD headquarters data showed to be a violation was not a violation according to the local WHD office that investigated the case. Large discrepancies also existed between the historical number of inspections conducted by WHD in California and those recorded by California for TIPP. For example, 286 WHD inspections were conducted in agriculture in fiscal year 1997 compared with 455 TIPP inspections in calendar year 1997, even though WHD and California labor inspectors should be involved in all TIPP inspections. Neither federal nor state officials could sufficiently explain these discrepancies. 
Also, despite the amount of data collected by WHD, it has been unable to determine which child labor violations resulted in civil monetary penalties. According to WHD officials, its financial database (which tracks the civil monetary penalties assessed on employers for violations of FLSA, MSPA, and other laws) is not comparable with its enforcement database. If the databases were comparable, WHD could determine which violations have resulted in which penalties. WHD officials said the only way they could determine penalties for individual violations would be to manually review individual case files. This inability to compare and disaggregate types of violations with penalties is related to an issue that surfaced in 1997—that agricultural employers were being assessed lower penalties than employers in other industries for similar child labor violations. The social and economic disadvantages experienced by many children in migrant and seasonal agriculture place them at great risk of academic failure. Although both Education and Labor administer programs that target children with educational and economic disadvantages, the extent to which children involved in migrant and seasonal agricultural work participate in, or are helped by, these programs is generally unknown. Except for the programs that target only migrant and seasonal farmworkers and their children, Education and Labor’s program information does not classify participants by occupational status. Even those few programs that target children working in agriculture or children whose parents work in migrant and seasonal agriculture have limited data. Of these programs, Education’s MEP and Labor’s MSFWP are the largest ones providing services to youths in the critical ages of 14 to 17, the ages when students are most likely to drop out of school. In the case of MEP, decentralization and flexibility complicate the collection of data needed to measure results. In the case of MSFWP, the program’s focus on adults’ employment needs discourages the collection or analysis of information on children. Poverty, limited English ability, and rural and social isolation place children in migrant and seasonal agricultural work—like any other group of children affected by these social conditions—at considerable risk of academic failure. For these children, however, the difficulties associated with these social conditions are compounded by mobility and other conditions of agricultural work that result in school enrollment rates and high school completion rates among the lowest in the nation. For example, according to one source, 45 percent of migrant youths had dropped out of school, entering the full-time workforce without the credentials and skills needed to compete for any but the lowest paying jobs. Migrant children in agriculture experience two types of mobility that compound the other social problems they face. The first type concerns moves from one geographical area to another. For low-income children, particularly those who are migrant workers or children of migrant workers, schooling is frequently interrupted and school days are lost because of moves among school districts and states. Migrant children move, on average, 1.2 times a year. Such moves not only disrupt schooling, but also often prevent the development of social and community ties that can facilitate school attendance and educational achievement. The second type of mobility concerns movement in and out of schools. 
Economic pressures drive many of these children, particularly those aged 14 and older, from the schools into the fields. According to some estimates, between 169,000 and 200,000 youths were working in agricultural migrant jobs, and, of this group, almost half were living independently; that is, their families were not with them. Although girls are less likely than boys to work in agriculture, girls' schooling can nonetheless also be interrupted because they must care for other family members. Working long hours can also negatively affect the academic performance of young farmworkers. To the extent that children are working instead of attending school, they cannot benefit from school-based programs or interventions. Even if children are attending school, working too much can interfere with their learning. Research findings indicate that working more than 20 hours a week during the school year can negatively affect student achievement to a significant degree. NAWS data show, however, that many children in agriculture work 35 hours a week or more. Although some of these work hours might be during the summer, peak demand periods for agricultural work also take place during the fall and spring, when the school year begins and ends. Children in agriculture are less likely to be graduated from high school and to attend school than are other groups of youngsters, although estimates vary. For example, estimates of dropout rates ranging from 45 to 90 percent have been cited for migrant and seasonal agricultural workers. In comparison, data from the 1990 Decennial Census indicate that dropout rates of 16- to 24-year-old individuals ranged from 10.3 percent for non-Hispanic white youths to 30.5 percent for Hispanic youths. For low-income Hispanics, the rate increased to 35.5 percent, which is still far lower than the rates reported for migrant youths, although these groups have common ethnic and income characteristics. Although school attendance is a problem for all children in agriculture, it is particularly so for children of foreign-born agricultural workers. According to NAWS data, the children of farmworkers born in the United States are twice as likely as the children of foreign-born farmworkers to have been enrolled in U.S. schools in the prior 12 months. Seventy-six percent of children of U.S.-born non-Hispanic farmworkers were enrolled in school compared with 34 percent of children of foreign-born farmworkers. For children who are farmworkers, school attendance rates are even more problematic. According to NAWS, about 68 percent of 14- to 17-year-old farmworkers born in the United States were enrolled in school when they were interviewed compared with 16 percent of farmworkers in this age group born outside the United States. The years between ages 14 and 17 are critical educational years because during these years youths are most likely to drop out of school. For many of these youths, the pressures to leave school may be particularly great. Beginning around age 14, these youths become legally and competitively employable for farm work, which allows them to supplement their family's income and, in some cases, to try to achieve economic independence. Meanwhile, some of these youths, particularly those who are older and overaged for their grade, may associate school experiences with failure, and the opportunity for them to be graduated might appear remote.
For example, according to NAWS, more than 90 percent of farmworkers' children aged 13 or younger who were in school were "on grade level," that is, had completed a grade appropriate to their age, but this measure dropped to about 80 percent for 14- to 16-year-old students and to 71 percent for 17-year-old students. Education and Labor administer many programs that target educationally and economically disadvantaged children and youth. Tables 5.1 and 5.2 list major programs that provide direct educational services of about $14 billion to millions of educationally and economically disadvantaged youths of all ages, including those we defined as school aged (6 to 17). Children and youths in agriculture are eligible for services from all these programs on the same basis as others—except for those programs that specifically target children in agriculture. Program data that describe the participation of children in migrant and seasonal agriculture are available only from programs that specifically target these groups, such as MEP, the Migrant High School Equivalency Program, and MSFWP. Although Education does not collect program data on the participation of migrant children in the programs that do not specifically target migrant children, data from a national program evaluation suggest that migrant children may not receive services from many of these programs to the extent that they are eligible. A 1992 assessment found that only about half of the migrant students receiving services from MEP who were also eligible for services funded through the title I program received title I services. In addition, about 9 percent of the students receiving assistance through MEP participated in the federal bilingual education program (although about 84 percent come from families that speak little or no English), and only 5 percent of children receiving assistance from MEP were in special education (compared with about 10 percent of all school-aged children). Interviews with Education officials indicated that migrant children may continue to be underrepresented in programs that do not specifically target these children. Although programs administered by Labor collect extensive information on participants, data from these programs, except for MSFWP, do not classify participants according to whether they are in migrant or seasonal agriculture. For example, data collected by the Job Training Partnership Act's (JTPA) title II-C program for economically disadvantaged youths classify participants according to their status regarding 13 employability barriers, including substance abuse, lack of significant work history, and homelessness, and according to 19 other characteristics, including family composition, reading skill level, and preprogram hourly wage, but the data do not identify individuals who are migrant or seasonal agricultural workers. In contrast to Education's title I and special education programs, whose services are available through almost every school district nationwide, Labor's programs face accessibility issues that limit their potential to help youths in migrant and seasonal agriculture. First, to establish eligibility, youths must have records that document their citizenship or work authorization, work experience, and family income level. Second, the number of individuals who want to participate in these programs far exceeds the number that can be served. For example, funds for JTPA title II-C are available to serve only 5 to 10 percent of the eligible population.
Third, distance and transportation costs may prevent these youths from participating because services are not available in all farm communities. Finally, the design of some programs limits their ability to serve agricultural workers. For example, JTPA title II-B, the Summer Youth Employment and Training Program, offers services only in the summer, when youths are most likely to be working in agriculture. Although we do not know the extent to which migrant and seasonal farmworker youths participate in job training programs that do not specifically target them, only 8 percent of the individuals under age 22 who terminated from MSFWP in 1996 participated concurrently in any other federally funded training program while they were receiving services from MSFWP. In comparison, 19 percent of out-of-school youths served by the JTPA II-C program received concurrent services from another federal program. Of the key educational and labor programs that serve disadvantaged youths, MEP and MSFWP are the largest that target youths in migrant and seasonal agriculture. These programs, therefore, have the most potential to provide educational opportunities to these youths between the ages of 14 and 17 who may be at the greatest risk of educational failure. Program operations and associated data limitations, however, preclude measuring program results for these youths. MEP, the largest Education program that targets these youths, is a federal assistance program administered largely by the states. Within broad federal guidelines, state educational agencies (SEA) determine how funds will be redistributed to the local educational agencies (LEA) and other eligible entities and, in cooperation with LEAs, decide how MEP funds can best be used to support state and local programs to help migrant children. This decentralization and flexibility limit the ability to evaluate MEP on a national level because program goals and activities vary by site. In addition, because MEP is an assistance program, its effects cannot be separated from those of the much larger state and local efforts that it supports. The Improving America's Schools Act of 1994, which reauthorized MEP, requires recipients of MEP funds to give priority for services to eligible children who are failing, or at risk of failing, to meet the state's educational standards and to those who are currently migrant and whose education has been disrupted during the regular school year. Under these priorities, however, the parameters of state and local decisionmaking are broad. States and localities determine whether funds will be used for regular term or summer term activities, which grade levels will be served, which instructional or supportive activities will be provided and by what type of service provider, and whether funds will be used to provide separate services or combined with other funds to support common activities. As a result of this flexibility, the probability that a child will be identified as eligible for MEP services and the type of services that will be received depend largely on where the child attends school because SEAs and school districts differ in the emphasis placed on recruiting, the age group recruited, and the services provided. Typically, active outreach takes place only in areas that have received MEP funds. Some outreach efforts are aimed at particular age groups, such as preschool children or out-of-school youths.
In general, national program data from recent years indicate that proportionately more children in grades 1 to 6 are served than those in other grades. Also, although the percentage of preschool participants has increased, the percentage of secondary school participants has actually decreased. MEP's effects on educational outcomes are difficult to measure because MEP funds are used in combination with funds from state, local, and other federal programs to achieve common educational goals. The relatively small size of MEP's contribution adds to this difficulty. Estimated resources available from MEP, an average of $400 per participant, constitute only a small fraction of the resources that a participant typically receives through state and local school programs. Because of this, MEP funds are generally used to provide educational activities that improve children's achievement in the regular classroom. Usually, these activities consist of academic tutoring; before- and after-school programs; professional development activities for educators; and supplemental services such as health, social service, outreach, and coordination services. If schools meet certain criteria, they may combine MEP funds with other federal, state, and local sources to support activities that aim to improve the learning of all children in the school, including those who are not educationally disadvantaged. In school year 1995-96, 1,541 schools used MEP funds to support schoolwide efforts. No program data are available to assess state compliance with MEP's legislatively mandated service priorities or to measure MEP results. Education has collected a considerable amount of information from the states on MEP participation, staffing, and services for many years, but these data cannot be used to measure program accomplishments or states' progress in meeting national service objectives. Although the Improving America's Schools Act requires states to give priority in the MEP program to serving students who are achieving at low levels and who have moved within the academic term, Education collects no data on either of these characteristics. The lack of achievement data is understandable because of the decentralized nature of U.S. education. Collecting data on mobility, however, is feasible and would provide information to determine the extent to which states are following national priorities. Before 1993, Education routinely collected counts of MEP participants according to their status as actively or formerly migrant. As we and others have reported, this distinction is important because MEP has historically served more children classified as formerly migrant than children classified as currently migrant, suggesting that some children may have received priority because they were not mobile and therefore easier to serve. Education has begun several initiatives designed to obtain information on the academic achievement of migrant children and to increase information on the use of MEP funds. Such information will most likely facilitate program assessment in future years. By the year 2000, Education plans to collect data from each state on the results of assessments of student proficiency in meeting state standards. Results will be reported in a way that allows data to be disaggregated for various student populations, including migrant students. Thus, outcome information on the academic level of all migrant children will be available, and changes in proficiency can be measured over time.
Education's Planning and Evaluation Service is completing a congressionally mandated study of the relationship between schoolwide programming, in which funds from several programs are combined to improve services for all students, and services for migrant children. This study will address questions about possible changes in the levels of services provided to migrant children as a result of schoolwide programming. In addition, Education plans to use its standard data collection systems for collecting additional information on migrant children. This information will include counts and descriptions of services received by migrant students from funding sources other than MEP; the types of activities undertaken by school districts; information on record transfer practices; and district-level counts of migrant children in summer and regular term programs. Education also now publishes individual state profiles as part of the state title I Migrant Participation Report, providing policymakers and practitioners with state-by-state descriptions of services provided by MEP. MSFWP is a federally administered employment and training program for individuals aged 14 and older that has traditionally focused on the employment needs of adults. Reflecting this focus, MSFWP does not report information for participants aged 14 to 17 or track program results separately for this age group. Although many believe that MSFWP can play a key role in improving the employability of these youths, the focus on adults constrains the resources available for youths and the attention given to them. Consequently, Labor has neither established nor encouraged service delivery standards or outcome measures for youths under 18. Unlike other JTPA programs that serve either children or adults, MSFWP has the broad mission of serving both youths and adults. It is a job training program designed to help migrant and seasonal agricultural workers aged 14 and older obtain or retain upgraded agricultural employment or nonagricultural employment. It also aims to provide educational and support services to farmworkers and their families that contribute to their occupational development, upward mobility, and economic self-sufficiency. The program is administered through discretionary grants awarded by Labor to 52 grantees who are held accountable for complying with many uniform federal regulations and meeting national performance outcome standards. The program's procedures, operations, and outcome measures primarily reflect the employment needs of adult participants, and Labor has not developed separate requirements, guidelines, or outcome measures to gauge its effectiveness in serving youths. For example, youths must meet the same eligibility requirements as adults, even though such requirements may deter some youths from participating. This can be a particular deterrent for unaccompanied youths because they often have no receipts to document earnings or records to verify family earnings. In addition, grantees are held accountable for meeting or exceeding two national program outcome measures, regardless of the age mix of participants served. These outcome measures—placement rates and the average wage at placement—reflect employment rather than educational goals. Education-related outcomes, such as returning to full-time school, completing high school, or entering other training, are also reported for participants by grantees, but these outcomes are not used to measure program results.
These employment-related outcome measures may be more appropriate for adults than youths. Although adult and some youth participants may want jobs, academic instruction and work-related behavioral skills outcome measures may be more appropriate for most youths because their work experience is probably more limited than adults’ and the job market they will face in the future will probably be more competitive. This may be the case for out-of-school youths especially, who will probably need additional education and skills to find long-term productive employment. MSFWP resources are not adequate to serve all eligible adults, much less all eligible youths, and the lack of resources might have a larger impact on youths’ program participation than adults’. Program officials told us that MSFWP often operates as a “triage program” because resources are not adequate to serve all who are eligible. Although officials could not tell us the number of eligible individuals who were denied services or their characteristics because this information is not collected, they agreed that the demand for MSFWP services exceeds available resources. Expenditure constraints apply to all grantees, regardless of the ages of individuals served. Consequently, decisions might be made favoring the participation of more employable individuals, which might exclude those under 18. In addition to inadequate MSFWP funds, the availability of external funding limits program participation. Although grantees may receive funds from other federal, state, or local programs, almost all rely heavily on MSFWP funds. A 1994 MSFWP evaluation found that, from a sample of 18 grantees, 6 had limited in-kind or no resources other than MSFWP funds and an additional 8 grantees received external funding that made up 15 percent or less of their total budget. Four, however, received substantial external resources that made up 50 percent or more of their budget. According to most estimates, however, the need for services has increased in the last few years, while resources have not. For example, grantees in California told us that the availability of external funds, including funds from Education, has declined, while the funding level of MSFWP declined from 1995 to 1996 but has remained level since. Program officials said that limited resources seriously affected their ability to serve youths because they hesitated to divert funds from adults to youths, who might be better served in schools. They indicated that priority for services is given to adults who have less access to alternative programs and who may be more likely to benefit from the services than youths. Officials mentioned that a program set-aside, such as the one that exists for Native American youths, would allow them to provide services to youths that would not detract from the services they offer to adults. Educators, farmworker advocates, and others believe that MSFWP provides essential services for youths, particularly out-of-school youths, because program services are geographically accessible to agricultural workers and the program recognizes the special educational and economic needs of these youths (for example, basic education instruction and evening classes). The grantees we visited in California and Florida said MSFWP can play an important role in furthering the educational achievement of these youths. 
Although grantees collect age data on each participant, Labor collects and reports participation and outcome data in only three age groups: all participants under 22 (a category that combines the experiences of school-aged youths with those of young adults), those aged 22 to 44, and those aged 45 and over. Labor does not disaggregate data for individuals aged 14 to 17. Without national program information for the 14- to 17-year-old group, Labor and other groups cannot assess program involvement or outcomes for youths alone. The lack of national program or outcome data for this population may not reflect current MSFWP operations. MSFWP is authorized to serve youths as young as 14 and is serving many youths and young adults. Although no national data are available to show the number of program participants aged 14 to 17, the data that are available show that 3,657—over 30 percent of program terminees in 1996—were 21 years old or younger. Moreover, data from one grantee we visited showed that 17 percent of MSFWP participants were under 18, and, of these, 31 percent were under 16. Evidence also indicates that youths may require different training and supportive service experiences than adults. At the MSFWP sites we visited, trainers and administrators told us that young participants stayed in the program longer and required more expensive services than older participants. Others believed that teenage participants required combinations of services that were hard to provide under current guidelines. Because the needs of youths might differ from those of adults, participation and outcome data breakdowns for ages 14 to 17 may be necessary to verify whether the program is helping youths and to identify services that are most likely to affect youths positively. Such information would also be useful for determining whether a special allotment of funds set aside to serve only youths would be an appropriate way to provide services to these youths. Labor officials stated that this information may be helpful but that approval from the Office of Management and Budget would be required to alter the type of data Labor collects from grantees. In response to the President's Child Labor Initiative, Labor, in its fiscal year 1999 budget, has requested an additional $5 million from the Congress to support a pilot and demonstration project for 14- to 18-year-old dependents of migrant agricultural workers. This project aims to develop innovative strategies to decrease child labor in agriculture through economic and educational incentives, including subsidized nonagricultural employment and individualized educational opportunities, in addition to those provided by the child's assigned school, that provide credit for graduation. Because this program will most likely target youths with characteristics like those of young MSFWP participants, an analysis of MSFWP data on participation, services, and outcomes for youths in this age range could demonstrate what combinations of job training activities and supportive services have been most associated with positive educational and employment outcomes. Sixty years after the passage of FLSA, questions about the conditions of children employed as migrant and seasonal agricultural workers continue to surface.
Although basic, reliable data on the number of children working in agriculture, their duties, and the consequences to their health and safety are limited, available data indicate that these children tend to be at greater risk of serious injury and death than those employed in other industries. In addition, children hired to work in agriculture receive less protection under the law than children who work in other industries. Furthermore, weaknesses in enforcement and data collection procedures indicate that violations of child labor law may not be detected, or the violations reported may not accurately reflect the extent to which children are employed illegally. Moreover, although Labor and Education administer many programs that target educationally and economically disadvantaged children generally, we know little about whether those programs are helping children in migrant and seasonal agriculture overcome the serious educational challenges they face. Several changes could improve Labor’s detection of illegal child labor in agriculture and thus improve the protection of these children’s health, safety, and educational opportunities. The procedures WHD currently has for identifying a child’s age and employment history do not account for potentially fraudulent or missing age documentation or ambiguous employment relationships, which are common to this industry. As a result, WHD inspectors probably miss potential violations of illegal child labor in agriculture. National guidance for inspectors that specifies what activities they should conduct to address these conditions would enable WHD to detect more violations. In addition, WHD and other enforcement agencies are not taking advantage of the procedures established to facilitate enforcement, such as referring potential cases, conducting joint inspections, or exchanging information. This situation results in confusion and lost opportunities for detecting potential violations. If WHD followed these procedures and, as required in some cases, ensured that systems provided information to determine whether such procedures were being followed, it would also bolster detection of illegal child labor in agriculture by more efficiently using resources. Labor has an excellent opportunity to improve its processes with its salad bowl enforcement initiative. The issuance of documented procedures that should be followed for adequately identifying children’s ages and employment would ensure that inspectors act in a systematic, consistent way to detect illegal child labor. In addition, because a specific number of these inspections are to be conducted, WHD should be able to work with other federal and state labor agencies to conduct joint inspections, exchange information, and determine the best way to make sure these procedures are followed on an agencywide basis. We realize that establishing and following such procedures may affect the level of resources allocated to agriculture and child labor, resulting in a possible decrease in enforcement activity in other areas. Such tradeoffs, however, are inherent in establishing enforcement priorities, and Labor has already established the reduction of illegal child labor in agriculture as a key enforcement priority. The Secretary established this priority, which is indicated by the salad bowl enforcement initiative and by Labor’s fiscal year 1999 requested budget increase to enhance enforcement in agriculture. 
In that respect, Labor already plans to increase its allocation of enforcement resources to agriculture and child labor; improved guidance to inspectors and emphasis on coordination would ensure more efficient use of those resources. WHD’s reporting of violations involving children could also be improved. Methods used by WHD and others to collect data on enforcement actions understate the extent to which children are involved in the hundreds of other labor law violations, such as record-keeping and minimum-wage violations, that are detected each year. The lack of such data masks the true extent of labor law violations involving children. WHD’s inability to identify the number of FLSA child labor record-keeping violations is mainly a data problem because WHD’s data system does not identify any record-keeping violations, even though FLSA child labor record-keeping violations have a civil monetary penalty. WHD needs to establish a way to identify the number of child labor FLSA record-keeping violations detected each year to provide more complete information about the types of FLSA child labor violations as well as better reflect the level and type of child labor violations detected by WHD. Regarding the identification of other labor law violations involving children, WHD already looks for child labor in every inspection it conducts, and, according to WHD officials, inspectors will try to identify the ages of children on site and their conditions of work. Because these procedures are already in place, it would appear that for minimum wage and other labor laws under WHD’s authority, WHD inspectors may be able to obtain age information. This kind of information would help Labor and policymakers better understand the extent to which labor law violations involve children. Such information, for example, could help evaluate the validity of the view held by many that some agricultural employers systematically target children to pay them less than the minimum wage or pay a family of workers wages that do not reflect the entire family’s work. The availability of such information could also help WHD identify regulatory or enforcement actions to correct this problem. We recognize the collection of this information may cause additional work for inspectors and may result in other unanticipated difficulties. For that reason, we believe this data collection effort could be tested during salad bowl inspections. During these inspections, WHD inspectors could obtain information on violations involving any individuals under 18 to determine what resources would be needed in collecting such age information. After WHD has tested this procedure and determined the results of these activities, Labor could determine whether it would be worthwhile to collect such information for all agricultural inspections. Labor could also assess the impact of MSFWP on the educational opportunities of children in migrant and seasonal agriculture aged 14 to 17. Although the program serves children as young as 14, program administrators have traditionally focused on adults’ employment needs, which has affected the program’s ability and desire to serve children. Yet, the program may have a special role for serving children aged 14 to 17—especially those who may not be in school, who may already be working, or who cannot be served by traditional education-related programs—and, in fact, this program serves a substantial number of children and young adults. 
Local programs maintain data on participation, service provision, and outcomes for children in migrant and seasonal agriculture in this age range, but Labor collapses the data into broad age groupings (such as ages 14 to 22) when it collects the data. If Labor developed and analyzed information on youths aged 14 to 17, it would help resolve the disagreement about the program’s role in serving this age group within its broad mandate of serving both youths and adults. If the data indicate that this program plays an important role in providing services to children, it will help decisionmakers determine the most appropriate program orientation for children and adults. Changes in enforcement and data collection procedures will improve the detection and reporting of illegal child labor in agriculture; however, our review indicates that protections provided by FLSA to children working as hired migrant and seasonal agricultural workers in today’s modern agricultural environment may not be adequate. In addition, these protections may be inconsistent with the increased emphasis on the safety, health, and academic achievement of children. The rise in dominance of large agricultural producers and the associated decline in the number of small and family farms has created a new type of child labor on U.S. farms. These children or their parents work in agriculture on a migrant or seasonal basis, and, unlike in the past, most are not related through family or community ties with their employers. In addition, many young agricultural workers live independently of their families. Growing reliance on mechanization and pesticides has increased the safety and health hazards associated with agricultural work. Current laws allow children to work in agriculture at younger ages, for longer hours, and in more dangerous occupations than children working in other industries. As we have reported, children working in agriculture are more likely to have severe work-related injuries and work-related deaths than children working in other industries. Furthermore, they are less likely to be enrolled in school and less likely to be graduated from high school than other children. Given the changing character of the agriculture industry, the allowable working conditions for child agricultural workers may be contributing to the health, safety, and education problems that these children face. Considering the evolutionary changes that are transforming the agricultural industry and the increased emphasis on the safety, health, and academic achievement of children, the Congress may wish to formally reevaluate whether FLSA adequately protects children who are hired to work as migrant and seasonal farmworkers. 
To improve Labor’s detection and reporting of illegal child labor in agriculture, we recommend that the Secretary of Labor direct the Assistant Secretary of Employment Standards to take the following actions: issue national enforcement procedures specifying the actions WHD inspectors should take during agricultural inspections when documentation for verifying a child’s age is missing or potentially fraudulent or when existing documentation does not reflect a child’s possible employment; take steps to ensure that procedures specified in the existing agreements among WHD and other federal and state agencies—especially regarding referrals to and from other agencies, joint inspections, and exchange of information—are being followed and, as required in some agreements, are being recorded and tracked; develop a method for identifying the number of record-keeping violations resulting from employers not having children’s ages on file as required by FLSA; and test the feasibility of collecting data on the number of minimum-wage and other labor law violations that involve individuals under 18. We also recommend that the Secretary of Labor direct the Assistant Secretary of the Employment and Training Administration to develop and analyze data on MSFWP services and outcomes for children aged 14 to 17 to determine the number of these children served, the services provided, and the outcomes experienced by these children. We provided copies of this report to USDA, EPA, the Departments of Labor and Education, and the states included in our review for comment. EPA, Education, and the states provided technical comments to improve the clarity and accuracy of the report, which were incorporated as appropriate. USDA concurred with our recommendations (see app. II). In its response, Labor concurred with the intent of our recommendation to issue national enforcement guidance specifying the actions WHD inspectors should take during agricultural inspections to verify a child’s age or employment status (see app. III). Labor has, in fact, provided additional guidance on this matter on the regional level in at least two regions, and the Department said it will determine if additional guidance is needed. We believe this recently issued guidance includes the additional procedures necessary to better detect illegal child labor in agriculture. At this time, however, the guidance has only been distributed to particular WHD local offices. Although this represents a positive first step toward implementing our recommendation, we still believe that this guidance needs to be issued to all WHD inspectors so they can systematically and consistently take these actions to adequately detect illegal child labor in agriculture. Labor also concurred with our recommendation aimed at ensuring that coordination procedures specified in existing agreements with federal and state agencies are followed, recorded, and tracked. It said that WHD does have specific procedures for responding to and issuing case referrals and is now streamlining this process. As we reported, however, whether these procedures are followed is not always evident. Ideally, in streamlining these procedures and implementing this recommendation, WHD will focus on documenting adherence to these procedures to preclude the communication problems we detected among WHD and other agencies. 
Regarding our recommendations to develop a method for identifying the number of FLSA child labor record-keeping violations and to test the feasibility of collecting data on children's involvement in other violations, Labor acknowledged that such data may be beneficial but identified cost and the practicality of collecting such information as major issues requiring consideration. We agree that these are important issues, but given the Results Act environment that seeks to encourage data-driven, measurable goals and objectives; the emphasis WHD has placed on detecting illegal agricultural child labor; and WHD's efforts to revise its databases to better reflect enforcement activities and outcomes, we still believe that collecting this information—even on a limited basis—would enhance the agency's efforts to protect children from exploitation in the workplace. In addition, the lack of data contributes to the general lack of information about the nature, magnitude, and dynamics of illegal child labor in the United States. Only WHD, as an enforcement agency tasked with protecting children, can collect this kind of data. Although NAWS may be useful for understanding some aspects of the child labor problem, its self-reporting nature and sampling limitations make it less appropriate for examining issues concerning the illegal employment of children. Labor did not directly comment on our recommendation to develop and analyze data on MSFWP services and outcomes for children aged 14 to 17 to determine the number of these children served, services provided, and outcomes achieved by these children. Labor said, however, that this information is included in the aggregated data collected on all participants aged 14 to 22. We recognize this, and, in fact, the inability to isolate information on children aged 14 to 17 is the main reason why we are making this recommendation. By combining the experiences of youths with those of adults, Labor cannot analyze the services provided to participants under 18. Labor also raised several issues related to our characterization of WHD's enforcement efforts. For example, it disagreed with our observation that the decline in enforcement resources devoted to agriculture resulted in fewer opportunities to find potential child labor violations. Instead, Labor asserted that no direct correlation exists between the decline in resources devoted to agricultural inspections and WHD's ability to detect potential child labor violations. Although we agree that detecting illegal child labor is not solely determined by the number of inspections conducted, we know from experience that when WHD targets particular commodities or employers with additional inspection resources, it has found a substantially larger number of violations—as evidenced by the ongoing salad bowl initiative and past child labor targeting efforts. Furthermore, Labor highlighted the additional resources it has requested for fiscal year 1999 to better detect illegal child labor in agriculture, which indicates Labor's belief that increased resources are important to detecting illegal child labor. Labor also provided technical comments, which were incorporated as appropriate.
Pursuant to a congressional request, GAO reviewed: (1) the extent and prevalence of children working in agriculture, including their injuries and fatalities; (2) the federal legislative protections and those in selected states for children working in agriculture; (3) the enforcement of these laws as they apply to children working in agriculture; and (4) federal educational assistance programs and how they address the needs of children in migrant and seasonal agriculture, focusing on those aged 14 to 17. GAO noted that: (1) according to one nationally representative estimate, about 116,000 15- to 17-year-olds worked as hired agricultural workers in 1997; (2) this estimate may undercount the number of children employed in agriculture because of methodological limitations in making the estimates; (3) of all children working in agriculture, between 400 and 600 suffer work-related injuries each year; (4) between 1992 and 1996, 59 children lost their lives while working in agriculture; (5) changes to the Fair Labor Standards Act (FLSA) have resulted in more protection for children working in agriculture than when the law was first passed; (6) nevertheless, FLSA and state laws provide less protection for children working in agriculture than for children working in other industries; (7) consequently, children may work in agriculture under circumstances that would be illegal in other industries; (8) weaknesses in current enforcement and data collection procedures limit the Department of Labor's Wage and Hour Division's (WHD) ability to detect violations involving children working in agriculture; (9) enforcement activities devoted to agriculture have declined in the past 5 years, as has the number of detected cases of agricultural child labor violations; (10) WHD has not established the procedures necessary for documenting whether children are working in agriculture in violation of child labor laws, nor has it routinely followed established procedures for facilitating enforcement coordination for better detecting illegal child labor in agriculture; (11) WHD's enforcement database does not identify all child labor-related violations under FLSA, nor can WHD and other enforcement agencies identify the extent to which children are involved in other types of labor law violations; (12) the Departments of Education and Labor have many programs to improve educational opportunities for disadvantaged school-aged children; however, few of these programs specifically target migrant and seasonal agricultural child workers or children of such workers, and most collect no information on the number of such children served; and (13) even for the two largest programs that target some or all of this population, program operations and subsequent data limitations impede a national evaluation of these programs' results for this target population.
The TWIC program was established in response to several pieces of legislation and subsequent programming decisions. In November 2001, the Aviation and Transportation Security Act (ATSA) was enacted, which included a provision that requires TSA to work with airport operators to strengthen access controls to secure areas, and to consider using biometric access control systems, or similar technologies, to verify the identity of individuals who seek to enter a secure airport area. In response to ATSA, TSA established the TWIC program in December 2001. In November 2002, MTSA required the Secretary of Homeland Security to issue a maritime worker identification card that uses biometrics to control access to secure areas of maritime transportation facilities and vessels. TSA and Coast Guard decided to implement TWIC initially in the maritime domain. In addition, the Security and Accountability For Every (SAFE) Port Act of 2006 amended MTSA to direct the Secretary of Homeland Security to, among other things, implement the TWIC pilot project. Appendix II summarizes a number of key activities in the implementation of the TWIC program. In August 2006, DHS officials decided, based on significant industry comment, to implement TWIC through two separate regulations, or rules, the first of which directs the use of the TWIC as an identification credential. The card reader rule, currently under development, is expected to address how the access control technologies, such as biometric card readers, are to be used for confirming the identity of the TWIC holder against the biometric information on the TWIC. On March 27, 2009, the Coast Guard issued an ANPRM for the card reader rule. From fiscal year 2002 through 2009, the TWIC program had funding authority totaling $286.9 million. Through fiscal year 2009, $111.5 million in appropriated funds, including reprogramming and adjustments, has been provided to TWIC (see table 1). An additional $151.8 million in funding was authorized in fiscal years 2008 and 2009 through the collection of TWIC enrollment fees by TSA, and $23.6 million had been made available to pilot participants from the Federal Emergency Management Agency (FEMA) grant programs—the Port Security Grant Program and the Transit Security Grant Program. In addition, industry has spent approximately $179.9 million to purchase 1,358,066 TWICs as of September 24, 2009. The TWIC program includes several key components: Enrollment: Transportation workers are enrolled by providing biographic information, such as name, date of birth, and address, and then photographed and fingerprinted at enrollment centers. Background checks: TSA conducts background checks on each worker to ensure that individuals who enroll do not pose a known security threat. First, TSA conducts a security threat assessment that may include, for example, checks of terrorism databases or watch lists, such as TSA’s no-fly list. Second, a Federal Bureau of Investigation criminal history records check is conducted to determine whether the worker has any disqualifying criminal offenses. Third, the worker’s immigration status and prior determinations related to mental capacity are checked. Workers are to have the opportunity to appeal negative results of the threat assessment or request a waiver in certain circumstances. 
TWIC production: After TSA determines that a worker has passed the background check, the worker's information is provided to a federal card production facility, where the TWIC is personalized and then sent to the appropriate enrollment center for activation and issuance. Card activation and issuance: A worker is informed when his or her TWIC is ready and must return to an enrollment center to select a personal identification number (PIN) and obtain and activate his or her card. Once a TWIC has been activated and issued, the worker may present his or her TWIC to security officials when seeking to enter a secure area, and in the future may use biometric card readers to verify identity. Once the card is issued, it is presented at MTSA-regulated facilities and vessels in order to obtain access to secure areas of these entities. Current regulation requires that the card at a minimum be presented for visual inspection. In response to our 2006 recommendation and a SAFE Port Act requirement, TSA initiated a pilot, known as the TWIC reader pilot, in August 2008 to test TWIC-related access control technologies. This pilot is intended to test the technology, business processes, and operational impacts of deploying TWIC readers at secure areas of the marine transportation system. As such, the pilot is expected to test the viability of selected biometric card readers for use in reading TWICs within the maritime environment. It is also to test the technical aspects of connecting TWIC readers to access control systems. After the pilot has concluded, the results of the pilot are expected to inform the development of the card reader rule requiring the deployment of TWIC readers for use in controlling access at MTSA-regulated vessels and facilities. Based on the August 2008 pilot initiation date, the card reader rule is to be issued no later than 24 months from the initiation of the pilot, or by August 2010, and a report on the findings of the pilot is due 4 months prior, or by April 2010. To conduct the TWIC reader pilot, TSA was, during the course of our review, partnering with the maritime industry at four ports as well as with three vessel operations that are receiving federal grant money for TWIC implementation. The participating grantee pilot sites include the ports of Los Angeles, California; Long Beach, California; Brownsville, Texas; and the Port Authority of New York and New Jersey. In addition, vessel operation participants include the Staten Island Ferry in Staten Island, New York; Magnolia Marine Transports in Vicksburg, Mississippi; and Watermark Cruises in Annapolis, Maryland. Of these seven grant recipients, the four port grant recipients, with input from TSA and Coast Guard, have identified locations at their ports where the pilot is to be conducted, such as public berths, facilities, and vessels. The TWIC reader pilot, as initially planned, was to consist of three sequential assessments, with the results of each assessment intended to inform the subsequent ones. Table 2 below highlights key aspects of the three assessments. To address possible time constraints related to using the results of the TWIC pilot to inform the card reader rule, two key changes were made to the pilot test in 2008. First, TSA and Coast Guard inserted a round of testing called the Initial Capability Evaluation (ICE) as the first step of the initial technical testing (ITT). The intent of the ICE was to conduct an initial evaluation of readers and determine each reader's ability to read a TWIC.
Initiated in August 2008, the ICE testing resulted in a list of biometric card readers from which pilot participants can select a reader for use in the pilot rather than waiting for the entire ITT to be completed. Further, the ICE list has been used by TSA and Coast Guard to help select a limited number of readers for full functional and environmental testing. Second, TSA is no longer requiring the TWIC reader pilot to be conducted in the sequence highlighted in table 2. Pilot sites may conduct early operational assessment and system test and evaluation testing while the initial technical testing is still under way. Currently, ITT testing by TSA is under way and pilot sites are concurrently executing Early Operational Assessment (EOA) testing to varying degrees. Because of the concurrent test approach, some pilot sites may complete ST&E testing while ITT testing remains under way. TSA, the Coast Guard, and the maritime industry took several steps to meet the compliance date and address implementation-related challenges in an effort to avoid negatively impacting the flow of commerce, but experienced challenges in enrolling transportation workers and activating their TWIC cards. Planning for potential information technology system failures could have helped address one challenge by minimizing the effect of a system failure that affected TSA enrollment and activation efforts. TSA reported enrolling 1,121,461 workers in the TWIC program, or over 93 percent of the estimated 1.2 million users, as of the April 15, 2009, deadline. Although no major disruptions to port facilities or commerce occurred, TSA data shows that some workers experienced delays in receiving TWICs. TSA began enrolling maritime workers in the TWIC program in October 2007 through its network of enrollment centers, which grew to 149 centers by September 2008. In September 2008 we reported that TSA had taken steps to confront the challenge of enrolling and issuing TWICs in a timely manner to a significantly larger population of workers than was originally anticipated. For example, according to TSA officials, the TWIC enrollment systems were tested to ensure that they would work effectively and be able to handle the full capacity of enrollments during implementation. To address issues with the TWIC help desk, such as calls being abandoned and longer-than-expected call wait times, TWIC program management reported that it worked with its contractor to add additional resources at the help desk to meet call volume demand. Similarly, to counter the lack of access or parking at enrollment centers at the Port of Los Angeles, TSA's contractor opened an additional enrollment facility with truck parking access as well as extended operating hours. In addition, TSA reported that it conducted a contingency analysis in coordination with the Coast Guard to better identify the size of its target enrollee population at major ports. For example, in preparation for meeting enrollment demands at the Port of Houston, TWIC program officials updated prior estimates of maritime workers requiring TWICs for access to this port's facilities. Lastly, TSA embarked on a series of communication efforts designed to help inform and educate transportation workers about TWIC requirements and encourage compliance with TWIC. TSA's TWIC communications plan outlines a series of efforts, such as the use of fliers, Web media, and targeted presentations, to inform transportation workers and MTSA-regulated facility/vessel operators.
According to TSA officials, the resulting communication efforts contributed to the high number of TWIC enrollments and activations by the April 15, 2009, national compliance date. Based on lessons learned from its early experiences with enrollment and activation, TSA and its contractor took steps to prepare for a surge in TWIC enrollments and activations as local compliance dates approached. For example, as identified in TWIC program documentation and by port facility representatives, TSA and its contractor increased enrollment center resources, such as increasing the number of trusted agents, enrollment stations, and activation stations as needed to meet projected TWIC user demands. TSA and its contractor also utilized mobile enrollment centers and employed more flexible hours at enrollment centers in order to accommodate TWIC applicants' needs. For example, at two of the nation's largest ports, the Ports of Los Angeles and Long Beach, TSA and its contractor opened a facility dedicated entirely to TWIC activations in addition to providing additional trusted agents and extending hours of operation at enrollment centers. As a result of these efforts, TSA reported enrolling 1,121,461 workers in the TWIC program, or over 93 percent of the estimated 1.2 million users, by the April 15, 2009, deadline. On this date, the total number of TWIC cards activated and issued reached 906,956, short of the 1,121,461 enrollees by 214,505 individuals, or 19 percent. According to TSA officials, TWICs were available for 129,090, or approximately 60 percent, of these individuals but had not yet been picked up and activated by the workers. See figure 1 below for details. Although no nationwide problem occurred due to TWIC implementation, surges of activity occurred that challenged TWIC enrollment and activation efforts at some locations. For example, at the Port of Baltimore, Coast Guard and port officials stated that, despite multiple communications with TSA about instituting a self-imposed early compliance date, TSA and its contractors were not prepared to handle the increased enrollment demand brought on by the early compliance. As a result, the local fire marshal visited the enrollment center when the number of enrollees exceeded the capacity of the center. In response, TSA and its contractor enhanced their enrollment center operations in Baltimore—opening an additional enrollment center at a nearby hotel on the same day—to adapt to the surge in enrollment and activation. In another case, representatives of the New York maritime industry reported that the wait time for employees to receive their TWIC cards following enrollment rose from 6 days to between 6 and 9 weeks as the March 23, 2009, local compliance date approached for Captain of the Port Zone New York. TWIC users in New York also reported difficulty accessing records in the online TWIC database designed as a means for facility operators to verify enrollment in order to grant interim access to employees who had enrolled in the TWIC program but who had not yet received their cards. Furthermore, according to Port of Brownsville and local Coast Guard officials, the lack of resources at the Brownsville enrollment center led to long lines at the center once the local compliance date neared. Additionally, the approach used to notify TWIC applicants that their TWICs were ready for pick-up also proved problematic for Mexican workers.
Port of Brownsville officials noted that in many cases these workers have no e-mail and, since many are Mexican citizens, most hold a cell phone with an international phone number (from Mexico). As a result, according to Port of Brownsville officials, many of these enrollees were not adequately notified that their TWIC cards had arrived and were ready for pick-up and activation. In addition, thousands of TWIC enrollees experienced delays in receiving their TWICs for varying reasons. According to TSA officials and contractor reports, reasons for delayed TWIC issuance included, among others, TSA’s inability to locate enrollment records, problems with information on the TWIC cards, such as photo quality, problems with the quality of the manufactured blank cards, and incomplete applicant information required to complete the security threat assessment. Further, TWIC enrollees also experienced delays in obtaining a TWIC because they were initially determined to not be qualified for a TWIC. According to TSA records, as of July 23, 2009, almost 59,000 TWIC applicants received initial disqualification letters and over 30,000 of these applicants appealed the decision questioning the basis for the initial disqualification decision. Under TSA implementing regulations, an applicant may appeal an initial determination of threat assessment if the applicant is asserting that he or she meets the standards for the security threat assessment for which he or she is applying. Almost 25,000 (approximately 42 percent of those receiving initial disqualification letters) of the appeals resulted in an approval upon subsequent review, which suggests that some of these delays could have been avoided if additional or corrected data had been available and reviewed during the original application process. In addition, about 2,300 of the over 4,800 applicants who requested waivers from the TWIC disqualifying factors were granted them upon subsequent review. Advocacy groups, such as the National Employment Law Project (Law Project), have reported that hundreds of individuals experienced delays in receiving their TWICs and that individuals have been unable to work as a result of processing delays at TSA. The Law Project has identified at least 485 transportation workers as of June 2009 who requested assistance from it in requesting appeals or waivers from TSA following an initial determination of disqualifying offenses based on TSA’s threat assessment. According to officials at the Law Project, for the TWIC applications on which they provided assistance and approvals were granted, it took an average of 213 days between the applicant’s enrollment date and final approval for a TWIC. Furthermore, Law Project officials noted that applicants they assisted were out of work for an average of 69 days while waiting for TWIC approval after their port passed the TWIC compliance date. However, TSA could not confirm the figures presented by the Law Project officials because TSA does not track this information in the same format. For example, if a person is sent a disqualification letter and does not respond within 60 days, TSA’s system does not continue to track the enrollee’s file as an open enrollment waiting to be filled. Rather, TSA closes the file and considers the person to not have passed the threat assessment. According to agency officials, when an applicant contacts TSA after the 60-day period passes, TSA routinely reopens their case, though not required to do so, and handles the application until its conclusion. 
These types of cases often take time to resolve. Similarly, for those situations in which enrollees assert that they never received a disqualification letter and count that period as part of their reported wait time, TSA's numbers will differ as well because, according to TSA officials, they have no way to track whether or not enrollees receive these letters. Finally, a power failure on October 21, 2008, occurred at the TWIC data center at Annapolis Junction, Maryland—a government facility that processes TWIC data. The power outage caused a hardware component failure in the TWIC enrollment and activation system for which no replacement component was on hand. Consequently, data associated with individual TWICs could not be accessed or processed. As a result of this failure, (1) credential activations were halted until late November 2008 and several TWIC compliance dates originally scheduled for October 31, 2008, were postponed; and (2) the failure affected TSA's ability to reset the PINs (i.e., provide users with new PINs) on 410,000 TWIC cards issued prior to the power failure. Consequently, TSA will have to replace the cards for cardholders who forget their PINs instead of resetting these PINs. TSA does not know the full cost implications of the power failure at the data center because it is unknown how many of the 410,000 TWIC cards will need to be replaced. Moreover, TSA cannot determine how many of the TWIC cards need to be replaced until all uses for PINs are identified at facilities across the country. For example, one use that will affect the number of TWICs TSA will need to replace is dependent on the number of MTSA-regulated facilities and vessel operators that will require the use of PINs to confirm an individual's identity prior to integrating the user's TWIC into the facility's or vessel's access control system. Officials from two ports we met with stated that the PIN reset problem had caused delays in their system enrollment process, as several enrollees could not remember their PINs and needed to request new TWICs. As of August 1, 2009, TSA reported that 1,246 individuals had requested that their TWIC cards be replaced due to TSA's inability to reset the PINs. While TSA addressed the PIN reset issue by replacing TWICs free of charge, we estimate that it could cost the government $24,920 to issue new cards to these individuals and cost the industry $54,375 in lost personal and work productivity because of time related to the pick-up and activation of the new TWICs. If all 410,000 affected TWIC cards need to be replaced, it could cost the government and industry up to approximately $26 million (consistent with the roughly $64 in combined government and industry costs per replaced card implied by the figures above). If TSA had planned for a potential TWIC system failure in accordance with federal requirements in contingency planning and internal control standards, it might have minimized the effects of the system failure that occurred in October 2008. Federal guidance includes having an information technology contingency plan, disaster recovery plan, and supporting system(s) in place. The type of system failure that TSA experienced indicates that TSA did not meet federal requirements for minimal protections for federal systems, which include applying minimum security controls with regard to protecting the confidentiality, integrity, and availability of federal information systems and the information processed, stored, and transmitted by those systems. For example, TSA did not have an information technology contingency plan or disaster recovery plan in place to address a potential TWIC system failure.
To minimize the effects of losses resulting from system failures, such plans should provide procedures and capabilities for recovering a major application or facilitate the recovery of capabilities at an alternative site. Moreover, TSA did not have the capabilities or supporting systems in place for recovering the computer system that houses the TWIC data. Nor did TSA have an alternate computer system in place to minimize the effects of a TWIC system failure. The lack of an approved contingency plan has been a longstanding concern as identified by the DHS Office of Inspector General. In July 2006 the DHS Inspector General identified that a systems contingency plan for TWIC had not been approved or tested. According to TWIC program management officials, they did not previously implement an information technology contingency plan or develop a disaster recovery plan or supporting system(s) because they did not have funds to do so. Currently, TSA has no effort underway for implementing a contingency plan. However, according to TSA senior officials, they intend to initiate the development of a disaster recovery plan at the beginning of fiscal year 2010. No documentation has been provided, however, to illustrate progress in developing a disaster recovery plan. TSA has, however, identified the lack of a system to support disaster recovery as a risk and has plans to develop one by 2012. While preparing to initiate the development of a disaster recovery plan in the next year and a system to support disaster recovery by 2012 is a positive step, until such plans and system(s) are put in place, TWIC systems remain vulnerable to similar disasters. Coast Guard employed strategies to help the maritime industry meet the TWIC national compliance date while not disrupting the flow of commerce. The strategies utilized included using rolling compliance dates and a TWIC temporary equivalency. The TWIC temporary equivalency included allowing workers to gain entry to secure areas of MTSA-regulated facilities/vessels for a limited time without a TWIC by showing proof of, for example, TWIC enrollment and evidence that the individual requesting access had passed the security threat assessment. Below are several examples of the Coast Guard’s strategies. Rolling Compliance Dates. To help ensure that all MTSA-regulated facilities were in compliance by April 15, 2009, the Coast Guard required affected facilities to comply with TWIC requirements ahead of the national compliance date on a staggered basis. (See appendix III for the TWIC compliance schedule.) According to officials from Coast Guard, TSA, and DHS, in executing the rolling compliance approach, Coast Guard required ports with a lower population of TWIC users to comply first, expecting to learn from experiences at these ports prior to requiring compliance at ports with larger populations. For example, the first TWIC deadlines were established for ports in Northern New England, Boston, and Southeastern New England, where Coast Guard anticipated a lower population of TWIC users. The largest ports, which TSA believed would present more of a challenge—the Port of New York and New Jersey, and the Ports of Los Angeles and Long Beach— had TWIC program deadlines of March 23 and April 14, 2009, respectively. Together, these three ports represent 46 percent of total U.S. container volume. TWIC Temporary Equivalency. 
In accordance with a policy decision, Coast Guard allowed the use of a TWIC temporary equivalency—or documentation other than an actual TWIC—for a limited time, prior to the national compliance date, to allow TWIC applicants who had passed the security threat assessment access to secure areas of MTSA- regulated facilities/vessels. For example, in Captain of the Port Zone Corpus Christi, the local TWIC compliance enforcement date was November 28, 2008. According to a local Coast Guard official, the sector accepted either the TWIC card or proof that the individual met the temporary equivalency criteria even though they had yet to receive an actual TWIC. This approach was in line with the Coast Guard’s desire to ease the administrative burden on maritime workers. Similarly, in Captain of the Port Zone New York, the Coast Guard authorized MTSA-regulated facilities to use a temporary equivalency at their discretion for those individuals in the same situation. Individuals meeting the criteria described above were eligible to continue to access MTSA-regulated facilities until April 15, 2009. On April 1, 2009, the Coast Guard published an update to the policy decision allowing individuals who had enrolled in the TWIC program but had not received their TWIC to be eligible for access to facilities in five Captain of the Port Zones through May 2009 if they met the applicable criteria described above, which includes passing the TSA background investigation. Similarly, due to card issuance challenges and potential activation back-logs for mariners, on May 28, 2009, the Coast Guard published a new policy decision allowing all U.S.-credentialed mariners eligibility for access to specified U.S. vessels and facilities until July 15, 2009, under similar criteria for the temporary equivalency described above. The Coast Guard and port strategies also helped to enroll workers in the TWIC program by the national compliance date, and helped to minimize compliance related issues through other strategies. For example, during the first 3 days of compliance for Captain of the Port Corpus Christi, from November 28 through 30, 2008, the Coast Guard conducted 25 spot checks at various facilities, during which they inspected 550 workers. Of these, 430 (78 percent) had their TWIC cards and an additional 109 (20 percent) workers were enrolled but had yet to receive their cards. No trucks or employees were denied access for lack of a TWIC. Similarly, when Captain of the Port Zones Miami, Key West, and St. Petersburg reached their local compliance date on January 12, 2009, the Coast Guard conducted spot checks of 890 workers from January 13 through January 15, 2009. Of the 890 workers, 709 (80 percent) possessed TWIC cards, and an additional 164 workers, or 18 percent, were enrolled but had not received their cards. In addition, during compliance inspections in Captain of the Port Zone Miami, five cargo facilities were found to be noncompliant. Of the five, two were brought into compliance immediately upon identification of the compliance issue with no impact to operations, and three were ordered to suspend MTSA-related operations until they complied with the TWIC requirements. As a result of the suspensions, these facilities could not accept any additional MTSA-regulated vessels until conditions required by the Captain of the Port were met. The Coast Guard worked with the three non-compliant facilities and all were cleared to resume MTSA operations within 2 days. 
According to one port authority official, the small number of workers and trucks turned away from ports and facilities on the various compliance dates may have been attributable to various factors, such as non-TWIC holders not attempting to enter port facilities, the impact of reduced port traffic due to the downturn in the economy, or facilities providing escorts for non-TWIC holders. Maritime ports across the country also implemented different strategies for meeting their respective TWIC compliance date. Strategies included, among others, enacting compliance exercises ahead of the scheduled compliance date to help identify and address any potential implementation issues that would arise, and requiring a TWIC as part of meeting other locally mandated requirements, such as obtaining a local credential that confirms an individual’s eligibility to access a port’s facilities. While the official local compliance date for Captain of the Port Zone Baltimore was December 30, 2008, the Maryland Port Administration announced that a TWIC would be required for unescorted access to all Maryland Port Administration facilities beginning December 1, 2008, to help sensitize workers to the need to obtain a TWIC. As a result, Baltimore officials reported that most potential compliance issues were addressed in advance of the official local compliance date. As of January 15, 2009, the Port Authority of New York and New Jersey made the possession of a TWIC a prerequisite for obtaining or renewing a SeaLink Card—a local credential required by the port authority to verify which drivers are eligible to access facilities under the port authority’s jurisdiction. According to port authority officials, by the port’s March 23, 2009, local compliance date, over 7,000 of the estimated 8,000 truck drivers and International Longshoremen’s Association members that conduct ongoing business at the port had met the requirement. As a result, according to port authority officials, New York did not experience an interruption to commerce on the March 23, 2009, local compliance date. At the Ports of Los Angeles and Long Beach, a clean truck program required truckers doing business at the port to also obtain a TWIC by October 1, 2008, in order to participate in the program. As a result, the program requirement helped enroll truck drivers—a population of concern for TWIC program officials—well ahead of the national April 15, 2009, compliance date for the two ports. Although TSA has made significant progress in incorporating best practices into TWIC’s schedule for implementing the reader pilot program, weaknesses continue that limit TSA’s ability to use the schedule as a management tool to guide the pilot and accurately identify the pilot’s completion date. Moreover, developing a sound evaluation approach for collecting information on the pilot’s results could strengthen DHS’s approach to help ensure the information collected is accurate and representative of deployment conditions. As we have previously reported, the success of any program depends in part on having a reliable schedule that defines, among other things, when work activities will occur, how long they will take, and how they are related to one another. As such, the schedule is to not only provide a road map for the systematic execution of a program, but also provide the means by which to gauge progress, identify and address potential problems, and promote accountability. 
Among other things, best practice and related federal guidance call for a program schedule to be program-wide in scope, meaning that it should include the integrated breakdown of the work to be performed by both the government and its contractors over the expected life of the program. Best practices also call for the schedule to expressly identify and define the relationships and dependencies among work elements and the constraints affecting the start and completion of work elements. A well-defined schedule also helps to identify the amount of human capital and fiscal resources that are needed to execute a program. Moreover, best practices in project management include sharing documents such as the schedule with stakeholders to attain their buy-in and confirm that the schedule captures the agreed-upon activities, time estimates, and other scheduling elements needed to meet project objectives. (See, for example, GAO-09-3SP; and OMB, Capital Programming Guide V 2.0, Supplement to Office of Management and Budget Circular A-11, Part 7: Planning, Budgeting, and Acquisition of Capital Assets (Washington, D.C.: June 2006).) Taken together, this guidance identifies nine scheduling best practices:

1. capturing all activities;
2. sequencing all activities;
3. assigning resources to all activities—identifying the resources needed to complete the activities;
4. establishing the duration of all activities—determining how long each activity will take to execute;
5. integrating all activities horizontally and vertically—achieving aggregated products or outcomes by ensuring that products and outcomes associated with other sequenced activities are arranged in the right order, and dates for supporting tasks and subtasks are aligned;
6. establishing the critical path for all activities—identifying the path in the schedule with the longest duration through the sequenced list of key activities;
7. identifying float between activities—using information on the amount of time that a predecessor activity can slip before the delay affects successor activities;
8. conducting a schedule risk analysis—using statistical techniques to predict the level of confidence in meeting a project's completion date; and
9. updating the schedule using logic and durations to determine the dates for all activities—continuously updating the schedule to determine realistic start and completion dates for program activities based on current information.

See appendix IV for a more detailed explanation of each scheduling practice.
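To make two of these practices concrete, the short sketch below applies the critical path and float calculations to a small set of hypothetical pilot-preparation tasks. The task names, durations, and dependencies are illustrative assumptions only and are not drawn from TSA's actual pilot schedule.

# Illustrative only: hypothetical tasks, durations (days), and predecessors,
# listed in dependency order; not taken from the TWIC pilot schedule.
tasks = {
    "select reader":           (10, []),
    "install wiring":          (20, ["select reader"]),
    "integrate access system": (15, ["select reader"]),
    "train operators":         ( 5, ["install wiring", "integrate access system"]),
    "begin data collection":   ( 1, ["train operators"]),
}

# Forward pass: earliest finish of each task, given its predecessors.
early = {}
for name, (duration, predecessors) in tasks.items():
    early[name] = max((early[p] for p in predecessors), default=0) + duration
project_finish = max(early.values())

# Backward pass (reverse dependency order): latest finish that still lets
# the project end on time.
late = {name: project_finish for name in tasks}
for name in reversed(list(tasks)):
    duration, predecessors = tasks[name]
    for p in predecessors:
        late[p] = min(late[p], late[name] - duration)

# Float is the slack between the two passes; zero-float tasks form the critical path.
for name in tasks:
    slack = late[name] - early[name]
    marker = "  <-- on the critical path" if slack == 0 else ""
    print(f"{name:25} float = {slack:2d} days{marker}")
print(f"Project finish: day {project_finish}")

In this illustration, the wiring installation lies on the critical path while the access system integration carries 5 days of float, so only a slip in the former would move the completion date; a program-wide schedule supports exactly this kind of reasoning at a much larger scale.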
In a memo from the DHS Under Secretary for Management dated July 10, 2008, DHS endorsed the use of these practices and noted that DHS would be utilizing them as a "best practices" approach. TSA has made significant progress during the course of our review in incorporating best practices into the schedule for implementing the TWIC pilot program, although weaknesses continue to exist. Specifically, in response to limitations that we identified and shared with TSA's program office, the program office developed a new TWIC pilot integrated master schedule in March 2009, and updated it in April 2009, and again in May 2009. As figure 2 illustrates, the pilot schedule went from not meeting any of the nine scheduling best practices in September 2008 to fully addressing one of the practices, addressing seven practices to varying degrees, and not addressing one practice. According to TSA program officials, prior to GAO's first review of the schedule in September 2008, they had not followed best practices in schedule management because they did not have enough staffing resources to meet these practices. However, program officials had not developed a workforce plan to determine the number of resources needed to carry out the pilot because, according to these officials, they knew that only two TSA employees and no additional contract staff would be available to perform this work.

The four areas where TSA's schedule made the most improvement toward addressing the technical aspects of scheduling best practices include (1) sequencing all activities; (2) integrating schedule activities horizontally and vertically; (3) establishing the critical path for all activities; and (4) identifying float between activities. For example, in sequencing all activities, the activities identified in the schedule were linked to a single end milestone and pilot sites are no longer scheduled to finish submitting pilot test data on a federal holiday, December 25, 2009—Christmas Day. Furthermore, with regard to integrating the schedule horizontally and vertically, activities contained at different levels of the schedule can now be viewed in relation to each other. In addition, the schedule now identifies a critical path, which is useful for determining which activities are critical for meeting the pilot's completion date. Finally, the float time identified—or amount of time an activity can be delayed before affecting the project finish date—improved, allowing for a better assessment of the time that each activity can slip before the delay affects the project finish date. For example, one activity in the schedule went from having 249 days of float identified to 59.

While TSA has improved its technical application of program scheduling practices on the TWIC reader pilot program, as of May 2009, weaknesses remain that may adversely impact the schedule's usefulness as a management tool for presenting clear insight as to the progress in each phase of the pilot assessment. Weaknesses exist in the following areas:

Capturing all activities. The schedule does not accurately reflect all key pilot activities. For the TWIC pilot, there is no centralized, consolidated document, such as a statement of work, that captures all key activities and can be referred to in order to help assure all intended activities are completed and outcomes achieved for each phase of the pilot testing. While TSA officials acknowledge that each pilot site may take different steps in preparing for and executing the pilot, they said that the assumption applied in developing the schedule is that similar steps are being taken at each site even though each pilot has adopted varying approaches. Moreover, contrary to best practices in program management, the schedule has not been shared with and reviewed by key stakeholders at the pilot sites to capture the varying conditions and pilot-related activities at each site. Key stakeholders at the pilot sites would, for example, be able to (1) identify areas that did or did not appropriately describe the full scope of their efforts; (2) identify how the activities at their pilot sites would enable or hinder meeting the activities identified by TSA; and (3) validate the activities identified by TSA and the durations of those activities. For example, the schedule includes having each pilot site complete an environmental-related review. To ensure consistency with federal environmental and historic preservation policies and laws, it is FEMA's policy to require, for example, environmental reviews of each pilot participant in order to receive federal grant funding.
However, depending on the level of review to be conducted, it may require more or less effort, or activities, from each grant participant and FEMA to complete. However, the pilot schedule does not account for the activities required to meet the FEMA-required environmental reviews or consistently capture the amount of time such reviews would take relative to the level of review to be conducted. Without capturing all activities, TSA's schedule will be inaccurate, thus hindering its usefulness as a management tool for guiding the pilot and measuring progress.

Assigning resources to all activities. The current schedule does not fully identify the resources needed to do the work or their availability. For example, the schedule does not identify the labor, material costs, and other direct costs needed to complete key activities. Instead, resources are assigned to activities at the organization level (e.g., Vendor). TSA officials stated that they do not have complete information on or control over the required resources because TSA does not "own the resources" since pilot activities are completed by non-DHS participants, and some funding is provided through FEMA Port Security and Transit Security Grant programs. However, this should not preclude the TWIC program office from gaining an understanding of what the overall resource requirements are for completing the work. Individual stakeholders, such as pilot participants, could in part be the source of this information. Moreover, while TSA expressed concern over its ability to identify resources for the pilot in the schedule, officials at pilot sites told us that they had trouble planning for the pilot and allocating resources because they did not fully understand what the pilot was to entail, therefore making it difficult to effectively plan for and identify the needed resources.

Establishing the duration of all activities. The pilot schedule includes duration figures (that is, information on how long each activity is expected to take to perform), but they may not be reliable. According to TSA officials, target dates are discussed with participants for some activities, such as when to start a phase of testing. However, since the pilot program implementation schedule, or relevant segments of the schedule, and related updates are not shared with the pilot participants, it is not clear if the durations TSA's program office associated with each activity are realistic or up to date. For example, nearly 86 percent (259 of the 302 activities) of the activities identified in the schedule are based on a 7-day calendar that does not account for weekends or holidays. While normal operations at pilot sites may occur on a 7-day schedule, resources for conducting pilot activities, such as installing readers and associated infrastructure such as cables and computers or analyzing the results of pilot data, may not be available on the weekend. By using a 7-day schedule, the schedule inaccurately represents approximately 28 percent more days per year being available to conduct certain work than is actually available. Best practices in project management include having stakeholders agree with project plans, such as the schedule. Because the schedule is not shared with the individual pilots, responsible pilot officials have not been afforded the opportunity to comment on the viability of the 7-day schedule given available resources. Therefore, pilot participants may not have the resources, such as employees available to work on weekends, in order to meet pilot goals. As such, if an activity is defined as taking 60 days, or approximately 2 months using a 7-day calendar, the reality may be that participants work a 5-day work week and, as a result, the activity takes approximately 3 months to complete—1 month longer than scheduled.
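The rough arithmetic behind this calendar mismatch can be sketched as follows; the figures used (104 weekend days a year, a 30-day month) are simplifying assumptions for illustration only.

# Rough arithmetic behind the 7-day versus 5-day calendar mismatch
# described above; holidays are ignored for simplicity.
days_per_year     = 365
weekend_days      = 2 * 52                        # roughly 104 weekend days per year
workdays_per_year = days_per_year - weekend_days  # roughly 261 workdays

# Share of the year that a 7-day calendar counts as available but a
# 5-day workweek does not.
overstated_share = weekend_days / days_per_year
print(f"Days counted but not worked: about {overstated_share:.0%} of the year")  # ~28 percent

# The 60-workday activity from the example above.
activity_workdays    = 60
months_on_7day_basis = activity_workdays / 30            # about 2 months as scheduled
months_on_5day_basis = (activity_workdays / 5) * 7 / 30  # about 3 calendar months in practice
print(f"Scheduled: about {months_on_7day_basis:.0f} months; "
      f"realistic: about {months_on_5day_basis:.0f} months")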
TSA program management officials told us that they believe the impact of using a 7-day versus 5-day calendar is minimal since they understand their key milestones and are committed to meeting the dates they established. Moreover, according to TSA officials, while knowledge of when a task would be completed is important to TSA's management of the pilot, the level of effort (e.g., number of hours) required by the grantees or their contractors to complete the work is not. However, not having a full understanding of how long activities will take to complete has already had an adverse impact on the resource allocation at the Port of Brownsville pilot site. Port officials in Brownsville told us that to meet the date for initiating pilot testing, their contractors had to work unplanned hours to install electrical wiring and fiber optic communication cable needed for the TWIC readers to work. The contractor stated that this required overtime pay, a resource expenditure that was not planned. Therefore, although program management officials may have insight into the schedule using the 7-day approach, the cumulative effect of planning multiple activities to be completed on non-workdays increases the risk that activities will not be completed on time with available resources. Since pilot participants are working on a 5-day schedule, there is a greater risk that key program milestones will not be met, thereby perpetuating inaccuracies in the schedule and reducing its usefulness as a management and communication tool for ensuring that activities are completed as TSA intended.

Conducting a schedule risk analysis. TWIC program officials have not performed a schedule risk analysis for the pilot schedule because they do not believe it to be necessary. For the TWIC pilot, a schedule risk analysis could enable the program to model "what if" scenarios as to when and if locations such as Long Beach will complete their preliminary work and the effects that schedule changes, if any, might have on meeting the pilot reporting goal. A schedule risk analysis could also help facilitate detailed discussions between the TWIC program office at TSA and the individual pilot locations regarding task durations and expected progress. This is especially relevant for the TWIC pilot given that the schedule does not clearly articulate all of the tasks that need to be completed to carry out the pilot, or changes that may result due to the availability of funding. For example, according to TSA officials and one pilot participant, such changes included delays in FEMA's approval of pilot participants' award contracts to allow the grantees to expend grant funds. In any program that lacks a schedule risk analysis, it is not possible to reliably determine a level of confidence for meeting the completion date.
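A schedule risk analysis of the kind described above is typically run as a statistical simulation. The minimal sketch below uses made-up three-point duration estimates for four hypothetical activities to show how such an analysis yields a confidence level for a completion date; none of the activities, durations, or the 240-day target are drawn from the TWIC pilot.

import random

# Hypothetical sequential activities with (best, most likely, worst)
# duration estimates in days; values are illustrative only.
activities = [
    ("site preparation",    (20, 30, 60)),
    ("reader installation", (15, 25, 50)),
    ("data collection",     (60, 90, 150)),
    ("analysis and report", (30, 45, 90)),
]
target_days = 240  # e.g., days remaining until a reporting deadline

trials = 10_000
on_time = 0
for _ in range(trials):
    # Draw one possible duration for each activity and sum them.
    total = sum(random.triangular(low, high, mode)
                for _, (low, mode, high) in activities)
    on_time += total <= target_days

print(f"Estimated confidence of finishing within {target_days} days: {on_time / trials:.0%}")

Rerunning the simulation after a proposed change, such as a later start at one pilot site, shows how the confidence level moves, which is the kind of "what if" question the analysis is intended to answer.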
Updating the schedule using logic and durations to determine the dates for all key activities. The pilot schedule is missing several elements needed to reliably use logic and durations to continuously update the schedule and determine revised dates for all key activities. Implementing this practice is reliant upon other scheduling practices, such as capturing all activities, assigning resources to all activities, and establishing the duration of all activities. However, the TWIC pilot schedule has not yet fully addressed each of these practices. Thus, schedule updates may not result in reliable dates. Moreover, the current schedule includes date anomalies, such as identifying tasks to be started as already having started, and includes 18 activities scheduled in the past for which no actual start date has been identified. For example, the schedule indicates that three activities at the Staten Island pilot site have started on a future date yet to occur. These anomalies indicate the presence of questionable logic in the schedule.

Contrary to best practices in program management, as of August 2009, TSA had not shared the pilot schedule, or at least relevant segments of the schedule, with pilot participants—all key stakeholders whose buy-in—that is, commitment and resources—is needed to ensure that pilot goals and time frames are met. Benefits of sharing the schedule with stakeholders include, for example, confirming the activities needed to complete the pilot, associated resources, activity durations, the viability of attaining milestone dates, and potential risks for schedule slippages. Furthermore, the schedule can serve as a valuable communication tool by helping stakeholders in their individual planning efforts. According to TSA officials, they do not see the value in providing the schedule to pilot participants because it contains too much information. Further, TSA officials told us that they have not shared the schedule with pilot participants due to concerns about sensitive information related to when the pilot results will be provided for congressional review. Lastly, TSA is also concerned that the pilot participants will not have the tools, such as Microsoft Project, available to read and understand the schedule. However, sharing the schedule with pilot participants in a format readable by all can be accomplished using tools such as e-mail or by providing participants with a paper copy. Moreover, to overcome sensitivity issues, TSA could provide participants with the segment of the schedule applicable to the pilot participant and separately inform them of their impact on the overall schedule. Furthermore, having pilot participants, as stakeholders, confirm the viability of key dates and durations of activities, and illustrating the impacts that schedule slippages on any one activity have on meeting pilot goals and reporting deadlines, can enhance collaboration and communication, help participants in their individual planning efforts, and help minimize future schedule slippages. Without doing so, TSA runs the risk of continuing to manage the program based on an unreliable schedule, further delaying the development of the card reader rule and implementation of the TWIC program with biometric card readers. Since September 2008, TSA has revised its schedule for completing the TWIC reader pilot from October 13, 2009, to a year later, October 4, 2010. Consequently, TSA's current schedule indicates that the agency will not meet the April 2010 deadline for reporting to Congress on the results of the TWIC reader pilot.

Shortfalls in TWIC pilot planning have presented a challenge for TSA and the Coast Guard in ensuring that the pilot is broadly representative of deployment conditions and will yield the information needed to inform Congress and a card reader rule aimed at defining how TWICs will be used with biometric card readers.
This is in part because an evaluation plan that fully identifies the scope of the pilot and the methodology for collecting and analyzing the information resulting from the pilot has not been developed. Agency officials told us that no such evaluation plan was developed because they believe that the existing pilot documentation coupled with subject matter expertise would be sufficient to guide the pilot and no evaluation plan is needed. However, our review of the TWIC pilot highlights weaknesses that could be rectified by the development of an evaluation plan. In informing the card reader rule, the TWIC reader pilot is to, among other things, test the technology, business processes, and operational impacts required to deploy card readers at secure areas of the marine transportation system. Specifically, the testing is to assess how the TWIC performs when used in conjunction with biometric card readers and systems at maritime facilities and vessels, how the technology performs when used as part of the pilot sites' normal business processes, and to help identify the operational impacts of deploying biometric card readers based on these locations. The pilot results are to help identify the actions necessary to ensure maritime facilities and vessels can comply with the TWIC regulation that is currently being drafted, known as the card reader rule. The pilot results are also to provide information needed for developing the regulatory analysis required by the Office of Management and Budget as part of the rulemaking process. The regulatory analysis is to demonstrate that examinations of the most efficient alternatives were considered and that an evaluation of the costs and benefits—or impacts—to be borne by the government, private sector, and population at large as a result of the regulation was considered. (The regulation requiring the use of a TWIC for accessing MTSA-regulated facilities and vessels was issued on January 25, 2007.)

A sound evaluation plan is needed to guide both the pilot and the analysis of the results. At a minimum, a well-developed, sound evaluation plan contains several key elements, including (1) clear objectives, (2) standards for pilot performance, (3) a clearly articulated methodology, and (4) a detailed data analysis plan. Incorporating these elements can help ensure that the implementation of a pilot generates the performance information needed to make effective management decisions. In planning for and designing the TWIC pilot, DHS—including TSA, Coast Guard, and its Science and Technology Directorate—developed a test and evaluation master plan consisting of several documents. Together, the TWIC pilot documents address key evaluation plan elements to varying degrees. These documents are useful for identifying planned data collection methods. However, addressing several shortfalls in their planning efforts—such as omissions in the planning methodology and the absence of a data analysis plan to help guide information collection efforts—could strengthen the usefulness of the information collected through the pilot. The following discusses the extent to which key elements are addressed in the TWIC pilot program documentation.

Clear objectives. TWIC pilot documentation identified general program objectives, referred to as the program goals. TWIC program objectives include (1) conducting tests of biometric card readers and the credential authentication and validation process to evaluate the reader specification; and (2) testing the technology, business processes, and operational impacts required to deploy TWIC readers on facilities and vessels prior to issuing a final rule.
The objectives, as stated, articulate the key goals for the pilot. Identifying clear objectives for an evaluation can help ensure that the appropriate evaluation data are collected and that performance can be measured against the objectives.

Performance standards. TSA in conjunction with the Coast Guard developed standards for determining performance for the TWIC pilot, but the standards do not fully address important aspects of the pilot assessment, such as those needed to assess the business and operational impacts of using TWIC with biometric card readers. For example, the master plan identifies some operational performance requirements, such as a minimum reliability threshold, that the card reader is to meet. The plan also identifies technical requirements readers are to meet, such as meeting specific biometric standards or, for example, transaction times. However, the performance standards mostly focus on technology and do not fully identify standards for the business and operational circumstances that using TWIC with biometric card readers will demand. Business and operational circumstances include, for example, the experience a worker will have when attempting to access a secure area of an MTSA-regulated facility, additional steps a worker may need to take to successfully enter a facility, or changes to business processes to accommodate the use of TWIC with readers. Neither the master plan nor subsequent test plans identify performance standards for assessing business and operational performance. For example, there is no test for when a user presents a valid but non-functioning TWIC at an access-control point, and assessing the impact of that scenario on the flow of commerce. TSA officials stated that they had not included this test in the pilot but would consider adding it and others we identified as part of their pilot test. In addition, DHS noted that they expect to identify the business and operational impacts that occur during respective phases of the pilot. While identifying and collecting information on activities as they occur during a pilot can enhance the amount of data collected, incorporating criteria that fully address important aspects of the pilot assessment could strengthen DHS's efforts in determining to what extent the piloted methods are effective.

Clearly articulated evaluation methodology. The methodology for evaluating the TWIC pilot is not fully defined and documented, does not account for differences in pilot design, may not be representative of future plans that individual port facilities have for using TWIC, and does not provide for testing some of the known requirements under consideration for inclusion in the card reader rule. Thus, such weaknesses may adversely impact the sufficiency and reliability of the information collected from the pilot.

The unit of analysis for conducting the pilot, pilot site selection criteria, and the sampling methodology are not fully defined and documented. The unit of analysis—or the level at which the analysis is to be conducted—had not been defined prior to selecting the facilities and vessels to participate in the TWIC pilot. Specifically, while TSA and Coast Guard intended the unit of analysis to be focused on secure areas, they did not determine whether analysis of pilot test results would be conducted at the port level, facility/vessel level, or the access control point level.
As we have previously reported, defining the unit of analysis for any evaluation is particularly important because the results from such an effort will vary depending on this. With regard to the TWIC pilot, the pilot's assessment could focus on many different units of analysis. For example, the pilot could be designed to assess the results at a more aggregate level, such as accessing a secured area in its entirety, such as an entire port, facility, or vessel. Or, the pilot could focus on the use of readers based on a particular function, such as at trucking lanes or at entranceways for boarding a cruise liner. When designing an evaluation, such as a pilot, it is important to define the unit of analysis and how it may be aggregated at an early stage. This increases the likelihood that the information collected is representative of the information needed for evaluation and can be used to project similar experiences elsewhere. Moreover, as we have previously reported, confronting data collection and analysis issues during the design stage may lead to a reformulation of the questions to be addressed as part of an evaluation to ones that can be answered within the time and resources available. TSA officials told us that no specific unit of analysis, site selection criteria, or sampling methodology was developed or documented prior to selecting the facilities and vessels to participate in the TWIC pilot. According to TSA officials, they did, however, take the following factors into account when selecting grant recipients to participate in the pilot: (1) the TSA Deputy Secretary suggested including the ports of Los Angeles and Long Beach because they are large volume operations; (2) the Port Authority of New York and New Jersey was selected because of weather conditions and the great mix of traffic (e.g., cargo containers, bulk commodities, and passenger vessels); and (3) the Port of Brownsville was considered because it was in the Gulf region of the United States and it represents a smaller port. While these general factors were used for selecting the grant recipients to participate in the pilot, the selection factors did not take all evaluation factors into account, such as ensuring that certain types of facilities with specified risk rankings would be selected at each port to facilitate the comparison of pilot results between the different locations. According to TSA officials, they did not identify more specific selection criteria based on the unit of analysis to be evaluated because they believed the factors that they did consider would produce the breadth of maritime operations needed to conduct the pilot. Further, they stated that they could meet evaluation needs by subsequently identifying facilities and vessels at the pilot sites by the type of business they represented (i.e., container facility, liquid storage facility). However, the pilot documentation does not identify if and how the operations of facilities and vessels at one pilot site are to be compared with those at another site or how the pilot or subsequent evaluation approach is to compensate for the additional factors. For example, additional factors that may impact the ability to compare different sites may include the size of the operation or business processes in place.
Moreover, according to TSA officials, they now believe that because TSA and Coast Guard had to rely on volunteer MTSA-regulated facilities and vessels to participate in the pilot, they were limited in their ability to ensure the adequacy of the number and type of selected facilities and vessels for the pilot. The pilot documentation, however, does not yet identify perceived shortcomings with the selected pilot participants, methods for compensating for perceived shortcomings, or evaluation methods to be used to ensure data collected at pilot sites will be comparable and will be representative of the experience of implementing TWIC with biometric card readers across the nation. Further, the documentation does not identify the unit of analysis, define how data are to be analyzed, or how the pilot results are to be compared or contrasted between types of locations, facilities/vessels, or functions. The lack of planning documentation makes it difficult to judge the basis for pilot selection, related constraints, or the extent to which corrective actions have been subsequently applied to compensate for the earlier constraints. Given that the existing evaluation plan documentation does not identify the unit of analysis, define how data are to be analyzed, or how the pilot results are to be compared or contrasted between types of locations, facilities/vessels, or functions, there is a risk that the selected pilot sites and test methods will not result in the information needed to understand the impacts of TWIC nationwide.

Differences in pilot designs are not accounted for. The pilot test and evaluation documentation does not identify how differences in individual pilot site designs and resulting variances in the information collected from each pilot site are to be assessed. This has implications for both the technology aspect of the pilot as well as the business and operational aspect. For instance:

While TSA is applying some controls over the technologies tested at individual pilot sites, it has not identified how the pilot is to compensate for the different technologies tested at each site. For example, as part of its initial capability evaluation, TSA tested a select number of readers to ensure they met certain performance parameters. Furthermore, pilot participants were asked to choose readers that passed the initial capability evaluation. While TSA controlled the population of readers pilot participants could select from, it did not control for alterations made to readers at pilot sites to optimize reader performance or differences in the computers, software, or access control systems with which pilot sites are integrating TWIC readers. Thus, it will be difficult for TSA and the Coast Guard to extrapolate how the use of TWIC-related technologies will be expected to impact the maritime environment as a whole without applying compensating strategies to control for variances in some of these variables. For instance, by not controlling for key variables, such as how a particular site integrates readers with its existing access control system, pilot results may show that a delay related to the use of biometric card readers was incurred, but not appropriately identify the root cause of the delay (e.g., the reader itself or the integration approach).

Business and operational processes and pilot approaches are not the same at each pilot site and a methodology for compensating for the differences has not been developed, thereby complicating the assessment of the results.
For example, officials at the Port of Los Angeles said they intend to test all access points at the three MTSA-regulated facilities participating in the pilot test. In contrast, the testing approach at the Port Authority of New York and New Jersey currently includes testing one function at different facilities—such as testing a TWIC reader at 2 of 31 truck lanes at one facility and testing a turnstile in a high-volume location at a different facility—instead of all access points at each facility. Further, testing at each port will not necessarily coincide with the time of year with the highest volume of cargo or the environmental conditions for which the pilot sites were selected (e.g., New York in the cold winter months, Brownsville, Texas, during the hottest and most humid months). Without a methodology for compensating for these differences, the information collected may not be comparable or captured in a manner that can be aggregated to assess the impact of TWIC reader deployment on maritime commerce across the nation. According to DHS officials, they understand that these and other limitations exist with the pilot. However, they have decided to proceed with the pilot in this manner, collecting whatever information they can instead of all the information that is needed, because of funding issues. These funding issues include not having the resources to test for every situation they would like and not having control over how pilot participants use the dollars available for the pilot. However, pilot planning documentation does not identify the resources needed to conduct testing for the additional situations, the additional situations TSA and DHS would like to test for, or the testing that will not occur because of insufficient resources. Moreover, TSA and FEMA do have some controls in place to ensure participants use some of the grant funds for the pilot. For instance, as part of the grant process, pilot participants submitted investment justifications to FEMA for approval, which were reviewed and approved by FEMA. TSA was provided a copy of each justification and both TSA and Coast Guard reviewed the grantees' plans. Furthermore, pilot participants must submit budget and expenditure reports. Given these steps in the grant management process and coordination between FEMA and TSA, DHS could exert some control over how participants use the dollars available for the pilot.

Pilot site test designs may not be representative of future plans for using TWIC. Pilot participants are not necessarily using the technologies and approaches they intend to use in the future when TWIC readers are implemented at their sites. In accordance with best practices, pilots should be performed in an environment that is characteristic of the environment present in a broad-scale deployment. However, officials at two of the seven pilot sites told us that the technology and processes expected to be in place during the pilot will likely not be the same as will be employed in the post-pilot environment, thereby reducing the reliability of the information collected at pilot locations. For example, officials we spoke with at one pilot site told us that, during the pilot, the site intends to use a handheld reader solution, but plans to install fixed readers requiring additional investment in technology infrastructure after the pilot is complete. They are taking this approach because they want to participate in the pilot, but do not want to invest heavily in a solution for the pilot that may not work.
As a result of this approach, the information collected from this pilot participant will not be representative of the technology, processes, and cost impacts that implementation of TWIC with biometric card readers will have at this location. Moreover, use of the results captured from this pilot site may hinder the reliability of impact projections made based on this information. Officials at a third pilot site told us that they are using the cheapest solution possible for the pilot because they do not believe that the use of TWIC will ultimately be applicable to them. They said that they would, however, select a different approach if they were likely to have to implement the use of TWIC with biometric card readers.

The pilot methodology is not analyzing or testing some of the potential requirements under consideration for inclusion in the reader rule. On March 27, 2009, the Coast Guard published the Advanced Notice of Proposed Rulemaking (ANPRM) for the card reader rule. The ANPRM identifies the requirements under consideration, as defined by the Coast Guard, for deploying TWIC readers at MTSA-regulated facilities and vessels that would potentially be included in the card reader rule on using TWICs with biometric card readers. As such, the ANPRM presents some of the technology, business, and operational requirements that are being considered in developing the card reader rule. Moreover, they represent potential costs and benefits—or impacts—to be borne by the government, private sector, and population at large as a result of the regulation being considered. As such, they are representative of the characteristics that should be included in conducting the TWIC pilot to help ensure that maritime facilities and vessels to which the rule will apply can fully comply with the TWIC rule. However, our review of the ANPRM against the pilot documentation found that the pilot does not address or test some requirements under consideration for the card reader rule. Of the 27 potential requirements contained in the ANPRM that we assessed, 6 (22 percent) were being tested in the pilot, 10 (37 percent) were partially being tested, and 11 (41 percent) were not being tested (see appendix V for more detail). For example, one potential requirement in the ANPRM is that owners and operators of facilities in the highest risk group may require PINs as an additional level of security. However, the pilot does not test the use of PINs and the associated impacts the use of PINs could have on access control processes, such as increased waiting times for accessing secure areas or shipping delays. Similarly, another potential requirement being considered in the ANPRM but not tested for in the pilot includes requiring that those owners and operators using a separate physical access control system identify how they are protecting personal identity information. However, the pilot does not test for the impacts of added security on systems to prevent the disclosure of personal identity information. Such impacts could include, for example, a slowdown of system speed for processing a TWIC holder and costs associated with ensuring the actual security of the information maintained in a system. Both of these potential requirements, if implemented, could have operational, technical, and cost implications for maritime commerce. TSA officials told us that they plan to use the results of our analysis to help them identify additional requirements for testing in the pilot.
According to Coast Guard officials, they did not assess each requirement under consideration in the ANPRM against the TSA test documents. Instead, they assessed selected requirements identified in the summary table in the ANPRM. They said that they plan to supplement the information the pilot provides with data from other sources. While supplementing the information collected can be beneficial, designing the pilot to collect the most information possible about those requirements under consideration for the card reader rule could enhance TSA and Coast Guard’s understanding of the viability of certain requirements and related limitations.

Detailed data analysis plan. TSA has not developed a detailed data analysis plan to describe how the collected data is to be used to track the program’s performance and evaluate the effectiveness of using TWIC with biometric card readers. Moreover, the available plans do not identify the criteria, methodology, unit of analysis, and overall approach to be used in analyzing the pilot data to ensure that the needed information will result from the pilot. As we previously reported, a detailed analysis plan is a key feature of a well-developed, sound evaluation plan as it sets out who will do the analysis and when and how the data is to be analyzed to measure the pilot project’s performance. Because the information from the pilot is to be used to identify the impact of using TWICs with biometric card readers at maritime facilities and inform the card reader rule (including the related regulatory analysis), a detailed data analysis plan could help ensure that the implementation of the pilot generates performance information needed to make effective management decisions. Without such a plan, it will be difficult for TSA and Coast Guard to validate the results from the pilot and ensure the accuracy and use of the information. Consequently, the resulting information may not allow others—such as Congress or external parties affected by the regulation—to independently assess the results and make conclusions about the impacts—including costs and benefits—of implementing TWIC with biometric card readers.

Because the pilot may not provide all of the information needed for implementing the card reader rule and supporting regulatory analysis, Coast Guard officials told us that they would be supplementing the data collected from the TWIC pilot after the pilot is completed rather than adjusting the pilot approach to collect the information. According to Coast Guard officials, they plan to supplement TWIC pilot data by using techniques allowable under federal guidance for developing assessments in support of a federal regulation. We agree that following the federal guidance should help inform the development of the card reader rule. However, TSA and Coast Guard officials have not identified how information collected outside of the pilot is to be used as part of the evaluation methodology. As we have previously reported, defining what data is needed and how the data is to be used and assessed as part of an evaluation plan can help to ensure information needs are met and properly considered. TSA and Coast Guard could, for example, augment the information collected from the pilot by leveraging information from other ports that are already or are about to begin using TWICs with biometric card readers.
Augmenting the pilot with information from other facilities and vessels that have already implemented TWICs with biometric card readers could help TSA and the Coast Guard meet pilot objectives, and help ensure the pilot effectively informs the card reader rule. By identifying the additional information to be collected along with its source, as well as defining the approach for how the information will be used and compared, TSA and Coast Guard can strengthen their efforts to inform the card reader rule.

TSA has made significant progress in enrolling, activating, and issuing TWICs. As of September 2009, over 1.3 million maritime transportation workers have been enrolled and over 1.1 million TWICs have been activated. Consequently, the enrollment and activation phase of the program for meeting the national compliance date of April 15, 2009, has reached completion. However, the data acquired from workers during this phase of the program and in the future needs to be adequately maintained so that the program can continue uninterrupted and the security aspects of the program can be realized. Since the TWIC system has already failed once—disabling TSA’s ability to reset PINs on TWICs and causing delays in the enrollment of workers and the activation of cards—an approved information technology contingency plan, disaster recovery plan, and supporting system(s) for the computers that store TWIC-related data could help ensure the program’s continuity and effectiveness. While the DHS Inspector General identified the lack of an approved contingency plan in 2006, no steps have been taken to develop such a plan. TSA officials stated that they are planning to develop a disaster recovery plan in fiscal year 2010 and a disaster recovery system by 2012. However, until a contingency plan for TWIC systems, including a disaster recovery plan and supporting system(s) as needed, are put in place, TWIC systems remain vulnerable.

The potential security benefit of the TWIC program will not be fully realized until maritime transportation facilities install biometric card readers and integrate them with the facilities’ access control systems. The pilot test, intended to inform this phase of the program and the regulation on the use of the card readers in the future, has a number of weaknesses that could negatively affect its rigor and timely completion. Specifically, weaknesses in the pilot schedule limit its usefulness as a management tool for executing the pilot, monitoring its progress, and determining the pilot’s completion date. Until the pilot schedule is shared with pilot participants and updated to accurately reflect realistic resource and time constraints, TSA will lack the management information needed to track progress towards meeting the planned completion date and to pre-emptively identify likely slippages in the completion date.

To minimize the effects of any potential losses resulting from TWIC system failures, and to ensure that adequate processes and capabilities are in place to minimize the effects of TWIC system interruptions, we recommend that the Assistant Secretary for the Transportation Security Administration direct the TWIC program office to take the following action: develop an information technology contingency plan for TWIC systems, including the development and implementation of a disaster recovery plan and supporting systems, as required, as soon as possible.
To help ensure that the TWIC pilot schedule can be reliably used to guide the pilot and identify the pilot’s completion date, we recommend that the Assistant Secretary for the Transportation Security Administration direct the TWIC program office, in concert with pilot participants, to take the following action: fully incorporate best practices for program scheduling in the pilot schedule to help ensure that (1) all pilot activities are captured; (2) sufficient resources are assigned to all activities; (3) the durations of all activities are established and agreed upon by all stakeholders; (4) a schedule risk analysis is conducted to determine a level of confidence in meeting the planned completion date and the impact of not achieving planned activities within scheduled deadlines; and (5) the schedule is correctly updated on a periodic basis.

To ensure that the information needed to assess the technical, business, and operational impacts of deploying TWIC biometric card readers at MTSA-regulated facilities and vessels is acquired prior to the development of the card reader rule, we recommend that the Assistant Secretary for the Transportation Security Administration and Commandant of the U.S. Coast Guard direct their respective TWIC program offices to take the following two actions: develop an evaluation plan to guide the remainder of the pilot that includes performance standards, a clearly articulated evaluation methodology—including the unit of analysis and criteria—and a data analysis plan; and identify how they will compensate for areas where the TWIC reader pilot will not provide the necessary information needed to report to Congress and implement the card reader rule. The information to be collected and the approach for obtaining and evaluating information obtained through this effort should be documented as part of an evaluation plan. At a minimum, areas for further review include the potential requirements identified in the TWIC Reader Advanced Notice of Proposed Rulemaking but not addressed by the pilot. Sources of information to consider include investigating the possibility of using information resulting from the deployment of TWIC readers at non-pilot port facilities to help inform the development of the card reader rule.

We provided a draft of this report to the Secretary of Homeland Security for review and comment. DHS provided written comments on behalf of the department and the Transportation Security Administration, the United States Coast Guard, and the Federal Emergency Management Agency on November 5, 2009, which are reprinted in appendix VI. In commenting on our report, DHS stated that it concurred with three of the four recommendations and partially concurred with the other one, and identified actions planned or under way to implement them. DHS is taking steps to address our first recommendation related to information technology contingency planning for TWIC systems; however, the actions DHS reported TSA and Coast Guard have taken or plan to take do not fully address the intent of the remaining three recommendations.

With regard to our first recommendation, DHS concurred with our recommendation that TSA develop an information technology contingency plan for TWIC systems, including the development and implementation of a disaster recovery plan and supporting systems. DHS reported that TSA has taken actions to improve contingency planning and disaster recovery capabilities for TWIC-related systems.
According to DHS, such actions include adding TWIC systems enhancements, such as back-up systems (i.e., redundancy system), and plans for a system Continuity of Operations Plan (COOP) site as part of its Office of Transportation Threat Assessment and Credentialing’s infrastructure modernization effort. TSA’s actions should help enhance TSA’s ability to develop a contingency plan for TWIC systems, including a disaster recovery plan and supporting system to recover operations in the future.

DHS concurred in part with our second recommendation, that TSA, in concert with pilot participants, fully incorporate best practices for program scheduling in the pilot schedule. In its response, DHS agreed that a program schedule is a critical management tool for implementation of the TWIC reader pilot, and notes that its implementation of best practices is tailored to specifically meet the requirements relative to the complex and unique constraints of the pilot program. For example, according to DHS, it focuses its outreach and coordination efforts on the completion of key tasks when risks to the critical path are identified. However, while DHS has made progress in developing the schedule from the TSA perspective, it has not developed the schedule in concert with pilot participants, as we are recommending. As DHS notes, the voluntary nature of the pilot has allowed participants to proceed at their own pace, based on their own local priorities and procedures, making it difficult to develop and maintain accurate activity durations for management purposes. However, based on our review of the TWIC reader pilot schedule, DHS has not accounted for each participant’s pace, local priorities, and procedures. Instead, DHS, through TSA, identified the activities it deemed to be key for completing the pilot without fully understanding what each participant needs to do to accomplish the key tasks and how long it will take to complete those activities given available resources and local processes. Working individually with its pilot participants to account for program complexities should help ensure that the overall TWIC pilot schedule is informed by each participant, and that key elements—such as the critical path—identified in the schedule developed by TSA are more accurate. Moreover, as noted in our report, the TWIC pilot schedule will not contain the level of information needed for DHS to make effective management decisions despite its efforts to improve its application of scheduling practices. Therefore, additional corrective steps by DHS and TSA are needed to help ensure that the program schedule can be used as a management tool to guide the pilot and accurately identify the pilot’s completion date.

DHS also concurred with our third and fourth recommendations, that the TWIC program offices at TSA and Coast Guard (1) develop an evaluation plan to guide the remainder of the pilot that includes performance standards, a clearly articulated evaluation methodology—including the unit of analysis and criteria—and a data analysis plan; and (2) identify how the agencies will compensate for areas where the TWIC reader pilot will not provide the necessary information needed to report to Congress and implement the card reader rule. We recommended that the information to be collected and the approach for obtaining this additional information be documented as part of the evaluation plan.
Developing an evaluation plan for a pilot is a prospective endeavor to help guide the identification of needed data and data sources and methods for comparing the data and obtaining the information needed. However, it is not clear from DHS’s comments whether its proposed actions will fully address these two recommendations. As our report indicates, while TSA developed a test and evaluation master plan for the TWIC pilot, the document did not identify the business and operational data to be collected during the pilot, the performance standards for assessing the data, or the methodology for evaluating the data. To meet the intent of our recommendations, this information would need to be included in the evaluation plan prior to proceeding with the pilot to ensure that the needed data points are planned for and collected during the pilot in order to inform the mandated report to Congress on the results of the pilot. However, DHS’s comments do not indicate that it will take these steps to help inform the report to Congress or the rulemaking process for the TWIC reader rule. Instead, in its response, DHS identifies guidance that it plans to use to supplement the data gathered from the pilot. While identifying the guidance is a positive step, the guidance is not a substitute for a well-developed evaluation plan that defines the information to be collected and the approach for obtaining and analyzing the pilot information. Furthermore, the guidance cannot compensate for areas where the TWIC pilot does not provide the necessary information. The plan would help DHS ensure that the pilot serves the purpose Congress intended—collecting the data needed to adequately assess the TWIC program during the pilot.

In its comments to our draft report, DHS, on behalf of TSA, also commented on the October 21, 2008, power outage at the facility that hosts TWIC systems. This outage affected TSA’s ability to reset the PINs (i.e., provide users with new PINs) on 410,000 TWIC cards issued prior to the power failure. As part of the regulation that is currently being written, MTSA-regulated facilities and vessels may require TWIC users to use the PIN to unlock information on a TWIC card, such as the TWIC holder’s picture, to verify the identity of a TWIC holder. Consequently, TSA will have to replace the cards for cardholders who forget their PINs instead of resetting these PINs. In its response, however, TSA questioned whether it would cost the government and industry up to $26 million to replace the 410,000 TWIC cards potentially affected by the outage. DHS commented that in the 11 months since the incident, only 1,246 cards have needed replacement, and TSA officials believe it highly unlikely that all 410,000 affected transportation workers will need their cards to be replaced. In addition, DHS provided technical comments, which we incorporated into the report as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. We will then send copies of this report to the Secretary of Homeland Security, the Assistant Secretary for the Transportation Security Administration, the Commandant of the United States Coast Guard, and appropriate congressional committees. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov/. If you or your staff have any questions about this report, please contact me at (202) 512-4379 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII.

This review examined the Transportation Security Administration’s (TSA) and Coast Guard’s overall progress in implementing the Transportation Worker Identification Credential (TWIC) program. We addressed the following questions: (1) To what extent did TSA, the Coast Guard, and the maritime industry take steps to meet the TWIC compliance date and address related challenges? and (2) What management challenges, if any, do TSA, Coast Guard, and the Department of Homeland Security (DHS) face in executing the TWIC pilot test for informing Congress and the card reader rule?

To identify the steps taken by TSA, the Coast Guard, and the maritime industry to meet the April 15, 2009, TWIC compliance date, and address related challenges, we reviewed program documentation on the status of TWIC enrollment and activation as well as implementation efforts from both TSA and the Coast Guard. Among others, this documentation includes compliance reports compiled by the Coast Guard from facility-gathered information, TSA’s TWIC communication plan for disseminating information about the TWIC enrollment process and compliance deadlines, and program management reviews on TWIC enrollment, activation, and issuance. We also interviewed U.S. Citizenship and Immigration Services officials regarding their participation in the TWIC card production and personalization process. In addition, we visited and observed the enrollment process with TSA and TSA contractor representatives at four TWIC enrollment and activation centers. Further, we reviewed TWIC user population estimates and discussed their data reliability with TSA and Coast Guard officials as well as efforts taken to update the population estimates and plan for TWIC enrollment and activation activities and resources. We analyzed pertinent information including key statutes such as the Maritime Transportation Security Act of 2002 (MTSA), as amended by the Security and Accountability For Every (SAFE) Port Act of 2006, and related regulations, policies, and guidance setting out requirements for the TWIC program. We also obtained information from maritime industry stakeholders—such as the TWIC Stakeholder Communication Committee, a 15-member advisory council to TSA, Coast Guard, and their contractor to promote real-time communications flow between industry, government, and the TWIC contracting team; reviewed reports by the National Maritime Security Advisory Committee—an advisory council to DHS; met with nine associations whose members are impacted by the implementation of TWIC, such as the American Association of Port Authorities—a trade association that represents more than 160 public port organizations throughout the Western Hemisphere; the Independent Liquid Terminals Association—a trade association representing companies with bulk liquid terminals and above ground storage tank facilities (“tank farms”) that interconnect with and provide services to various modes of bulk liquid carriers, such as oceangoing tank ships, tank barges, tank trucks, tank rail cars, and pipelines; and the Association of American Railroads—whose members represent a 140,000-mile rail network, including the major freight railroads in the United States, Canada, and Mexico, as well as Amtrak.
We also visited four TWIC enrollment and activation centers, and visited and/or met with officials of facilities and vessels impacted by TWIC across the country such as the ports of Los Angeles and Long Beach, California; Brownsville, Texas; Baltimore, Maryland; and Houston, Texas; as well as the Port Authority of New York and New Jersey. In addition, we met with officials representing vessel operations at the Staten Island Ferry in Staten Island, New York; Magnolia Marine Transports in Vicksburg, Mississippi; Watermark Cruises in Annapolis, Maryland; and World Cruise Terminal in San Pedro, California. At each location, we interviewed officials of facilities and vessels responsible for implementing the use of TWIC. While information we obtained from these interviews and site visits may not be generalized across the maritime transportation industry as a whole, because the facilities, vessels, and enrollment centers we selected are representative of high and low volume entities in the maritime industry and the enrollment centers are representative of areas with high population density, the locations we visited provided us with an overview of the general progress of the TWIC program, as well as any potential implementation challenges faced by MTSA-regulated facilities/vessels, transportation workers, and mariners. Further, we interviewed TWIC program officials from TSA and the Coast Guard—including the TWIC Program Director at TSA and the Coast Guard Commander responsible for the TWIC compliance program—regarding their efforts to implement the TWIC program. We also interviewed a number of Coast Guard officials at ports across the country regarding local TWIC implementation and compliance efforts to better understand the processes and procedures in place for enforcing compliance with TWIC. Specifically, we interviewed Coast Guard officials with responsibilities in New York and New Jersey; Los Angeles and Long Beach, California; Corpus Christi, Texas; and Baltimore, Maryland. We met with these Coast Guard officials because the facilities, vessels, and enrollment centers we visited are housed in these officials’ area(s) of responsibility.

To assess the extent to which TSA planned for the potential failure of information technology systems supporting the TWIC program in order to minimize the effects of potential TWIC system failures, we reviewed TWIC program management reviews and conducted interviews with TWIC program staff. We compared TSA’s efforts with Office of Management and Budget (OMB) and National Institute of Standards and Technology (NIST) guidance, as well as government internal control standards.

To identify and assess the management challenges TSA, the Coast Guard, and DHS face in executing the TWIC pilot test for informing Congress and the card reader rule, we reviewed prior GAO reports and testimonies on the TWIC program issued from December 2004 through September 2008, and key documents related to the TWIC reader pilot. These documents included the Broad Agency Announcement-Initial Capability Evaluation, the TWIC Pilot Test and Evaluation Master Plan, the Initial Technical Test Plan, the Early Operational Assessment Test Plan, the Concept of Operations Plan, TWIC pilot scenarios, the TSA Pilot Schedule, and the Advanced Notice of Proposed Rulemaking on TWIC Reader Requirements.
We also collected and analyzed Port Security Grant Program and Transit Security Grant Program awards relative to the TWIC pilot participants to inform our understanding of the TWIC pilot funding structure and guidance provided to TWIC pilot participants. In addition, we reviewed relevant legislation, such as the MTSA and amendments to MTSA made by the SAFE Port Act of 2006, to inform our review of requirements for TWIC and the TWIC pilot specifically. We also obtained an in-person understanding of the benefits of and barriers to implementing the pilot by conducting site visits to or interviews with officials at the seven pilot sites. Specifically, we visited pilot participants at the Ports of Los Angeles, Long Beach, and Brownsville, and the Port Authority of New York and New Jersey. We also interviewed and/or met with officials at vessel operations participating in the TWIC pilot, including the Staten Island Ferry in Staten Island, New York; Magnolia Marine Transports in Vicksburg, Mississippi; and Watermark Cruises in Annapolis, Maryland. To assess the viability of the TWIC pilot and better understand stakeholder contributions within DHS, we met with officials from several components at DHS. Specifically, we met with officials at DHS’s Office of Screening Coordination, Science and Technology Directorate, the Coast Guard, the Federal Emergency Management Agency, and the Transportation Security Administration. To further enhance our understanding of the TWIC pilot approach, we also interviewed officials at NIST and the Department of Defense’s Naval Air Systems Command and Space and Naval Warfare Systems Command—organizations supporting TSA in the TWIC pilot—to discuss TWIC pilot testing approaches. We also observed testing of TWIC readers against environmental conditions at the Naval Warfare laboratory. In addition, we met with local Coast Guard officials and representatives from 15 stakeholder organizations, including associations and business owners from industries impacted by TWIC, such as longshoremen and truck drivers. While information we obtained from the interviews with stakeholders may not be generalized across the maritime transportation industry as a whole, because we selected stakeholders who either represent national associations or who operate in or access the ports where the TWIC reader pilot will be conducted, the interviews provided us with information on the views of individuals and organizations that will be directly impacted by the program.

In assessing the TWIC pilot approach, we reviewed the information obtained through these endeavors against practices we identified in program and project management as well as program evaluation efforts that are relevant to the TWIC program pilot. These practices were identified based on a review of (1) guidance issued by OMB; (2) our prior work on results oriented government, program management and evaluation, and regulatory analysis; and (3) literature on program management principles. Based on these recognized standards, practices, and guidance, we assessed the pilot schedule against nine relevant best practices in our Cost Estimating and Assessment Guide to determine the extent to which the pilot schedule reflects key estimating practices that are fundamental to having and maintaining a reliable schedule. In doing so, we independently assessed the program’s integrated master schedule and its underlying activities against our nine best practices.
We also interviewed cognizant program officials to discuss their use of best practices in creating the program’s current schedule, and we attended three walk-throughs to better understand how the schedule was constructed and maintained. To further assess the reliability of the schedule, we compared information in the pilot schedule to information provided by pilot participants and stakeholders. We also reviewed TWIC pilot documentation against identified characteristics that sound evaluation plans and approaches include. In addition, we assessed the data to be collected from the TWIC pilot and identified methodologies for using the data to inform Congress on the impacts of using TWIC with biometric card readers and further informing the card reader rule. To help assess the completeness of the TWIC pilot approach and evaluation methodology, we compared the technology, business, and operational potential requirements identified in the TWIC Reader Advanced Notice of Proposed Rulemaking (ANPRM) issued on March 27, 2009, with the items being tested in the TWIC reader pilot. As part of this assessment we reviewed the program evaluation approach used by TSA and the Coast Guard for leveraging pilot efforts and investments to the maximum extent possible for identifying the cost and other implications on government, the private sector, and the public at large to be considered when developing the regulatory analysis.

We conducted this performance audit from July 2008 through November 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Table 3 below summarizes key Transportation Worker Identification Credential (TWIC) program laws and milestones for implementing the program through April 2009.

Appendix III: Phased-In Captain of the Port Zone Compliance Schedule (Revised February 19, 2009) Table 4 below illustrates the phased-in captain of the port zone compliance schedule from October 2008 to April 2009.

Table 5 presents a summary of best practices identified by GAO for applying a schedule as part of program management.

The analysis below is a detailed review of key statements made in the Transportation Worker Identification Credential (TWIC) Reader Advanced Notice of Proposed Rulemaking (ANPRM) issued by the Coast Guard compared to the items being tested in the TWIC reader pilot. The ANPRM contains the potential TWIC reader requirements Coast Guard is considering as part of a future regulation for MTSA-regulated facilities and vessels required to use TWIC as an access control mechanism. The Coast Guard notes that the ANPRM presents preliminary thoughts on potential requirements for electronic TWIC readers in order to open the public dialogue on implementing TWIC reader requirements. The requirements presented in this ANPRM represent the technology, business processes, and operational characteristics of TWIC under consideration at the time. Moreover, they represent potential costs and benefits—or impacts—to be borne by the government, private sector, and population at large as a result of the regulation being considered.
The TWIC reader pilot, as defined in the SAFE Port Act of 2006, is to test the business processes, technology, and operational impacts required to deploy transportation security card readers at secure areas of the marine transportation system. Furthermore, the Department of Homeland Security (DHS) is to report on the following results from the TWIC reader pilot: (1) the findings of the pilot program with respect to technical and operational impacts of implementing a transportation security card reader system; (2) any actions that may be necessary to ensure that all vessels and facilities to which this section applies are able to comply with such regulations; and (3) an analysis of the viability of equipment under the extreme weather conditions of the marine environment.

The following defines the assessment categories used below.

1. Yes—This assessment category represents that the potential requirement identified in the ANPRM is being tested for in the TWIC reader pilot.

2. Partially—This assessment category represents that the potential requirement identified in the ANPRM is at least in part being tested for in the TWIC reader pilot.

3. No—This assessment category represents that the potential requirement identified in the ANPRM is not being tested for in the TWIC reader pilot.

In addition to the contact named above, David Bruno (Assistant Director), Joseph P. Cruz (analyst-in-charge), Chuck Bausell, Tim Boatwright, Geoffrey Hamilton, Richard Hung, Lemuel Jackson, Daniel Kaneshiro, Stan Kostyla, Jason Lee, Linda Miller, Karen Richey, Julie E. Silvers, and Sally Williamson made key contributions to this report.
The Transportation Worker Identification Credential (TWIC) program, which is managed by the Department of Homeland Security's (DHS) Transportation Security Administration (TSA) and the U.S. Coast Guard, requires maritime workers who access secure areas of transportation facilities to obtain a biometric identification card. A federal regulation set a national compliance deadline of April 15, 2009. TSA is conducting a pilot program to test the use of TWICs with biometric card readers in part to inform the development of a second TWIC regulation. The Government Accountability Office (GAO) was asked to evaluate TSA's and the Coast Guard's progress and related challenges in implementing TWIC, and to evaluate the management challenges, if any, TSA, Coast Guard, and DHS face in executing the TWIC pilot test. GAO reviewed TWIC enrollment and implementation documents and conducted site visits or interviewed officials at the seven pilot program sites.

TSA, Coast Guard, and the maritime industry took a number of steps to enroll 1,121,461 workers in the TWIC program, or over 93 percent of the estimated 1.2 million users, by the April 15, 2009, national compliance deadline, but experienced challenges that resulted in delays. TSA and the Coast Guard implemented a staggered compliance approach whereby each of 42 regions impacted by TWIC was required to meet TWIC compliance prior to the national compliance date. Further, based on lessons learned from its early experiences with enrollment and activation, and to prepare for an expected surge in TWIC enrollments and activations as compliance dates approached, TSA and its contractor increased the number of stations available for TWIC enrollment and activation. While 93 percent of users were enrolled in TWIC by the compliance date, TSA data shows that some workers experienced delays in receiving TWICs. Among the reasons for the delays was a power failure in October 2008 at the government facility that processes TWIC data. The power failure resulted in credential activations being halted until late November 2008, and the inability to set new personal identification numbers (PIN) on 410,000 TWICs issued prior to the power failure. While TSA officials stated that they are taking steps to develop a disaster recovery plan by next year and a system to support disaster recovery by 2012, until such a plan and system(s) are put in place, TWIC systems remain vulnerable to similar disasters. While the full cost of this power failure is unknown, based on TSA-provided figures, it could cost the government and industry up to approximately $26 million to replace all affected TWIC cards.

While TSA has made progress in incorporating management best practices to execute the TWIC pilot, TSA faces two management challenges in ensuring the successful execution of the pilot test aimed at informing Congress and the development of the second TWIC regulation. First, TSA has faced challenges in using the TWIC pilot schedule to guide the pilot and accurately identify the pilot's completion date. TSA has improved its scheduling practices in executing the pilot, but weaknesses remain, such as not capturing all pilot activities in the schedule, that may adversely impact the schedule's usefulness as a management tool and for communicating with pilot participants in the maritime industry.
Second, shortfalls in TWIC pilot planning have hindered TSA and Coast Guard's efforts to ensure that the pilot is broadly representative of deployment conditions and will yield the information needed--such as information on the operational impacts of deploying biometric card readers and their costs--to accurately inform Congress and the second rule. This is in part because these agencies have not developed an evaluation plan that fully identifies the scope of the pilot and specifies how the information from the pilot will be analyzed. The current evaluation plans describe data collection methods but do not identify the evaluation criteria and methodology to be used in analyzing the pilot data once collected. A well-developed, sound evaluation plan would help TSA and the Coast Guard determine how the data are to be analyzed to measure the project's performance.
Our objectives were to (1) identify those Defense long-haul telecommunications networks operating outside of the common-user DISN, (2) evaluate the Department of Defense’s progress in implementing its policies for managing telecommunications services, which include: developing a comprehensive inventory of telecommunications equipment and services, reporting on telecommunications services acquired, trends, and costs, mandating the use of common-user networks, and developing a waiver process to grant exceptions from using common-user networks, and (3) evaluate Defense’s progress in developing performance measures for DISN to ensure effective and efficient use of the department’s telecommunications resources. To determine what long-haul telecommunications networks were planned or operating in Defense, we reviewed applicable Defense directives, instructions, and memorandums regarding the use of common-user networks. We met with officials from DISA and OASD/C3I to assess Defense’s progress in developing a comprehensive inventory of telecommunications equipment and services. We met with representatives of the Joint Staff for Command, Control, Communication and Computers (J-6); the Department of Defense’s Office of Inspector General; the Army, the Navy, the Marines, the Air Force, the Defense Logistics Agency, and the Defense Commissary Agency to assess component efforts to develop inventories. When we learned that no comprehensive inventories of networks exist at the department or component level, we sent a questionnaire to the four military services requesting that, for every non-DISN long-haul network, they report: the name of the network; functional description; types of telecommunications services supported; estimated annual costs; whether the network was planned or operational, and if planned, its status, life-cycle costs, and whether it was scheduled to be replaced by DISN, and when. We did not independently verify the information provided by the Services. However, we consulted with them to confirm our understanding of their responses and to discuss and ask questions we had about information they provided. Appendix I details the results of our survey. To assess progress in reporting on telecommunications services acquired, trends, and costs, we reviewed applicable Defense directives, instructions, and memorandums and discussed Defense’s implementation of these requirements with officials from ASD/C3I and DISA. We analyzed information on costs maintained by DISA and reviewed a recent contractor evaluation of DISA business processes. To assess Defense’s progress in enforcing its policy mandate that Defense components acquire services from common-user networks, we reviewed applicable Defense directives, instructions, and memorandums and met with officials from ASD/C3I, DISA, and the Defense components. During these interviews we asked for documentation showing that existing policies on telecommunications management and the use of common-user networks were being implemented and enforced. We obtained and analyzed network plans, requirements, and other acquisition documentation to determine if Defense components were complying with telecommunications management policies. To assess Defense’s progress in developing a waiver process to grant exceptions from using common-user networks, we reviewed applicable Defense directives, instructions, and memorandums. 
We met with officials from ASD/C3I and DISA to discuss their plans to implement an interim waiver process and to develop a strategy detailing how and when independent networks will be replaced by their common-user counterparts. Because the interim process began during our review, we met again with DISA officials in April 1998 to assess the agency’s progress to date in granting waivers. To assess Defense’s progress in developing performance measures for DISN, we met with officials from DISA and reviewed DISA’s draft documentation on the issue, which consisted of draft performance measures for information technology acquisitions. We reviewed the Clinger-Cohen Act of 1996, the Federal Acquisition Streamlining Act of 1994, the Chief Financial Officers Act of 1990, the Government Performance and Results Act of 1993, and the Paperwork Reduction Act of 1995 to determine applicable legislative requirements for developing performance measures. We relied on work we performed in developing our recent guide on performance measurement, Executive Guide: Measuring Performance and Demonstrating Results of Information Technology Investments (GAO/AIMD-98-89, March 1998). In addition, we examined network performance measurements used in the private sector. Our review was conducted from December 1996 through April 1998 in accordance with generally accepted government auditing standards. We obtained written comments from Defense on a draft of this report. These comments are discussed in the “Agency Comments and Our Evaluation” section of this letter and are reprinted in appendix II. The military services, Defense agencies, and other Defense components have traditionally acquired and operated many unique telecommunications networks to support a range of mission requirements. As a result, Defense components operate many stovepiped telecommunications systems that are not interoperable and cannot share information across functional and organizational boundaries. For example, between 1988 and 1992 Defense reported several interoperability problems including some arising during the Persian Gulf War. Defense components were unable to use their telecommunications networks and information systems to coordinate the issuance of air tasking orders, the use of air space, and the use of fire support for joint operations. To improve the interoperability of its military communications services as well as to reduce costs associated with operating redundant systems, Defense began in 1991 to plan and implement DISN to serve as the department’s primary worldwide telecommunications and information transfer network. The DISN strategy focuses on replacing older data communications systems, using emerging technologies and cost-effective acquisition strategies that provide secure and interoperable voice, data, video, and imagery communications services. Under the DISN program, the military services and Defense agencies are still responsible for acquiring telecommunications services for their local bases and installations as well as deployed communications networks. DISA is responsible for acquiring the long-haul services that will interconnect these base-level and deployed networks within and between the continental United States, Europe, and the Pacific. Defense issued a number of policies and directives in 1991 aimed at ensuring that the department could identify and replace redundant networks with DISN and manage DISN efficiently and effectively. 
These policies directed components to develop comprehensive inventories of their telecommunications equipment and services, and DISA to develop a comprehensive Defense-wide inventory; directed DISA to report annually on telecommunications equipment acquisitions, trends, and associated costs; mandated the use of common-user networks; and directed DISA to develop a waiver process to grant exceptions from using common-user networks when these networks could not satisfy Defense components’ requirements. In a previous review of the DISN program, we found that Defense was not doing enough to ensure that the program would be managed efficiently and effectively. Specifically, the department lacked performance measures that would help Defense track whether DISA was meeting its objectives, efficiently allocating resources, and learning from mistakes. In response, Defense agreed to establish measures for the program. In order for the DISN program to work, Defense needs to know how many networks are operating in the department and what functions they support. This is the foundation for identifying redundant and stovepiped networks and ensuring that they are replaced by DISN. However, Defense lacks the basic information necessary to determine how many networks are operating in the department, what functions they support, or what they cost. In order to estimate the number and cost of networks that are operating outside of DISN, we conducted our own survey, which identified 87 such networks operated by the military services alone. DISA initiated a similar data call to the military services and Defense agencies after we began our survey and identified 153 networks planned or operating throughout Defense. The results of our survey are presented in appendix I and summarized in table 1. To manage telecommunications cost effectively, Defense must know what networks are operating in the department. In 1991, Defense directed DISA to establish a central inventory of all long-haul telecommunications equipment and services in Defense, and directed the heads of Defense components to do likewise. However, the central inventory was never established and DISA staff are still discovering new networks as they process new telecommunications service requests from Defense components. Defense components have also failed to develop inventories of their own networks. During our initial meetings, Army, Navy, and Air Force officials stated that they could not readily identify all of their networks or describe what their functions are because they do not centrally manage their telecommunications resources. Our experience with the Navy illustrates the depth of this problem. The Navy’s initial response to our survey only identified three independent long-haul networks. Other Navy networks known to exist, such as the Naval Aviation Systems Team Wide Area Network (NAVWAN), were not reported in the survey. Navy’s headquarters telecommunication staff acknowledged that they were unable to identify all of the Navy’s long-haul networks. Careful analysis is needed to determine whether any of the independent networks identified in our survey can or should be replaced by DISN common user services. However, on the basis of our interviews with the military services and our survey results, we were able to determine that overlaps exist between telecommunications services offered by independent networks and services offered by DISN. 
For example: NAVWAN offers its users data communications services using Internet Protocol (IP); similar services are provided by DISA on DISN’s Unclassified but Sensitive (N-Level) IP Router Network (NIPRNET). The Army’s Installation Transition Processing (ITP) Network also offers IP router services similar to those provided by DISN’s NIPRNET. The Navy Sea Systems Command’s Enterprisewide Network (NEWNET, now known as Smart Link) relies on asynchronous transfer mode-based data communications services; similar services are now offered by DISA on a limited basis. The Army’s planned Regional Transition Network (ARTNET, now known as the Circuit Bundling Initiative) also relies on asynchronous transfer mode-based data services, similar to services offered by DISA. To ensure that a common-user network is efficiently and effectively managed, it is essential to closely monitor its acquisitions of telecommunications services, costs, and trends in usage, that is, the volumes and types of traffic it carries. This monitoring helps an agency ensure that the network is properly sized (i.e., neither oversized nor undersized) and offers cost-effective services. Since 1991, DISA has been required to report annually on telecommunications services acquired, trends (volumes and types of traffic), and associated costs throughout Defense. However, it has not done so, and it lacks the data needed to begin developing such reports. For example, as noted previously, DISA lacks a comprehensive inventory of telecommunications equipment and services across the department. Therefore, it cannot effectively report annually on acquisitions. In addition, DISA has not collected data that would help it identify trends in network traffic throughout Defense, which in turn would help it plan for future growth and identify the need for new telecommunications services. This would include data on the number of anticipated users, the nature of business functions requiring telecommunications support, and the potential costs and benefits of new technologies. Further, Defense managers lack reliable cost information on their networks. For example, senior Defense managers rely on Defense components to voluntarily report telecommunications resource requirements during annual budget preparations. But because communications resources are embedded in noncommunications budget items, this process does not allow Defense to identify costs by network or to identify costs for services obtained by users outside of DISA channels. In addition, DISA does not have a cost accounting system or any other effective means of determining DISN’s actual operating costs. Until Defense managers have good data on status and trends in telecommunications equipment and services, acquisitions, and costs, they will not have a sound basis for making decisions on reducing telecommunications costs across the department, improving network operations, and reliably determining how efficiently and cost effectively to meet user needs. Under Title 10 of the United States Code, the military services have wide latitude to expend resources to train and sustain their forces. Because the mandate to use DISN restricts this latitude, compliance will only be achieved if Defense institutes an effective enforcement process. Since it began the DISN program in 1991, Defense has never effectively enforced the use of common-user networks. 
While OASD/C3I staff stated that financial pressure could be brought to bear in the budget process to enforce the mandate, they were unable to articulate how this enforcement would occur. Further, even though the military services have implemented several major long-haul networks during the past 5 years, OASD/C3I staff were unable to identify a single instance in which they formally analyzed the military services’ plans for acquiring long-haul networks and insisted that common-user networks be used instead.

In May 1997, ASD/C3I issued a memorandum that reiterated Defense policy mandating the use of common-user networks for long-haul telecommunications and reaffirmed DISA’s role as the manager and sole provider of long-haul telecommunications. Defense is now preparing an update to this memorandum that it states will reflect the department’s changing organization and mission, and changes in telecommunications technology. However, unless Defense defines and implements a process to enforce this policy, it will remain ineffective.

In August 1997, DISA began implementing an interim waiver process which outlined the steps that Defense components must follow to operate independent networks: First, operators of all independent long-haul networks must, as of August 1997, request a waiver to policy mandating common-user networks. Second, DISA must assess the request and issue a waiver in those cases where telecommunications requirements cannot currently be technically or economically satisfied by DISN or another common-user system such as FTS 2000/2001. Neither of these steps, however, is well-defined. For example, the guidance does not describe data that the required justifications should include or criteria DISA will use in evaluating them. In addition, it does not specify how DISA will determine if components’ requirements can be cost effectively satisfied by DISN or FTS 2000/2001. To date, the Services and Defense agencies have largely ignored the interim waiver process. Only 9 percent of the operators of the 131 non-DISA-managed independent networks that DISA identified in its survey have requested a waiver from use of DISN services.

Performance measures are central to effectively managing any significant information system undertaking and are required by several federal statutes, including the Federal Acquisition Streamlining Act (FASA) of 1994 and the Clinger-Cohen Act of 1996. For example, under FASA, the Secretary of Defense is required to establish and approve the cost, performance, and schedule goals for major defense acquisition programs and for each phase of the acquisition cycle. Under Clinger-Cohen, agencies must define mission-related performance measures before making information technology investments, and must determine actual mission-related benefits achieved from this information technology, to help ensure an adequate return on investment. For the DISN program, appropriate performance measures would be those that facilitate comparisons between DISN and the independent networks, as well as those that identify potential problems (for example, network reliability, network availability, and measures of customer service, including responsiveness to customer requests for maintenance or for new services). In our 1996 report on the DISN program, we recommended that Defense establish performance measures for DISN. Although it agreed to develop performance measures in response to that review, Defense has never developed measures for the DISN program.
Until it does so, Defense will not be able to demonstrate to the Services and other components that DISN is a better choice than their various independent networks, nor will it be able to target and direct management attention to problem areas. In the 7 years that it has been implementing the DISN program and striving to improve telecommunications management in the department, Defense has done very little to implement the basic management controls it believed were needed to ensure success. Numerous independent networks continue to exist without DISA’s knowledge; Defense does not have a comprehensive inventory of telecommunications equipment and services; DISA does not collect data and report on acquisitions, trends, and costs; Defense does not enforce the use of common-user networks; Defense has not implemented an effective waiver process that includes the objective evaluation of alternative telecommunications solutions; and Defense has not established good performance measures. As a result, Defense has not achieved its goals for an interoperable telecommunications environment, cannot support any claims that the long-haul networks it operates are cost-effective, and cannot determine which independent long-haul networks should be replaced by common user networks such as DISN or FTS 2000/2001. We recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence to ensure that existing policies are clearly defined, documented, and enforced. Specifically, ASD/C3I should develop and maintain a comprehensive inventory of Defense’s telecommunications equipment and services; track acquisitions of telecommunications services throughout Defense, the actual costs of those services, and trends in usage (that is, the volumes and types of traffic that networks carry); define and institute an effective process for evaluating the cost-effectiveness of Defense networks and mandating the use of common-user networks for long-haul telecommunications where appropriate. As part of this process, define the criteria that DISA will use to make waiver determinations, including how DISA will measure technical, economic, and customer service factors in granting waivers. In addition, we recommend that the Secretary direct the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence to develop and adopt user-based provisioning, pricing, and performance metrics as minimum performance measures for DISN. The Senior Civilian Official for the Office of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (ASD/C3I) provided written comments on a draft of this report. Defense concurred with all of our recommendations. However, Defense expressed concern that the body of the draft report may lead the reader to believe that Defense has done nothing to implement or enforce its own long-haul telecommunications policies. 
In its response, the department notes that it has: (1) established the Defense Information Systems Database (DISD) as a comprehensive inventory of long-haul telecommunications networks throughout Defense, (2) clarified existing policy by issuing an ASD/C3I memorandum dated May 5, 1997, that reaffirms DISA's role as the sole manager and provider of long-haul telecommunications systems and services, (3) developed a process for determining how individual telecommunications requirements can best be satisfied, (4) developed a process for granting temporary waivers, and (5) begun the process of establishing performance metrics for DISN. We incorporated additional information in the report to more clearly reflect actions DISA has initiated. However, while these plans are a necessary first step, they must be effectively implemented to bring about real improvements in telecommunications management, which is the focus of the body of our report. Defense recognizes this in its discussion and expresses its commitment to effectively implementing our recommendations. Defense's comments are presented in appendix II. Detailed GAO responses follow in the same appendix. We will send copies of this report to the Chairman of your Committee; the Chairmen and Ranking Minority Members of the House Committee on Government Reform and Oversight, the House and Senate Appropriations Committees, the House National Security Committee, the Senate Armed Services Committee, and other interested congressional committees; the Secretary of Defense; and the Director of the Office of Management and Budget. Copies will be made available to others upon request. Please contact me at (202) 512-6240 if you or your staff have any questions. Major contributors to this report are listed in appendix III. The networks listed in the appendix include the following:
Naval Education & Training Management Systems Network (NETMSN)
NAVSEA Enterprise Wide Area Network (NEWNET/Smart Link)
Puget Sound Metropolitan Area Network (MAN)
Tidewater Metropolitan Area Network (MAN)
Naval Facilities Engineering Command Wide Area Network (NAVFAC WAN)
Pensacola Metropolitan Area Network (MAN)
Corpus Christi Metropolitan Area Network (MAN)
NCTAMS LANT Det. Video Teleconferencing
NCTAMS LANT Det. Advanced Digital Multiplexer System (ADMS)
NCTAMS LANT Det. U.S. Atlantic Command Net (USACONNET)
NCTAMS LANT Det. Navy C2 System (NCCS)
Guam Unclassified Metropolitan Area Network (MAN)
Guam Administrative Telephone Switching System
San Diego Metropolitan Area Network (MAN) (planned)
Information on these networks came from DISA's survey, which does not include cost data. The Marine Corps did not provide this information or provided insufficient information to determine costs by fiscal year. The Air Force did not provide this information or provided insufficient information to determine costs by fiscal year. The following are GAO's comments on the Department of Defense letter dated July 16, 1998. 1. We acknowledge in this report that ASD/C3I has clarified existing long-haul telecommunications policy by issuing a May 5, 1997, memorandum. We have added information regarding Defense's update of 1991 policy that will reflect changes in technology, organization, and mission. Nevertheless, Defense's actions remain preliminary, and unless that policy is properly implemented and enforced it will remain ineffective. 2.
As indicated in the reply, Defense does not maintain a comprehensive inventory of independent long-haul telecommunications networks, and therefore does not know how many networks are operating throughout the department or what functions they support. As Defense notes in its comments, additional guidance and procedures are needed to ensure that all requirements for long-haul telecommunications equipment and services are identified and placed in the Defense Information Systems Database. 3. Defense affirms in its comment what we state in this report, that DISA currently lacks well-defined steps for determining whether a long-haul telecommunications requirement can be most effectively satisfied by a common-user network. We note Defense's plan to develop and employ a standard requirements evaluation model. This model, if properly developed and implemented, could assist Defense in making cost-effective decisions on individual telecommunications requirements. However, the model may not be effective without the cooperation of Defense components, which may choose not to submit their requirements through DISA. The model may also not be effective if other steps mentioned in this report, such as adequate data gathering on telecommunications trends and costs, and use of performance measures, are not taken. 4. Two years ago we highlighted the need for DISN performance measures in a report on the DISN program (GAO/AIMD-97-9, November 27, 1996). We recognize that Defense now intends to take action on our recommendation that it implement user-based performance measures for DISN, and we agree that such metrics should be applied to all long-haul telecommunications. We are unable to make further comment, however, until Defense takes concrete steps to implement these performance measures. Franklin W. Deffer, Assistant Director Kevin E. Conway, Assistant Director Mary T. Marshall, Senior Information Systems Analyst Cristina T. Chaplain, Communications Analyst
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) efforts to implement the Defense Information Systems Network (DISN), focusing on: (1) those DOD long-haul telecommunications networks operating outside of the common-user DISN; (2) DOD's progress in implementing its policies for managing DISN; and (3) DOD's progress in developing performance measures for DISN, which DOD agreed to do in response to GAO's previous review of the DISN program. GAO noted that: (1) although DOD has been implementing the DISN program for 7 years, numerous networks continue to exist without the Defense Information Systems Agency's (DISA) knowledge; (2) GAO's survey found that the military services are operating at least 87 independent networks that support a variety of long-haul telecommunications requirements; (3) the services reported costs on 68 of these networks totalling more than $89 million annually; (4) DOD's inability to restrict the number of networks operating across the department stems from its failure to implement basic telecommunications management policies established at the beginning of the DISN program and its failure to develop objective performance measures for the program; (5) DISA has not developed a comprehensive inventory of telecommunications networks throughout DOD nor have the military services developed inventories of their own networks; (6) DISA has not reported on telecommunications acquisitions, trends (volumes and types of traffic) and costs throughout DOD, and it lacks the data to develop such reports; (7) DOD has not effectively enforced the use of common-user services, nor were Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (ASD/C3I) officials clear on how enforcement would occur; (8) DOD has only recently begun to implement an interim waiver process to exempt DOD components from using common-user networks--a final process has yet to be implemented; (9) DOD has not developed performance measures for the DISN program even though it agreed with GAO's previous report that these measures were essential to ensuring DISN was efficiently and effectively managed; (10) by not implementing the above, DOD lacks the basic management controls to ensure that it can achieve its goal for an interoperable and cost-effective telecommunications environment; (11) specifically, it lacks a foundation for identifying stovepiped and redundant networks that are not interoperable and cannot share information, and replacing them with mandated common-user services; it lacks a basis for maximizing the efficiency and cost-effectiveness of DISN; it cannot quantify problems; and it cannot learn from mistakes; and (12) as a result, DOD's stated goals for DISN are at risk, and DOD cannot ensure that DISN is the most cost-effective solution to DOD's telecommunications service requirements.
BOP operates three main types of segregated housing units: (1) SHUs, (2) SMUs, and (3) the ADX facility in Florence, Colorado. BOP also operates Communications Management Units (CMU), where conditions of confinement are similar to general population and inmates are allowed to congregate outside their cells for up to 16 hours per day. For information about CMUs, see appendix II. According to BOP policy, all three types of segregated housing units have the same purpose, which is to separate inmates from the general inmate population to protect the safety, security, and orderly operation of BOP facilities, and to protect the public. However, the specific placement criteria and conditions of confinement vary for each type of segregated housing unit. In addition, inmates in all three units (SHUs, SMUs, and ADX) are confined to their cells approximately 23 hours per day. From fiscal year 2008 through February 2013, the total inmate population in segregated housing units increased approximately 17 percent—from 10,659 to 12,460 inmates. While the total inmate population in segregated housing units has increased since fiscal year 2008, the trends in inmate population vary by type of segregated housing unit. By comparison, the total inmate population in BOP facilities increased by about 6 percent since fiscal year 2008. In addition, the total number of segregated housing cells in BOP facilities increased by nearly 16 percent. The main reason for the increase in segregated inmates was the creation of the SMU program in fiscal year 2008. SHUs. From fiscal year 2008 through February 2013, the total SHU population remained about the same, at 10,070 and 10,050 inmates, respectively. BOP generally double-bunks inmates in SHUs; however, BOP has the capability to hold some SHU inmates in single cells. For example, as of November 2012, BOP had 6,731 double-bunked SHU cells and 360 single-bunked SHU cells. BOP officials also stated they may add beds to some SHU cells to accommodate the population at a given facility. SMUs. As shown in figure 5, from fiscal year 2008 through February 2013, the SMU population increased at a faster rate than SHUs and ADX—from 144 inmates in fiscal year 2008 to 1,960 inmates as of February 2013. BOP developed SMU capacity by converting existing housing units in five BOP facilities to 1,270 total SMU cells, as of November 2012. By March 2013, BOP closed SMUs in two facilities and moved those SMU inmates into other SMUs or released them from prison after serving their sentence. ADX. From fiscal year 2008 through February 2013, the total ADX inmate population declined by approximately 5 percent, from 475 inmates to 450 inmates. During this period, ADX cells remained stable at 623 cells. According to BOP officials, the ADX population has declined overall since 2008 because of the transfer of inmates out of ADX Step Down to the general population of another high security prison or because inmates are being placed in SMUs instead of being placed in ADX. (See fig. 5 for the trends in population growth for SHUs, SMUs, and ADX from fiscal year 2008 through February 2013.) These data include inmates in the SHUs within each SMU. BOP Headquarters (HQ) has a mechanism in place to centrally monitor how prisons implement most segregated housing unit policies, but the degree of BOP monitoring varies depending on the type of segregated housing unit. In addition, we identified concerns related to facilities' documentation of monitoring conditions of confinement and procedural protections.
BOP monitors the extent to which individual prisons implement BOP policies. BOP's monitoring includes specific steps to check compliance with requirements for SHUs and SMUs, but not for ADX. BOP's Program Review Division (PRD) is to perform reviews at least once every 3 years to ensure compliance with BOP policies. However, BOP can review prisons more frequently if it identifies performance deficiencies. These follow-ups can occur at 6-month, 18-month, 2-year, or 3-year intervals. These PRD reviews assess compliance with a variety of BOP policies for inmates in the general population prison and segregated housing. For example, PRD assesses compliance with BOP policies on conditions of confinement, such as whether inmates are given three meals a day, provided exercise time 5 days a week, and allowed telephone and other privileges. Following a review at a facility, PRD issues a program review report, noting deficiencies and findings at the BOP facility. These PRD monitoring reviews are done on a prison complex basis, which may include a variety of housing types, including low, minimum, medium, and high security prisons, and the three types of segregated housing units (e.g., SHUs, SMUs, and ADX). According to BOP officials, BOP provides training for PRD program review staff to conduct on-site monitoring. For example, on-site monitoring generally includes a team averaging about five examiners, depending on the size and security level of the facility. Before a staff member leads an on-site monitoring visit, he or she is required to shadow an experienced staff member for about 1 year. BOP also trains all employees in basic correctional duties and inmate supervision. For example, BOP requires all new examiners to participate in annual refresher training. Prisons are expected to respond to deficiencies that PRD identifies in its program reviews. If PRD determines that the prison response is insufficient, PRD can request that the prison take corrective actions in a subsequent follow-up report. We reviewed 43 PRD follow-up reports and found that PRD concluded that the facilities generally addressed deficiencies identified in all of the 43 reports. For example, one follow-up report was completed within 30 days and identified steps taken by the prison to address each of the four problem areas—administrative operations, operational security, inmate management, and intelligence operations—identified in the PRD report. To address one of the deficiencies related to improper documentation of exercise, meals, and supervisor assignments in SHUs, PRD required additional training for the SHU staff. Following training, the prison determined that it was in compliance with the relevant requirement, the deficiencies were addressed, and PRD closed the recommendation. As part of PRD's monitoring process, once the facilities document steps taken to address deficiencies in their follow-up reports, PRD determines whether to close the recommendations. As part of the monitoring process discussed above, PRD also checks compliance with selected SHU- and SMU-specific policies, but has no requirement to monitor ADX-specific policies. According to documentation that BOP provided, we determined that BOP's monitoring system is designed to assess whether individual BOP prisons are in compliance with SHU and SMU procedural policies, such as why an inmate is placed in segregation, and with the specific conditions of confinement. For example, BOP's SHU policy requires that prison staff review the inmate's status within 3 days of being placed in administrative detention.
To assess compliance with this SHU policy, BOP monitoring guidance requires PRD staff to review whether the inmate's status was reviewed within 3 days of being placed in administrative detention as required. In addition, PRD also is to verify that prisons completed their quarterly audits and operational reviews to ensure that procedural protections for inmates have been followed and that inmates are housed according to BOP policies. However, as discussed below, BOP does not have requirements in place to monitor similar compliance for ADX-specific policies. BOP's monitoring policies for each type of segregated housing unit are described below. SHU. BOP policies require that PRD monitor SHU policies and review documentation of 10 percent of inmates held in SHUs in each facility. BOP policies also require PRD to select 10 inmate files from those held in SHU disciplinary segregation for a review of procedural protections and disciplinary procedures. Further, BOP requires PRD to monitor SHU-specific policies that cover additional requirements to monitor conditions of confinement and procedural protections. BOP incorporates ACA monitoring standards as part of its SHU policy. See figure 6 for a photographic example of a SHU cell, which PRD is required to monitor to ensure the prison provides conditions of confinement for inmates held in SHUs. SMU. According to BOP policy, PRD is required to monitor a prison's compliance with SMU-specific policies, including those SMU-specific policies that require prisons to provide specific conditions of confinement and procedural protections. PRD reviews are required to check compliance with nine SMU-specific policies, such as providing inmates with 5 hours of recreation per week; an opportunity to shower a minimum of three times per week; and access to visits, correspondence, and medical and mental health care. According to BOP officials, BOP incorporates ACA monitoring standards as part of its SMU policy. BOP also requires PRD to review 25 SMU inmate case files that cover conditions of confinement for SMU inmates. See figure 7 for a photographic example of a SMU recreation area, which PRD is required to monitor to ensure the prison provides conditions of confinement for inmates held in SMUs. ADX. ADX inmates are included in any PRD program review that covers the entire Florence prison complex. While PRD has some oversight over ADX, PRD does not monitor ADX to the same degree that it monitors SHUs and SMUs. According to BOP officials, except for inmates held in ADX-SHUs, PRD is not required to monitor ADX-specific conditions of confinement (such as exercise, telephone, and visitation) as it does for SHUs or SMUs. For example, PRD reviews do not check for compliance with ADX-specific policies, such as whether inmates are afforded a minimum of 7 hours of recreation per week or the minimum of one 15-minute phone call per month in the Control Unit. The ADX-specific policies for recreation, telephone calls, and visits allowed vary in each of the three ADX housing units: the Control Unit, the Special Security Unit, and the Step Down Units. (See fig. 2.) According to BOP officials, PRD does not have monitoring requirements for ADX-specific policies because BOP management has not identified ADX as a high-risk area needing specific monitoring requirements, given other oversight mechanisms.
For example, BOP HQ reviews the referral and placement of all inmates in ADX, including a review of each inmate placed in the Control Unit every 60 to 90 days to determine the inmate's readiness for release from the unit. BOP officials also told us that ADX-specific policies are monitored locally by ADX officials. However, conditions of confinement in ADX housing units are generally more restrictive than those in SHUs and SMUs. For example, unlike SHUs and SMUs, nearly all inmates in ADX are confined to single cells alone for about 23 hours per day. Also, although BOP HQ has mechanisms to monitor some procedural protections, and ADX officials locally monitor ADX-specific policies, BOP HQ lacks oversight over the extent to which ADX staff are in compliance with many ADX-specific requirements related to conditions of confinement and procedural protections to the same degree that it has for SHUs and SMUs. According to PRD officials, PRD does not assess the extent to which ADX provides conditions of confinement or procedural protections as required under ADX policy and program statements because it is not required to do so. As a result, PRD cannot report to BOP management on the extent of compliance with these ADX-specific requirements. With such oversight, BOP headquarters would have additional assurance that inmates held in BOP's most restrictive facility are afforded their minimum conditions of confinement and procedural protections. See figures 8 and 9 for examples of a cell in the ADX housing unit and recreation areas, which PRD is required to monitor to some extent to ensure the prison provides conditions of confinement for inmates held in ADX. Standards for Internal Control in the Federal Government states that an effective internal control environment is a key method to help agency managers achieve program objectives. The standards state, among other things, that monitoring activities are an integral part of an entity's planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. Specific requirements for PRD to monitor ADX-specific policies to the same degree that these requirements exist for SHUs and SMUs could help provide BOP HQ additional assurance that ADX officials are following BOP policies to hold inmates in a humane manner in its highest security, most restrictive facility. The Acting Assistant Director of PRD agreed that developing such requirements would be useful to help ensure these policies are followed. BOP has a mechanism in place to centrally monitor how prisons implement most segregated housing unit policies. However, given a selection of PRD monitoring reports from 20 prisons and our independent analysis of inmate case files at two federal prisons, we identified concerns related to how facilities are documenting that inmates received their conditions of confinement and procedural protections, which are described below. PRD monitoring reports. We reviewed 45 PRD monitoring reports from 20 prisons that assessed compliance at general population units and SHUs and SMUs. PRD identified deficiencies in 38 of these reports, including documentation concerns in 30 reports. As part of our review, we found that PRD monitoring reports identified deficiencies such as missing SHU forms or incomplete documentation that inmates held in segregation for at least 22 hours per day received all their meals and exercise as required.
For example, segregated inmates in SHUs and SMUs are entitled to the opportunity to have 1 hour of exercise per day, but the documentation at these prisons did not clearly indicate that these standards were always observed. According to our review of 45 PRD reports from 20 prisons, we found that BOP rated 15 prisons as generally compliant with both BOP policies and policies specific to SHUs and SMUs. However, while BOP found that these prisons were generally in compliance with segregated housing unit policies, most of these prisons had some deficiencies. For example, our analysis of the PRD reports found that, in 38 of the 45 reviews, PRD identified deficiencies such as missing documentation, monitoring rounds not being consistently conducted, or inmate review policies not fully implemented. (See fig. 10 for common deficiencies.) To assess how PRD staff conducted monitoring at prisons, we observed PRD conducting reviews at one prison complex that included two medium and high security BOP facilities with SHUs. For example, we found that PRD staff (1) performed monitoring rounds at SHUs, (2) reviewed log books, and (3) reviewed inmate files, to determine if the facilities followed the required procedural protections steps. Given our observations, we concluded that PRD staff monitored these facilities' compliance with BOP policies, as called for in PRD's monitoring guidelines. Independent analysis of inmate case files. We also conducted an independent analysis of BOP compliance with SHU-specific policies at three facilities. Specifically, we reviewed a total of 51 segregated housing files for inmates held in administrative detention and disciplinary segregation in SHUs for fiscal years 2011 and 2012 at three facilities. Based on our review of these selected files, we found that these three facilities were generally complying with BOP policies related to inmate placement and ensuring procedural protections for inmates placed in SHU disciplinary segregation. For example, 42 out of 51 inmate case files we analyzed provided reasons for inmate placement in SHUs, as required by BOP policies. However, of the 35 case files we reviewed for inmates held in administrative detention in SHUs (in which we reviewed conditions of confinement, monitoring, and procedural protections), only 4 files consistently documented that the inmates were afforded their rights to recreation and procedural protections. For example, these 4 files consistently documented that these inmates in SHUs received 1 hour of exercise a day, 5 days per week; that the inmates' status in segregation was consistently reviewed within 7 days of being placed in the SHU; and that meals and recreation were provided as required by BOP policy. The remaining 31 of the 35 files did not consistently document that the inmates were afforded these rights. (See table 1.) Given (1) our review of 45 BOP monitoring reports and (2) our independent analysis of 51 selected inmate case files at three facilities, we found that the facilities did not consistently document conditions of confinement and procedural protections as required under BOP policy guidelines. For example, 38 out of the 45 reports identified deficiencies such as missing documentation, monitoring rounds not being consistently conducted, or inmate review policies not fully implemented.
In our independent analysis of 51 segregated housing unit case files, we reviewed 35 files, focusing on determining whether BOP regularly monitors inmates' status, conditions of confinement, and procedural protections, and found documentation-related concerns in 31 out of 35 files. While our selection of reports and site visits cannot be generalized to all BOP facilities, the extent of documentation concerns indicates a potential weakness in facilities' compliance with BOP policies. Without proper documentation of inmates' rights and conditions of confinement, neither we nor BOP HQ can determine whether facility staff have evidence that facilities complied with policies to grant inmates exercise, meals, and other rights, as required. In January 2013, BOP officials agreed with our finding that BOP monitoring reports regularly identified problems with documentation. BOP officials said that they believed these were documentation problems caused by correctional officers forgetting to document the logs, and not instances where inmates were not getting their food, exercise, and procedural protections granted under BOP guidelines. They noted that inmates can use the formal grievance process, called the Administrative Remedy process, if they believe they have not been granted these rights. According to BOP officials, in December 2012, BOP began using a new software program, called the SHU application, in all SHUs and SMUs. BOP officials told us that this new software program could improve the documentation of the conditions of confinement in SHUs and SMUs, but acknowledged it may not address all the deficiencies that we identified. Because this new software was recently implemented, and BOP did not provide evidence of the extent to which it has addressed the documentation deficiencies, we cannot determine if it will mitigate the documentation concerns. In addition, BOP does not have a plan that provides the specific objectives of the software program, how it will address the documentation deficiencies, or specific steps BOP will use to verify that the software will resolve the documentation problems we identified. According to best practices in project management, the establishment of clear, achievable objectives can help ensure successful project completion. A plan that clarifies the objectives and goals of the new software program and the extent to which it will address the documentation issues we identified, along with time frames and milestones, could help provide BOP additional assurance that inmates in these facilities are being treated in accordance with BOP guidance. BOP does not regularly track or calculate the cost of housing inmates in segregated housing units. BOP computes costs by facility or complex, and does not separate or differentiate the costs for segregated housing units, such as SHUs, SMUs, and ADX, that may be within the complex. For example, Federal Correctional Complex (FCC) Florence in Florence, Colorado, contains four different facilities, including ADX, one high security, one medium security, and one minimum security facility, as well as different types of housing units within most facilities. Specifically, within the high security facility, there is a SHU and a SMU.
According to BOP officials, segregated housing unit costs are not separated because most of the costs to operate a facility or complex apply to inmates housed in all housing units within the facility or complex. BOP officials further reported that inmates in a segregated housing unit within a facility share the same costs under the facility's total obligations, such as utilities, food services, health services, and facility maintenance, among other things. BOP officials also stated that BOP aggregates the cost data for an entire facility or complex to reduce paperwork and streamline operations. BOP also computes an overall average daily inmate per capita cost by security level for each fiscal year. See table 2 for BOP's computation of average daily inmate per capita costs by security level for fiscal year 2012. BOP officials stated that segregated housing units are more costly than general prison population housing units because segregated housing units require more resources—specifically staff—to operate and maintain. According to BOP officials, the staff-to-inmate ratio in segregated housing is significantly higher than in the general prison population, which makes segregated housing units more expensive to operate. For example, at one high security facility we visited, we estimated there was an average of 41 inmates to one correctional officer in the SHU during a 24-hour period. This contrasts to an inmate-to-correctional-officer ratio of about 124:1 in general population housing units in the same facility during a 24-hour period. BOP officials at facilities we visited stated that ADX, SMUs, and SHUs require more staff than general population housing because most of the inmates are confined to their cells for approximately 22 to 24 hours per day. As a result, they are dependent on the correctional officers for many of the activities that those in the general inmate population do for themselves. For example, at least two correctional officers are needed to escort SHU and SMU inmates to showers and to recreation cells. Some high security inmates at SMUs require a three-officer escort each time they leave the cell. Staff are required to bring meals to inmates in their cells in SHUs, SMUs and ADX three times each day. In addition, staff are also required to provide laundry services, daily medical visits, and weekly psychological, educational, and religious visits to inmates in their cells in SHUs, SMUs and ADX. In contrast, inmates in general population units can generally access services in other areas of the facility freely, and therefore can perform these activities without assistance from correctional officers. On January 31, 2013, BOP budget officials provided a snapshot estimate that compares the daily inmate per capita costs in fiscal year 2012 at ADX, a sample SMU, a SHU at a sample medium security facility, and a SHU at a sample high security facility. For example, BOP estimates the daily inmate per capita costs at ADX are $216.12 compared with $85.74 at the rest of the Florence complex. According to BOP estimates, the inmate per capita costs at the sample SMU facility are $119.71, which are higher than per capita costs in general population in BOP's sample high security facility, which are $69.41 (see table 3). For its estimates of the costs to operate SHUs, BOP selected Federal Correctional Institution (FCI) Beckley for a sample medium security facility and U.S. Penitentiary (USP) Lee for a sample high security facility.
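The snapshot estimates in table 3 are daily rates; the annual figures discussed below follow from multiplying a daily per capita rate by 365 days and by the number of inmates involved. The short sketch below is a rough replication of that arithmetic, not BOP's own cost model; the rates and populations are simply the figures cited in this report, and the totals it prints (roughly $87 million, $50 million, and $34 million) match the estimates discussed below.

```python
# Rough replication of the annual cost arithmetic described in the text.
# The daily per capita rates and inmate counts are the figures quoted in
# this report; this is not BOP's cost accounting, only the implied math.
DAYS_PER_YEAR = 365

scenarios = {
    # label: (daily per capita cost in dollars, number of inmates)
    "1,987 SMU inmates at the sample SMU rate ($119.71/day)": (119.71, 1987),
    "Same inmates at the high security general population rate ($69.41/day)": (69.41, 1987),
    "435 ADX inmates at the ADX rate ($216.12/day)": (216.12, 435),
}

for label, (daily_rate, inmates) in scenarios.items():
    annual_per_inmate = daily_rate * DAYS_PER_YEAR
    annual_total = annual_per_inmate * inmates
    print(f"{label}: about ${annual_per_inmate:,.0f} per inmate per year, "
          f"${annual_total / 1e6:.0f} million total")
```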
According to a senior BOP official, BOP did not select these two sample facilities (FCI Beckley and USP Lee) because of costs but because they are "typical" medium security and high security facilities. The estimated daily costs per inmate at these two sample facilities in table 3 are lower than and not directly comparable to the system-wide average daily costs per inmate for medium and high security facilities, as shown in table 2. Please see appendix I for a description of how BOP calculated its estimated costs. According to these cost estimates that BOP provided, we estimated that the total cost of housing 1,987 inmates in SMUs in fiscal year 2012 was $87 million. If these inmates were housed in a sample BOP medium or high security facility, the total cost would have been about $42 million and $50 million, respectively. Also, given BOP estimates, we calculated that the total cost to house 435 inmates in ADX in fiscal year 2012 was about $34 million. If these inmates were housed in a medium security or high security facility, the total costs would have been about $9 million and $11 million, respectively. Moreover, the estimated cost of housing 5,318 SHU inmates at the cost estimated by BOP for the sample medium security facility, FCI Beckley, would be $152 million, which is more expensive than housing inmates in medium security general population housing units, which would cost an estimated $112 million. Similarly, the estimated cost of housing 2,701 SHU inmates at the cost estimated by BOP for the sample high security facility, USP Lee, would be $92 million, compared with housing inmates in high security general population housing units, which would cost an estimated $69 million. According to BOP officials, the use of SMUs can reduce BOP costs. The officials said that SMUs resulted in reduced assault rates and a reduction in the number of facility lockdowns. Senior BOP budget officials noted that there are significant financial costs associated with keeping disruptive inmates in the general prison population, who can cause a serious incident and lead to costly lockdowns. For example, according to BOP data, from fiscal years 2007 through 2011, lockdowns and disturbances led to losses totaling about $23 million. These officials explained that, during a lockdown, a facility has to use its entire staff to perform security and custodial duties at the expense of other duties. BOP has not assessed the extent to which all three types of segregated housing units—SHUs, SMUs, and ADX—impact institutional safety for inmates and staff. Although BOP has not completed an evaluation of the impact of segregation, BOP senior management and prison officials told us that they believed segregated housing units were effective in helping to maintain institutional safety. According to BOP officials, SMUs helped reduce assault rates BOP-wide and reduced the number of lockdowns due to conflict and violence from 149 in fiscal year 2008 to 118 in fiscal year 2010, during a period when the overall inmate population increased. BOP, however, could not provide documentation to support that these reductions resulted from the use of SMUs. Although state prison systems may not be directly comparable to BOP, there may be relevant information from efforts states have taken to reduce the number of inmates held in segregation. Five states we reviewed have reduced their reliance on segregation—Colorado, Kansas, Maine, Mississippi, and Ohio—prompted, according to state officials, by litigation and state budget cuts, among other reasons.
These states worked with external stakeholders, such as classification experts and correctional practitioners, to evaluate reasons why inmates were placed in segregation and implemented reforms that reduced the number of inmates placed in segregated housing units. After implementing segregated housing unit reforms that reduced the numbers of inmates held in segregation, officials from all five states we spoke with reported little or no adverse impact on institutional safety. While these states have not completed formal assessments of the impact of their segregated housing reforms, officials from all five states told us there had been no increase in violence after they moved inmates from segregated housing to less restrictive housing. In addition, Mississippi and Colorado reported cost savings from closing segregated housing units and reducing the administrative segregation population. For example, Colorado closed a high security facility in 2012, which state officials reported led to cost savings of nearly $5 million in fiscal year 2012 and $2.2 million in fiscal year 2013. According to Colorado officials, segregation reform efforts helped lead to the closure of this high security facility. In Mississippi, reforms in segregation also led to the closure of a supermax facility in early 2010, which Mississippi Department of Corrections officials reported saved the state nearly $6 million annually. All five states changed their criteria for placing inmates in segregated housing, which helped them reduce their segregated inmate populations. Of the five states, three—Colorado, Mississippi, and Ohio—reviewed and changed the classification for placing inmates in administrative SHUs and two—Kansas and Maine—established new or modified the criteria for placement of inmates in SMUs. For example, in 2007, Mississippi found that approximately 800 inmates (or 80 percent) did not meet its revised criteria for placement in administrative segregation. Before reforms, inmates would generally be transferred directly from admittance to administrative segregation without consideration of the inmate’s offense and would generally remain in segregation without regular review of the inmate’s status irrespective of whether the inmate had committed any serious misconduct. After implementing reforms, Mississippi adopted new criteria that stated inmates could be held in administrative segregation only if they committed serious infractions, were active high-level members of a gang, or had prior escapes or escape attempts from a secure facility. According to Mississippi officials, this reform did not lead to an increase in violence, assault rates, or serious incidents. In 2011, after a study with external stakeholders that reviewed and recommended changes to Colorado’s administrative segregation operations, Colorado revised its policies for placement of inmates in segregated housing. Subsequent to the external study’s completion, Colorado began reviewing all offenders that had been in administrative segregation for longer than 12 months and found that nearly 37 percent or about 321 inmates in administrative segregation could be moved to close custody general population. After Colorado revised its classification criteria and increased oversight of the inmate review process, the number of inmates held in segregation decreased from 60 per month in 2011 to approximately 20 to 30 per month in 2012. According to Colorado state officials, these reforms did not lead to an increase in violence. 
In addition, in 2011, Maine's Department of Corrections reformed its inmate placement policies for SMUs. After changing the criteria and classification for holding inmates in SMUs, Maine significantly reduced the number of inmates in its 132-cell SMU by closing a 50-cell section of its supermax SMU. Inmates removed from the SMU were reintegrated into a less restrictive, general population setting, and according to officials, there was no increase in incidents of violence. While the policies and procedures for segregated housing vary between states and BOP, and their experiences may not be directly comparable, there may be lessons for BOP in the states' experiences reducing their reliance on segregated housing. According to BOP officials, BOP generally uses larger states, such as California, Texas, or New York, for comparison, and the five states included in our report may not be comparable with BOP. BOP officials also told us, in response to the findings from these states, that BOP has more comprehensive classification criteria, reviews, and procedural protections than the states. As a result, they indicated that BOP might not have the same reductions in costs and inmates in SHUs found at the state level. However, without an assessment of the impact of segregated housing, BOP cannot determine the extent to which placement of inmates in segregation contributes to institutional safety and security. Such an assessment is also important to inform DOJ and congressional decision making about the extent to which segregation meets BOP's key programmatic goals for institutional safety. Our past work and the experience of leading organizations have demonstrated that measuring and evaluating performance allows organizations to track progress they are making toward intended results—including goals, objectives, and targets they expect to achieve—and gives managers critical information on which to base decisions for improving their programs. Given that BOP maintains data on assault, violence, and lockdown rates across all prison facilities, BOP senior officials reported that evaluating the relationship between assault rates and segregation might help them evaluate the impact of segregated housing. An assessment of the effectiveness of segregation, including consideration of practices across local and state correctional systems, could better position BOP to understand the extent to which different types of segregated housing units meet BOP mission goals to ensure institutional safety for inmates and staff. On January 31, 2013, BOP officials told us that the BOP Director had authorized the solicitation of an independent review of segregated housing and, once a contract is awarded, they expect the study to be completed during fiscal year 2014. BOP officials explained that the study—with the objective of identifying improvements in BOP's practice and policy—is to review segregated housing, including identifying best practices across the correctional spectrum, such as inmate management and mental health, among other areas. According to BOP, the statement of work for this solicitation requires the recipient to provide an assessment of the use and practices of segregated housing units in BOP. However, it is unclear to what extent the review will assess whether segregated housing units contribute to the safety and security of inmates and staff and ensure that BOP meets its mission goals.
BOP psychologists are required to provide an initial intake screening of each inmate within 30 days of the inmate's arrival in a BOP facility. Moreover, BOP requires that psychological staff visit inmates in segregated housing on a weekly basis and provide psychological assessments after 30 consecutive days in the SHUs, SMUs, and ADX Control and Special Security Units. According to BOP's Psychology Services Branch Administrator, these weekly visits and psychological assessments provide staff an opportunity to intervene when and if they find that an inmate is having difficulty in segregation. BOP also has a suicide prevention program, which includes training for all staff and additional supplemental training for staff working in segregation. In addition, inmates receive information upon their arrival at an institution on suicide prevention and on the availability of mental health services while in segregated housing. BOP also develops "hot list" memos that are posted in SHUs to help inform staff of inmates who may have specific mental health concerns or suicidal tendencies. While BOP conducts regular assessments of the mental health of inmates, BOP has not evaluated the impact of long-term segregation on inmates. BOP's Office of Research and Evaluation (ORE) officials said they have not studied the impact of long-term segregation on inmates because of competing priorities related to studying the impacts of prisoner reentry, drug treatment, and recidivism. ORE officials also cited methodological concerns related to finding an appropriate control group of inmates to compare with inmates held in segregation. We recognize the methodological limitations; however, a 2010 Colorado study that was funded by DOJ identified a comparison group of inmates in order to evaluate the psychological impact of segregation. According to ORE, BOP is in the early stages of a study dedicated to evaluating the impact of SMUs on offenders. BOP does not yet have an estimated completion date for the study. BOP officials, including psychologists, at four of the six facilities we visited reported little or no adverse impact of segregation on inmates. Some of these psychologists and BOP HQ officials cited the 2010 DOJ-funded study of the psychological impacts of solitary confinement in the Colorado state prison system. This study showed that segregated housing of up to 1 year may not have greater negative psychological impacts than nonsegregated housing on inmates. While the DOJ-funded study did not assess inmates in BOP facilities, BOP management officials told us this study shows that segregation has little or no adverse long-term impact on inmates. BOP's Psychology Services Branch Administrator explained that the impact is dependent on each individual inmate. For example, she told us that a small number of inmates with mental disorders, such as schizophrenia, actively seek placement in segregation, and some appear to function reasonably well in this environment. We reviewed several studies on the impact of segregated housing on inmate mental health, and several suggest that long-term segregation or solitary confinement can cause significant adverse impacts. See appendix I for information about criteria used to select studies in our review.
These reports describe possible adverse impacts of segregation, including exacerbation or recurrence of preexisting illnesses, illusions, oversensitivity to stimuli, and irrational anger, among other symptoms, although it is unclear how applicable the conditions studied are to BOP segregated housing. Other reports addressed the possible effect of segregation on other outcomes, such as recidivism or new convictions after release from prison. Few reports, however, incorporate a comparison between inmates in segregation versus inmates not in segregation, limiting the ability to draw conclusions about the impact of segregation. A comparison of inmates held in segregation with those in general population would be important for understanding the extent to which any adverse psychological impacts are unique to long-term segregation. While most BOP officials told us there was little or no clear evidence of mental health impacts from long-term segregation, BOP’s Psychology Services Manual explicitly acknowledges the potential mental health risks of inmates placed in long-term segregation. Specifically, it states that BOP “recognizes that extended periods of confinement in Administrative Detention or Disciplinary Segregation Status may have an adverse effect on the overall mental status of some individuals.” In addition, according to BOP’s mission statement, BOP protects society by confining offenders in prisons that are, among other things, safe and humane. In our prior work, we reported that DOJ stresses the importance of evidence-based knowledge in achieving its mission. Specifically, DOJ’s Office of Justice Programs (OJP) supports DOJ’s mission by sponsoring research to provide objective, independent, evidence-based knowledge to meet the challenges of crime and justice, such as the 2010 Colorado state prison system study. In addition, BOP’s ORE is responsible for conducting research and evaluation of BOP programs, but ORE has not conducted studies on the impact of long-term segregation on inmates. Further, according to generally accepted government auditing standards, managers should evaluate programs to provide external accountability for the use of public resources to understand the extent to which the program is fulfilling its objectives. To help BOP HQ assess inmates placed in segregation, BOP maintains a psychology data system (PDS) that is used to document all mental health screenings and staff visits by psychologists and treatment specialists, and a Bureau Electronic Medical Record (BEMR) that documents all staff visits by physicians and medication provided. Given that BOP’s PDS and BEMR systems maintain data on the mental health of inmates and BOP’s Psychological Services Manual states there may be potential adverse effects from long-term segregation, a study that uses existing information to assess the impact of segregation on inmates would better position BOP to understand the effects of segregation, including any related to inmates’ mental health. BOP’s Psychology Services Branch Administrator agreed that such a study would be useful. As of January 2013, BOP announced that the bureau is considering the development of procedures for conducting individualized mental health case reviews of inmates held in long-term segregation, i.e., inmates housed in SHUs or the ADX Control Unit for more than 12 continuous months and inmates who fail to progress through the SMU or ADX General Population Step Down phases in a timely manner. 
These reviews would be conducted at BOP HQ, and if the review found any concerns, the reviewers would contact prison staff to discuss strategies to reduce or eliminate the identified mental health concerns. However, the proposal is still under consideration, has not yet been implemented across all prison facilities, and we cannot determine the extent to which this proposal will systematically assess the long-term impact of segregated housing on inmates. Over the past 5 years, the number of BOP inmates in segregated housing has grown at a faster rate than the general inmate population. With more inmates held under more restrictive conditions, often for months or years at a time, segregated housing represents an important part of BOP’s effort to achieve its primary goal of confining inmates in a safe, secure, and cost-efficient environment. While BOP has a mechanism to centrally monitor many of its segregated housing unit policies, BOP does not centrally monitor the policies specific to its most restrictive segregated prison, the ADX facility. As a result, BOP has less assurance that ADX staff consistently follows ADX-specific policies to the same degree that these requirements are followed for SHUs and SMUs. We also found that prison officials were not consistently documenting that inmates’ conditions of confinement, such as food and exercise privileges, were being met. BOP has taken initial steps toward addressing these documentation issues by implementing new software that may help track the monitoring of SHUs and SMUs. However, BOP has not developed a plan to clarify the objectives and goals of the new software program, with time frames and milestones that explain the extent to which it will address documentation issues we identified. BOP officials believe that segregated housing helps maintain institutional safety. Given BOP’s increased reliance on segregated housing and the higher costs associated with its use, it is notable that BOP has not studied the impact of segregated housing on inmates, staff, and institutional safety. As BOP considers options for conducting a study of segregated housing, BOP may want to consider lessons learned from some state initiatives that reduced the number of inmates held in segregation without significant, adverse impacts on violence or assault rates. In addition, BOP’s own policies recognize that long-term segregation may have a detrimental effect on inmates. While BOP does regularly check the mental health of inmates in segregated housing, BOP has not conducted an assessment of the long-term impact of segregation on inmates. To improve BOP’s ability to centrally oversee the implementation of segregated housing policies, we recommend that the Director of the Bureau of Prisons take the following two actions: (1) develop ADX-specific monitoring requirements and (2) develop a plan that clarifies the objectives and goals of the new software program, with time frames and milestones, and other means, that explains the extent to which the software program will address documentation concerns we identified. 
To ensure that BOP's use of segregated housing furthers BOP's goal to confine inmates in a humane manner and contributes to institutional safety without having a detrimental impact on inmates held there for long periods of time, we recommend that the Director of the Bureau of Prisons take the following two actions: (1) ensure that any current study to assess segregated housing units also includes an assessment of the extent that segregated housing contributes to institutional safety, and consider key practices that include local and state efforts to reduce reliance on and the number of inmates held in segregated housing and (2) assess the impact of long-term segregation on inmates in SHUs, SMUs, and ADX. We provided a draft of this report to DOJ for its review and comment. BOP provided written comments on this draft, which are reproduced in full in appendix IV. BOP concurred with all of our recommendations. BOP also provided technical comments on the report on April 19, 2013, which we incorporated as appropriate. BOP concurred with the first recommendation that BOP develop ADX-specific monitoring requirements. BOP stated that it will conduct a Management Assessment to identify aspects of the Control Unit at ADX that are vulnerable to violations of policy. BOP further noted that it would develop guidelines, as appropriate, to be incorporated into the program review guidelines. If fully implemented across all ADX housing units, BOP's planned actions will address the intent of this recommendation. BOP concurred with the second recommendation that BOP develop a plan with time frames and milestones to explain the extent to which the software program will address documentation concerns. BOP stated that the goal of the new software program is to help ensure compliance with requirements to maintain accurate and complete records on conditions and events in segregated housing units. BOP indicated that it will conduct a program review by September 30, 2013, to determine if the SHU documentation deficiencies have been reduced. If fully implemented, BOP's planned actions will address the intent of this recommendation. BOP concurred with the third recommendation that BOP ensure any current study to assess segregated housing units also includes an assessment of the extent that segregated housing contributes to institutional safety. BOP stated that the current scope of work for the Special Housing Review and Assessment will include an assessment of how segregated housing units contribute to institutional safety. BOP further noted that the scope of work will include consideration of key practices of local and state correctional systems. If fully implemented, BOP's planned actions will address the intent of this recommendation. BOP concurred with the fourth recommendation that BOP assess the impact of long-term segregation on inmates in SHUs, SMUs, and ADX. BOP stated that the assessment of mental health of inmates is consistent with its public safety mission. BOP stated that BOP will develop and distribute an expanded mental health screening tool for psychology staff, which will help conduct a longitudinal assessment of: (1) inmates housed in SHUs or the ADX Control Unit for more than 12 continuous months; and (2) those inmates who fail to progress through the SMU or ADX General Population Step Down phases in a timely manner. In addition, BOP stated that its review of segregated housing units will include an evaluation of inmate mental health history and a review of BOP's mental health assessment process.
If fully implemented, BOP’s planned actions will address the intent of this recommendation. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Attorney General, the Director of the Bureau of Prisons, selected congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact David Maurer at (202) 512-9627 or by email at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Our objectives for this report were to address the following questions: 1. What were the trends in the Bureau of Prisons’ (BOP) segregated housing unit population and number of cells from fiscal year 2008 through February 2013? 2. To what extent does BOP centrally monitor how individual facilities document and apply policies guiding segregated housing units? 3. To what extent has BOP assessed the costs to operate segregated housing units, and how do the costs to confine an inmate in a segregated housing unit compare with the costs of confining an inmate in a general inmate population housing unit? 4. To what extent does BOP assess the impact of segregated housing on institutional safety and the impacts of long-term segregation on inmates? Overall, to address our questions, we analyzed BOP’s statutory authority and policies and procedures (e.g., BOP’s inmate placement, procedural protections, and general conditions of confinement for segregated housing units—Special Housing Units (SHU), Special Management Units (SMU), and the Florence Administrative Maximum facility (ADX)—and Communications Management Units (CMU)). BOP considers CMUs to be self-contained general population housing units. However, since CMU inmates are separated from the general inmate population and have restrictive conditions, such as 100 percent of their communications monitored and noncontact visits, we include CMUs within the scope of our review, as described in appendix II. To address the first question, we obtained and analyzed BOP’s number of cells and inmate population data for each type of segregated housing unit and the CMUs. We focused our data analysis on the period from fiscal year 2008 through February 2013, that is, the past five fiscal years through the most recent data available. We assessed the reliability of the inmate population and number of cells data by (1) participating in an electronic demonstration of the SENTRY database that BOP uses to generate the required inmate population data, (2) reviewing existing information about the data and the system that produced them, (3) examining the data for obvious errors and inconsistencies, and (4) interviewing BOP officials knowledgeable about the data. We determined that the required data elements were sufficiently reliable for the purposes of this report. To address the second question, we analyzed BOP’s policies and procedures pertinent to the monitoring of individual prisons’ compliance with segregated housing unit policies. To observe the conditions of confinement, procedural protections, and inmate placement in segregated housing, we conducted visits to 6 of 119 BOP federal institutions.
We chose these institutions because of the different types of segregated housing units and the varying security levels they contain. As shown in table 4, the six prisons we visited cover the three main types of segregated housing units—SHUs, SMUs, and ADX—as well as CMUs. During the site visits, we interviewed institutional management officials and toured the prisons to observe inmate housing, recreational areas, food service, and educational and vocational programming. We also interviewed officials from BOP’s Program Review Division (PRD), which leads monitoring reviews, and officials from BOP’s Correctional Programs Division (CPD), which has primary responsibility for inmate placement and procedural policies at segregated housing units. Because we did not visit all BOP facilities and did not randomly select the facilities we visited, our results are not generalizable to all BOP facilities. However, we selected the sites to provide perspectives on different types of segregated housing units and varying security levels, which were useful in understanding population trends, BOP monitoring of conditions of confinement and procedural policies, cost, and the impact of segregated housing. Further, for our second question, we assessed BOP’s monitoring for each type of segregated housing unit by reviewing monitoring policies, guidelines, and reports. We analyzed BOP’s segregated housing unit policies and monitoring guidance and compared them against criteria in Standards for Internal Control in the Federal Government. We also assessed the methodology and system BOP employs to monitor, identify, and address deficiencies at prisons; we reviewed 45 of 187 PRD monitoring reports from 20 of 98 facilities that PRD monitored during the period from fiscal years 2007 to 2011. We requested a selection of PRD correctional services monitoring reports, which BOP provided for a variety of facilities during this time period. In addition, we requested monitoring reports for the facilities we visited for our site visits. We also reviewed 43 follow-up monitoring reports related to the 45 monitoring reports to determine the extent to which prisons resolved deficiencies identified in the monitoring reports. We reviewed these PRD monitoring reports to summarize common findings and deficiencies relevant to our engagement related to cleanliness, conditions of confinement, documentation, procedural protection, monitoring, policy, security protocols, timeliness, and training. We developed a methodology for selecting these areas to assess the extent to which BOP monitored conditions of confinement, procedural policies, and other key issues identified in the monitoring reports. One analyst reviewed each report and highlighted any common findings and deficiencies noted in the report. A second analyst independently verified the findings and deficiencies identified. We also interviewed PRD officials responsible for conducting on-site monitoring, and we interviewed senior BOP officials who are responsible for developing monitoring policy guidance to understand the degree and methodology of monitoring used. To provide an independent analysis of BOP compliance with segregated housing unit policies at selected prisons, we developed a data collection instrument (DCI) based on BOP’s monitoring policies, guidance, and questions. Our DCI is similar to the questions used during PRD’s periodic on-site monitoring reviews of segregated housing unit policies at SHUs and SMUs and general prison policies at CMUs.
We selected two of the six institutions we visited—FCC Terre Haute and USP Marion. At each institution, we selected a random sample of case files, from fiscal years 2011 to 2012, of inmates currently housed in segregated housing units—including SHU-administrative detention, SHU-disciplinary segregation, and CMUs—totaling 61 files. These 61 inmate case files include 51 SHU inmate case files and 10 CMU inmate case files. We selected the inmate case files from SHUs using the same sample size BOP PRD inspectors use when conducting correctional services monitoring reviews of SHUs. For example, according to BOP PRD monitoring guidance for correctional services reviews of SHUs, PRD inspectors are to review documentation for 10 percent of inmates currently in the SHU to determine whether the inmates are afforded specific conditions of confinement, whether inmates’ placement and status in the SHU are regularly reviewed, and whether other SHU policies are followed. Accordingly, we selected the case files of 10 percent of inmates in SHUs at the two institutions for our analysis. According to PRD monitoring guidance for the review of disciplinary-SHU, PRD inspectors are to review 10 disciplinary hearing packets. For our review, we selected 17 disciplinary inmate case files and hearing packets because we were interested in understanding the extent to which BOP provided procedural protections for inmates held in disciplinary-SHU. We randomly selected the inmate case files from both SHUs and CMUs from a roster of inmates in each SHU or CMU at the time of our visit. Although our selection of files was not generalizable to all inmates in all types of segregated housing units, it provided insights into whether these institutions were following BOP policy. We used the DCIs to extract information relevant to BOP’s monitoring policies, inmate placement, conditions of confinement, and procedural protections for inmates held in SHU-administrative detention, SHU-disciplinary segregation, and CMUs. One analyst summarized information from each inmate case file, and a second analyst verified the DCI information collected. A third analyst reviewed and summarized information collected from the DCIs. In addition, we observed PRD staff conduct on-site monitoring of SHUs and a CMU at two facilities. We also reviewed information and documentation we received related to BOP’s new software program, which includes the SHU application, and compared it against best practices for project management and criteria in BOP’s monitoring documentation policies. For example, we reviewed implementation dates and plans and training materials used across BOP facilities, analyzed BOP monitoring policies, and interviewed PRD officials to understand the extent to which the new SHU application addresses the documentation concerns we identified during our engagement. To address the third question, we reviewed BOP fiscal year 2012 average inmate per capita costs for prisons at each major security level: high security, medium security, low security, and minimum security. These inmate per capita costs cover all costs associated with the day-to-day operation of the entire institution, including health services, uniforms, food, programming, and contractual services and equipment costs related to each prison. According to BOP, the inmate daily per capita costs are calculated as total obligations, as reported in BOP’s Salaries and Expenses appropriations account, divided by total inmate days.
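The case file sampling approach described above is essentially a random draw of roughly 10 percent of each unit’s roster. The following Python sketch illustrates one way such a selection could be made; the roster contents, the one-file minimum, and the seed are illustrative assumptions and are not drawn from BOP guidance or from GAO’s actual selection procedures.

```python
import math
import random

def select_case_files(roster, fraction=0.10, minimum=1, seed=None):
    """Randomly select a fraction of inmate case files from a unit roster.

    Mirrors the 10-percent review rate described above; the roster,
    minimum, and seed are illustrative placeholders.
    """
    rng = random.Random(seed)
    sample_size = max(minimum, math.ceil(len(roster) * fraction))
    return rng.sample(roster, min(sample_size, len(roster)))

# Example: a hypothetical SHU roster of 170 inmate identifiers.
shu_roster = [f"inmate-{i:04d}" for i in range(1, 171)]
selected = select_case_files(shu_roster, fraction=0.10, seed=1)
print(len(selected), "case files selected for review")  # 17 files
```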
Further, in January 2013, BOP provided a snapshot estimate of fiscal year 2012 inmate per capita costs broken out by segregated housing versus general population housing at four institutions: (1) USP Lewisburg, an SMU facility; (2) FCC Florence, which includes ADX Florence; (3) a sample medium security facility (FCI Beckley); and (4) a sample high security facility (USP Lee), both of which include SHUs. We interviewed BOP officials from the Administration Division, who have responsibility over financial and facility management, about their processes for developing the estimates. According to senior BOP officials, BOP selected these facilities because they considered them “typical” medium security and high security facilities. We found BOP’s segregated housing versus general population housing inmate per capita cost data to be sufficiently reliable for the purposes of presenting an overview of possible costs. For illustration purposes, we also used BOP’s estimated segregated housing versus general population housing inmate per capita cost data, combined with BOP inmate population data, to estimate the costs of housing the number of inmates in ADX, all SMUs, and all SHUs, BOP-wide, as of fiscal year 2012, compared with the costs to house the same number of inmates in general population housing units for fiscal year 2012. For example, to estimate the total costs of housing the total SMU inmate population in SMUs, BOP-wide, for fiscal year 2012, we multiplied BOP’s estimated daily inmate per capita costs for the USP Lewisburg SMU by the total SMU population times 366 days, the number of calendar days in 2012. To estimate the costs of housing this same number of SMU inmates in general population housing in a medium security or high security facility, we multiplied the total SMU population, BOP-wide, by BOP’s estimated daily inmate per capita costs for the sample medium security facility, FCI Beckley, times 366 days, and by the estimated daily inmate per capita costs for the sample high security facility, USP Lee, times 366 days, respectively. To address the fourth question, we reviewed BOP’s policies, including program objectives, for each segregated housing unit and policies governing the provision of mental health services to inmates in segregated housing units. We also reviewed BOP lockdown data from fiscal year 2008 through fiscal year 2012. In addition, we interviewed officials from BOP’s Correctional Programs Division (CPD), which includes the Psychology Services Branch that is responsible for mental health services, and officials from BOP’s Office of Research and Evaluation (ORE), who produce reports and research corrections-related topics. During these interviews, we discussed the lack of BOP studies that assess the impact of segregated housing units on institutional safety, inmates, and staff, and the officials’ views on the impact of long-term segregation on inmates, including those with mental illness. We also discussed the impacts of segregation with officials from the Council of Prison Locals, the union that represents all nonmanagement staff working in BOP facilities. To identify actions states have taken regarding segregated housing that may be relevant to BOP, we reviewed actions taken by five states—Colorado, Kansas, Maine, Mississippi, and Ohio. We selected these five states because they (1) were involved in addressing segregated housing reform and (2) had taken actions to reduce the number of inmates held in segregation.
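The per capita cost arithmetic described above reduces to two steps: a daily per capita cost computed as total obligations divided by total inmate days, and an annual housing cost computed as the daily rate times the population times the 366 calendar days in 2012. The Python sketch below works through that arithmetic; the dollar figures and population are placeholder assumptions for illustration only and are not BOP’s estimates.

```python
def daily_per_capita_cost(total_obligations, total_inmate_days):
    """Daily per capita cost: total obligations divided by total inmate days."""
    return total_obligations / total_inmate_days

def annual_housing_cost(daily_cost, population, days_in_year=366):
    """Annual cost of housing a population at a given daily per capita rate,
    following the report's approach of multiplying by the 366 days in 2012."""
    return daily_cost * population * days_in_year

# Illustrative placeholders only -- not BOP's actual figures.
smu_daily_cost = 125.00        # hypothetical SMU daily per capita cost
medium_gp_daily_cost = 75.00   # hypothetical medium security general population cost
smu_population = 1_000         # hypothetical BOP-wide SMU population

smu_cost = annual_housing_cost(smu_daily_cost, smu_population)
gp_cost = annual_housing_cost(medium_gp_daily_cost, smu_population)
print(f"Estimated SMU cost: ${smu_cost:,.0f}; general population: ${gp_cost:,.0f}")
```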
For each of the five selected states, we reviewed relevant documents on segregated housing, and in four states we reviewed placement policies. For four of the five selected states, we reviewed relevant reports on their segregated housing unit conditions for context. While conducting site visits to BOP prisons in Kansas and Colorado, we also visited state correctional facilities in those two states. We interviewed corrections officials at these facilities and in the other states regarding their reasons for reducing the segregated housing unit population and any reported impact of the segregated housing unit reforms on institutional safety. While the reports and the results from our interviews are not representative, they provided us with perspectives on state actions to reduce the use of segregated housing. There are dissimilarities between federal and state prison systems, legal and structural differences among them, that limit the comparability of federal and state correctional systems. We are unable to generalize about the types of actions other states have taken to reform segregated housing policies and reduce the number of inmates held in segregation, or about the effects of those actions. Nevertheless, the information we obtained through these visits provided examples of state responses to reforming segregation and reducing the number of inmates housed in segregated housing units. We also discussed with BOP officials the state actions we identified. Further, to identify the universe of reports and studies that describe, evaluate, or analyze the impact of segregated housing, including any long-term impacts associated with mental illness, we used a multistaged process. First, we (1) conducted key word searches of criminal justice, legal, and social science research databases; (2) searched academic, nongovernmental, and stakeholder interest group-related Web sites, such as those of Vera, the American Civil Liberties Union (ACLU), and the Urban Institute; (3) reviewed bibliographies, published summaries, meta-analyses, and prior GAO reports on segregated housing; and (4) asked academic corrections experts to identify evaluations. Our literature search identified over 150 documents, which included articles, opinion pieces, published reports, and studies related to segregated housing. We further identified studies that compared inmates in segregated housing with inmates in the general population. We reviewed these reports and studies to gain a broader understanding of the potential impacts of segregated housing and of the extent and quality of research available on the subject. We compared BOP’s mechanisms for evaluating the impact of segregated housing units on institutional safety, or the impacts of long-term segregation on inmates, with BOP’s policies and mission statements. We conducted this performance audit from January 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. BOP established CMUs in 2006 and 2008, in two institutions, to house inmates who require increased monitoring of their communications with the public to protect the safety, security, and orderly operation of BOP facilities and the public.
Inmates in CMUs have 100 percent of their communications monitored by BOP officials and are allowed only noncontact visits with family and friends. According to each prison’s institution supplement guidelines, CMUs are self-contained general population housing units in which inmates reside; eat; and participate in all educational, religious, visiting, unit management, and work programming in the unit, similar to general population inmates. From fiscal year 2008 to February 2013, the total CMU population increased from 64 inmates to 81 inmates. See figure 11 for an overview of CMUs. According to a BOP memorandum, BOP places inmates in CMUs for several reasons, including conviction of, or conduct or involvement related to, international or domestic terrorism; commission of prohibited activity related to misuse or abuse of approved communication methods while incarcerated; or other reasons. Inmates referred to CMUs do not receive a hearing prior to placement in CMUs. According to the prison’s institution supplement guidelines, an inmate assigned to a CMU is to receive a notice of transfer to the CMU within 5 days of arrival in the unit, including the reasons for placement and notice of the right to appeal the transfer through the administrative remedy process. At the institution, prison officials are to review the CMU inmate’s status every 6 months, according to BOP’s national policy that applies to all inmates in BOP custody. The guidelines also call for prison officials to regularly review an inmate’s readiness to be transferred out of a CMU by examining a number of factors, including programming needs and whether the original reasons for CMU placement still exist. After conducting the review, prison officials may recommend to the warden that an inmate be transferred out of the CMU. All CMU inmates are segregated from the general population in self-contained housing units to regulate and monitor their communications with persons in the community. However, they are allowed to congregate outside their cells, but within these self-contained housing units, for 15 to 16 hours per day, like inmates in the general population. Inmates in CMUs are subject to 100 percent live monitoring of their telephone calls and social visits and a review of their incoming and outgoing social mail. All telephone calls and social visits are also recorded, and they must occur in English only, unless the call is previously scheduled and conducted through simultaneous translation monitoring. Other than increased communications monitoring, BOP officials stated that conditions of confinement in these units are the same as conditions of confinement for inmates in other medium security general population housing units. This includes (1) access to medical and mental health services; (2) meals that meet inmate dietary requirements served in common dining areas; (3) access to recreation and leisure in a common area daily, up to 16 hours per day, including table games, television in the common areas, and some aerobic exercise equipment; (4) religious service opportunities; and (5) access to law library services. Also, like general population housing, each CMU contains a SHU dedicated to housing inmates in need of being placed in SHU-administrative detention or SHU-disciplinary segregation status. See figures 12 and 13 for photographs of a CMU.
As previously discussed, BOP headquarters has a mechanism in place to centrally monitor how prisons implement most housing unit policies, but the degree of monitoring varies depending upon the type of housing. In addition, we reviewed PRD monitoring reports, assessed how PRD conducted monitoring at one of the two prisons with CMUs, and conducted an independent analysis of BOP compliance at these two prisons. At one of the two prisons with CMUs we visited, we observed that PRD checked compliance with general prison policies, as well as SHU-specific policies, but PRD does not have requirements to monitor CMU-specific policies. CMU inmate files may be included in any PRD program review that covers the entire prison complex. According to BOP officials, although not required, BOP may randomly select some CMU inmate files as part of its review of the prison complex during periodic PRD reviews. PRD does not have requirements to monitor the CMU-specific policies found in the institution supplement guidelines. According to BOP officials, additional monitoring for CMUs is not required because they do not have the same kinds of restrictive conditions of confinement that are the subject of SHU- and SMU-specific monitoring steps. As part of our review of PRD monitoring reports, we found that 8 of the 45 monitoring reports covered these two prisons with CMUs. PRD found that these prisons were in general compliance with BOP policies, and none of these PRD monitoring reports identified any findings or deficiencies specific to the CMUs. To assess how PRD staff conducted monitoring at one of these prisons, we observed PRD conduct reviews at the CMU in accordance with PRD guidelines. Based on our observations, PRD staff (1) performed monitoring rounds at the CMU, (2) reviewed log books, and (3) reviewed inmate files to determine whether the prison followed the required procedural protection steps. In addition, we conducted an independent analysis of BOP compliance with CMU-specific policies at the two prisons with CMUs. Specifically, we reviewed a total of 10 files for inmates held in CMUs during fiscal years 2011 and 2012 at these two facilities. We found that all 10 inmate case files we analyzed provided reasons for inmate placement in CMUs, as required by BOP institution supplements. However, similar to the documentation problems we noted in the body of this report, we found documentation deficiencies during our review of the CMU files. For example, 2 of the 10 inmate case files we reviewed did not include documentation that unit team staff regularly monitored the inmate’s CMU status every 6 months and ensured that inmates were afforded their rights to programming activities. Without complete documentation, BOP headquarters cannot be assured that inmates in CMUs are receiving the procedural protections and conditions of confinement to which they are entitled, as stated in BOP policy and institution supplements. BOP has segregated housing units in prisons located throughout the country. For example, BOP has SHUs in 109 of its 119 facilities. Three facilities have SMUs. See figure 14 for a map of the locations of each type of segregated housing unit. According to BOP, the length of stay inmates serve in segregated housing units varies, and BOP does not track an inmate’s total length of stay or establish a maximum length of stay for inmates in any type of segregated housing unit.
An inmate’s length of stay in segregated housing varies depending on the inmate’s program needs and status, reason for placement, and behavior while in the unit. BOP policy provides the expected length of stay for some segregated housing units. For example, according to BOP officials, placement of inmates in SHUs is intended to be temporary. Inmates may be sanctioned to 1 to 18 months in a SHU for disciplinary reasons, depending on the severity of the infraction. Also, BOP policy states that inmates placed in SMUs, the ADX Step Down Units, and the ADX Special Security Unit may participate in structured, phased programs through which they can progress, or “step down,” to the general population after approximately 18 to 36 months if they maintain good behavior. However, according to BOP officials, an inmate may remain in any of the segregated housing units if the inmate continues to be disruptive or if BOP officials determine through the review process that the inmate’s original reason for placement still exists. In addition to the contact named above, Ned George, Assistant Director; Pedro Almoguera; Lori Achman; Carla Brown; Jennifer Bryant; Frances Cook; Michele Fejfar; Eric Hauswirth; Lara Miklozek; Linda Miller; Jessica Orr; Meghan Squires; Helene Toiv; and Yee Wong made key contributions to this report.
BOP confines about 7 percent of its 217,000 inmates in segregated housing units for about 23 hours a day. Inmates are held in SHUs, SMUs, and ADX. GAO was asked to review BOP's segregated housing unit practices. This report addresses, among other things: (1) the trends in BOP's segregated housing population, (2) the extent to which BOP centrally monitors how prisons apply segregated housing policies, and (3) the extent to which BOP assessed the impact of segregated housing on institutional safety and inmates. GAO analyzed BOP's policies for compliance and analyzed population trends from fiscal year 2008 through February 2013. GAO visited six federal prisons selected for different segregated housing units and security levels, and reviewed 61 inmate case files and 45 monitoring reports. The results are not generalizable, but provide information on segregated housing units. The overall number of inmates in the Bureau of Prisons' (BOP) three main types of segregated housing units--Special Housing Units (SHU), Special Management Units (SMU), and Administrative Maximum (ADX)--increased at a faster rate than the general inmate population. Inmates may be placed in SHUs for administrative reasons, such as pending transfer to another prison, and for disciplinary reasons, such as violating prison rules; SMUs, a four-phased program in which inmates can progress from more to less restrictive conditions; or ADX, for inmates that require the highest level of security. From fiscal year 2008 through February 2013, the total inmate population in segregated housing units increased approximately 17 percent--from 10,659 to 12,460 inmates. By comparison, the total inmate population in BOP facilities increased by about 6 percent during this period. BOP has a mechanism to centrally monitor segregated housing, but the degree of monitoring varies by unit type and GAO found incomplete documentation of monitoring at select prisons. BOP headquarters lacks the same degree of oversight of ADX-specific conditions of confinement compared with SHUs and SMUs partly because ADX policies are monitored locally by ADX officials. Developing specific requirements for ADX could provide BOP with additional assurance that inmates held at ADX are afforded their minimum conditions of confinement and procedural protections. According to a selection of monitoring reports and inmate case files, GAO also identified documentation concerns related to conditions of confinement and procedural protections, such as ensuring that inmates received all their meals and exercise as required. According to BOP officials, in December 2012, all SHUs and SMUs began using a new software program that could improve the ability to document conditions of confinement in SHUs and SMUs. However, BOP officials acknowledged the recently implemented software program may not address all the deficiencies GAO identified. Since BOP could not provide evidence that it addressed the documentation deficiencies, GAO cannot determine if it will mitigate the documentation concerns. BOP expects to complete a review of the new software program by approximately September 30, 2013, which should help determine the extent to which the software program addresses documentation deficiencies GAO identified. BOP has not assessed the impact of segregated housing on institutional safety or the impacts of long-term segregation on inmates. 
In January 2013, BOP authorized a study of segregated housing; however, it is unclear whether the study will assess the extent to which segregated housing units contribute to institutional safety. As of January 2013, BOP is considering conducting mental health case reviews for inmates held in SHUs or ADX for more than 12 continuous months. However, without an assessment of the impact of segregation on institutional safety or a study of the long-term impact of segregated housing on inmates, BOP cannot determine the extent to which segregated housing achieves its stated purpose of protecting inmates, staff, and the general public. GAO recommends that BOP (1) develop ADX-specific monitoring requirements; (2) develop a plan that clarifies how BOP will address, through the new software program, the documentation concerns GAO identified; (3) ensure that any current study to assess segregated housing also includes reviews of its impact on institutional safety; and (4) assess the impact of long-term segregation. BOP agreed with these recommendations and reported it would take actions to address them.
Like financial institutions, credit card companies, telecommunications firms, and other private sector companies that take steps to protect customers’ accounts, CMS uses information technology to help detect cases of improper claims and payments. For more than a decade, the agency and its contractors have used automated software tools to analyze data from various sources to detect patterns of unusual activities or financial transactions that indicate payments could have been made for fraudulent charges or improper payments. For example, to identify unusual billing patterns and support investigations and prosecutions of cases, analysts and investigators access information about key actions taken to process claims as they are filed and the specific details about claims already paid. This would include information on claims as they are billed, adjusted, and paid or denied; check numbers on payments of claims; and other specific information that could help establish provider intent. CMS uses many different means to store and manipulate data and, since the establishment of the agency’s program integrity initiatives in the 1990s, has built multiple, disparate databases and analytical software tools to meet the individual and unique needs of various programs within the agency. In addition, data on Medicaid claims are stored by the states in multiple systems and databases, and are not readily available to CMS. According to agency program documentation, these geographically distributed, regional approaches to data storage result in duplicate data and limit the agency’s ability to conduct analyses of data on a nationwide basis. As a result, CMS has been working for most of the past decade to consolidate its databases and analytical tools. In 2006, CMS officials expanded the scope of a 3-year-old data modernization strategy to not only modernize data storage technology, but also to integrate Medicare and Medicaid data into a centralized repository so that CMS and its partners could access the data from a single source. They called the expanded program IDR. According to program officials, the agency’s vision was for IDR to become the single repository for CMS’s data and enable data analysis within and across programs. Specifically, this repository was to establish the infrastructure for storing data related to Medicaid and Medicare Parts A, B, and D claims processing, as well as a variety of other agency functions, such as program management, research, analytics, and business intelligence. CMS envisioned an incremental approach to incorporating data into IDR. Specifically, it intended to incorporate data related to paid claims for all Medicare Part D data by the end of fiscal year 2006, and for Medicare Parts A and B data by the end of fiscal year 2007. The agency also planned to begin to incrementally add all Medicaid data for the 50 states in fiscal year 2009 and to complete this effort by the end of fiscal year 2012. Initial program plans and schedules also included the incorporation of additional data from legacy CMS claims-processing systems that store and process data related to the entry, correction, and adjustment of claims as they are being processed, along with detailed financial data related to paid claims. According to program officials, these data, called “shared systems” data, are needed to support the agency’s plans to incorporate tools to conduct predictive analysis of claims as they are being processed, helping to prevent improper payments. 
Shared systems data, such as check numbers and amounts related to claims that have been paid, are also needed by law enforcement agencies to help with fraud investigations. CMS initially planned to have all the shared systems data included in IDR by July 2008. Also in 2006, CMS initiated the One PI program with the intention of developing and implementing a portal and software tools that would enable access to and analysis of claims, provider, and beneficiary data from a centralized source. The agency’s goal for One PI was to support the needs of a broad program integrity user community, including agency program integrity personnel and contractors who analyze Medicare claims data, along with state agencies that monitor Medicaid claims. To achieve its goal, agency officials planned to implement a tool set that would provide a single source of information to enable consistent, reliable, and timely analyses and improve the agency’s ability to detect fraud, waste, and abuse. These tools were to be used to gather data from IDR about beneficiaries, providers, and procedures and, combined with other data, find billing aberrancies or outliers. For example, an analyst could use software tools to identify potentially fraudulent trends in ambulance services by gathering the data about claims for ambulance services and medical treatments, and then use other software to determine associations between the two types of services. If the analyst found claims for ambulance travel costs but no corresponding claims for medical treatment, it might indicate that further investigation could prove that the billings for those services were fraudulent. According to agency program planning documentation, the One PI system was also to be developed incrementally to provide access to IDR data, analytical tools, and portal functionality. CMS planned to implement the One PI portal and two analytical tools for use by program integrity analysts on a widespread basis by the end of fiscal year 2009. The agency engaged contractors to develop the system. IDR has been in use by CMS and contractor program integrity analysts since September 2006 and currently incorporates data related to claims for reimbursement of services under Medicare Parts A, B, and D. According to program officials, the integration of these data into IDR established a centralized source of data previously accessed from multiple disparate system files. However, although the agency has been incorporating data from various sources since 2006, IDR does not yet include all the data that were planned to be incorporated by the end of 2010 and that are needed to support enhanced program integrity initiatives. Specifically, although initial program integrity requirements included the incorporation of the shared systems data by July 2008, these data have not yet been added to IDR. As such, analysts are not able to access certain data from IDR that would help them identify and prevent payment of fraudulent claims. According to IDR program officials, the shared systems data were not incorporated as planned because funding for the development of the software and acquisition of the hardware needed to meet this requirement was not approved until the summer of 2010. Since then, IDR program officials have developed project plans and identified user requirements, and told us that they plan to incorporate shared systems data by November 2011. 
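The ambulance example above amounts to joining two sets of claims and flagging records in one set that lack a counterpart in the other. The Python sketch below illustrates that kind of association check; the record layout, field names, and values are illustrative assumptions and do not reflect IDR’s actual schema or One PI’s analytical tools.

```python
from datetime import date

# Illustrative claim records -- field names and values are hypothetical.
ambulance_claims = [
    {"beneficiary_id": "B001", "service_date": date(2011, 3, 2)},
    {"beneficiary_id": "B002", "service_date": date(2011, 3, 5)},
]
treatment_claims = [
    {"beneficiary_id": "B001", "service_date": date(2011, 3, 2)},
]

def flag_unmatched_ambulance_claims(ambulance, treatments):
    """Flag ambulance claims with no corresponding treatment claim for the
    same beneficiary on the same service date -- the simple association
    check described in the example above."""
    treated = {(t["beneficiary_id"], t["service_date"]) for t in treatments}
    return [a for a in ambulance
            if (a["beneficiary_id"], a["service_date"]) not in treated]

for claim in flag_unmatched_ambulance_claims(ambulance_claims, treatment_claims):
    print("Possible aberrancy; refer for further review:", claim)
```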
In addition, IDR does not yet include the Medicaid data that are critical to analysts’ ability to detect fraud, waste, and abuse in this program. While program officials initially planned to incorporate 20 states’ Medicaid data into IDR by the end of fiscal year 2010, the agency had not incorporated any of these data into the repository as of May 25, 2011. Program officials told us that the original plans and schedules for obtaining Medicaid data did not account for the lack of funding for states to provide Medicaid data to CMS, or the variations in the types and formats of data stored in disparate state Medicaid systems. Consequently, the officials were not able to collect the data from the states as easily as they expected and did not complete this activity as originally planned. In December 2009, CMS initiated another agencywide program intended to, among other things, identify ways to collect Medicaid data from the many disparate state systems and incorporate the data into a single data store. As envisioned by CMS, this program, the Medicaid and Children’s Health Insurance Program Business Information and Solutions (MACBIS) program, is to include activities in addition to providing expedited access to current data from state Medicaid programs. According to agency planning documentation, as a result of efforts to be initiated under the MACBIS program, CMS expects to incorporate Medicaid data for all 50 states into IDR by the end of fiscal year 2014. This enterprisewide initiative is expected to cost about $400 million through fiscal year 2016. However, program officials have not defined plans and reliable schedules for incorporating the additional data into IDR that are needed to support the agency’s program integrity goals. Yet, doing so is essential to ensuring that CMS does not repeat mistakes of the past that stand to jeopardize the overall success of its current efforts. In this regard, more than a decade ago, we reported on the agency’s efforts to replace multiple claims processing systems with a single, unified system. Among other things, that system was intended to provide an integrated database to help the agency in identifying fraud and abuse. However, as the system was being developed, we reported repeatedly that the agency was not applying effective investment management practices to its planning and management of the project. Further, we reported that the agency had no assurance that the project would be cost-effective, delivered within estimated timeframes, or even improve the processing of Medicare claims. Lacking these vital project management elements, CMS subsequently halted that troubled initiative without delivering the intended system—after investing more than $80 million over 3-and-a-half years. Until the agency defines plans and reliable schedules for incorporating the additional data into IDR, it cannot ensure that current development, implementation, and deployment efforts will provide the data and technical capabilities needed to enhance CMS’s efforts to detect potential cases of fraud, waste, and abuse. Beyond the IDR initiative, CMS program integrity officials have not yet taken appropriate actions to ensure the use of One PI on a widespread basis for program integrity purposes. According to program officials, the system was deployed in September 2009 as originally planned and consisted of a portal that provided Web-based access to software tools used by CMS and contractor analysts to retrieve and analyze data stored in IDR. 
As currently implemented, the system provides access to two analytical tools. One tool is a commercial off-the-shelf decision support tool that is used to perform data analysis to, for example, detect patterns of activities that may identify or confirm suspected cases of fraud, waste, or abuse. The second tool provides users with extended capabilities to perform more complex analyses of data. For example, it allows the user to customize and create ad hoc queries of claims data across the different parts of the Medicare program. However, while program officials deployed the One PI portal and two analytical tools, the system is not being used as widely as planned because CMS and contractor analysts have not received the necessary training for its use. In this regard, program planning documentation from August 2009 indicated that One PI program officials had planned for 639 analysts to be trained and using the system by the end of fiscal year 2010; however, CMS confirmed that by the end of October 2010, only 42 of those intended users had been trained to use One PI, with 41 actively using the portal and tools. These users represent fewer than 7 percent of the users originally intended for the program. Program officials responsible for implementing the system acknowledged that their initial training plans and efforts had been insufficient and that they had consequently initiated activities and redirected resources to redesign the One PI training plan in April 2010; they began to implement the new training program in July of that year. As of May 25, 2011, One PI officials told us that 62 additional analysts had signed up to be trained in 2011 and that the number of training classes for One PI had been increased from two to four per month. Agency officials, in commenting on our report, stated that since January 2011, 58 new users had been trained; however, they did not identify an increase in the number of actual users of the system. Nonetheless, while these activities indicate some progress toward increasing the number of One PI users, the number of users expected to be trained and to begin using the system represents a small fraction of the population of 639 intended users. Moreover, as of late May 2011, One PI program officials had not yet made detailed plans and developed schedules for completing training of all the intended users. Agency officials concurred with our conclusion that CMS needs to take more aggressive steps to ensure that its broad community of analysts is trained. Until it does so, the use of One PI may remain limited to a much smaller group of users than the agency intended, and CMS will continue to face obstacles in its efforts to deploy One PI for widespread use throughout its community of program integrity analysts. Because IDR and One PI are not being used as planned, CMS officials are not yet in a position to determine the extent to which the systems are providing financial benefits or supporting the agency’s initiatives to meet program integrity goals and objectives. As we have reported, agencies should forecast expected benefits and then measure actual financial benefits accrued through the implementation of information technology programs. Further, the Office of Management and Budget (OMB) requires agencies to report progress against performance measures and targets for meeting them that reflect the goals and objectives of the programs. 
To do this, performance measures should be outcome-based and developed with stakeholder input, and program performance must be monitored, measured, and compared to expected results so that agency officials are able to determine the extent to which goals and objectives are being met. In addition, industry experts describe the need for performance measures to be developed with stakeholders’ input early in a project’s planning process to provide a central management and planning tool and to monitor the performance of the project against plans and stakeholders’ needs. While CMS has shown some progress toward meeting the programs’ goals of providing a centralized data repository and enhanced analytical capabilities for detecting improper payments due to fraud, waste, and abuse, the current implementation of IDR and One PI does not position the agency to identify, measure, and track financial benefits realized from reductions in improper payments as a result of the implementation of either system. For example, program officials stated that they had developed estimates of financial benefits expected to be realized through the use of IDR. The most recent projection of total financial benefits was reported to be $187 million, based on estimates of the amount of improper payments the agency expected to recover as a result of analyzing data provided by IDR. With estimated life-cycle program costs of $90 million through fiscal year 2018, the resulting net benefit expected from implementing IDR was projected to be $97 million. However, as of March 2011, program officials had not identified actual financial benefits of implementing IDR. Further, program officials’ projection of financial benefits expected as a result of implementing One PI was most recently reported to be approximately $21 billion. This estimate was increased from initial expectations based on assumptions that accelerated plans to integrate Medicare and Medicaid data into IDR would enable One PI users to identify increasing numbers of improper payments sooner than previously estimated, thus allowing the agency to recover more funds that have been lost due to payment errors. However, the current implementation of One PI has not yet produced outcomes that position the agency to identify or measure financial benefits. CMS officials stated at the end of fiscal year 2010—more than a year after deploying One PI—that it was too early to determine whether the program has provided any financial benefits. They explained that, since the program had not met its goal for widespread use of One PI, there were not enough data available to quantify financial benefits attributable to the use of the system. These officials said that as the user community is expanded, they expect to be able to begin to identify and measure financial and other benefits of using the system. In addition, program officials have not developed and tracked outcome-based performance measures to help ensure that efforts to implement One PI and IDR meet the agency’s goals and objectives for improving the results of its program integrity initiatives. For example, outcome-based measures for the programs would indicate improvements to the agency’s ability to recover funds lost because of improper payments of fraudulent claims.
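The projected net benefit cited above follows directly from the reported figures: expected financial benefits less estimated life-cycle costs. A minimal Python sketch of that arithmetic, using the IDR figures from this statement (in millions of dollars), is shown below.

```python
def projected_net_benefit(expected_benefits, life_cycle_costs):
    """Projected net benefit: expected financial benefits less life-cycle program costs."""
    return expected_benefits - life_cycle_costs

# IDR figures reported in this statement, in millions of dollars.
idr_net = projected_net_benefit(expected_benefits=187, life_cycle_costs=90)
print(f"Projected IDR net benefit: ${idr_net} million")  # $97 million
```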
However, while program officials defined and reported to OMB performance targets for IDR related to some of the program’s goals, those targets do not reflect the goal of the program to provide a single source of Medicare and Medicaid data that supports enhanced program integrity efforts. Additionally, CMS officials have not developed quantifiable measures for meeting the One PI program’s goals. For example, performance measures and targets for One PI include increases in the detection of improper payments for Medicare Parts A and B claims. However, the limited use of the system has not generated enough data to quantify the amount of funds recovered from improper payments. Because it lacks meaningful outcome-based performance measures and sufficient data for tracking progress toward meeting performance targets, CMS does not have the information needed to ensure that the systems are useful to the extent that benefits realized from their implementation help the agency meet program integrity goals. Further, until CMS is better positioned to identify and measure financial benefits and establishes outcome-based performance measures to help gauge progress toward meeting program integrity goals, it cannot be assured that the systems will contribute to improvements in CMS’s ability to detect fraud, waste, and abuse in the Medicare and Medicaid programs, and prevent or recover billions of dollars lost to improper payments of claims. Given the critical need for CMS to improve the management of and reduce improper payments within the Medicare and Medicaid programs, our report being released today recommends a number of actions that we consider vital to helping CMS achieve more widespread use of IDR and One PI for program integrity purposes. Specifically, we are recommending that the Administrator of CMS (1) finalize plans and develop schedules for incorporating additional data into IDR that identify all resources and activities needed to complete tasks and that consider risks and obstacles to the IDR program; (2) implement and manage plans for incorporating data in IDR to meet schedule milestones; (3) establish plans and reliable schedules for training all program integrity analysts intended to use One PI; (4) establish and communicate deadlines for program integrity contractors to complete training and use One PI in their work; (5) conduct training in accordance with plans and established deadlines to ensure schedules are met and program integrity contractors are trained and able to meet requirements for using One PI; (6) define any measurable financial benefits expected from the implementation of IDR and One PI; and (7) with stakeholder input, establish measurable, outcome-based performance measures for IDR and One PI that gauge progress toward meeting program goals. In commenting on a draft of our report, CMS agreed with these recommendations and indicated that it plans to take steps to address the challenges and problems that we identified during our study. In summary, CMS’s success toward meeting its goals to enhance program integrity will depend upon the agency’s incorporation of all needed data into IDR as well as the effective use of the systems by the agency’s broad community of program integrity analysts. In addition, a vital step will be the identification of measurable financial benefits and performance goals expected to be attained through improvements in the agency’s ability to prevent and detect fraudulent, wasteful, and abusive claims and resulting improper payments.
In taking these steps, the agency will better position itself to determine whether these systems are useful for enhancing CMS’s ability to identify fraud, waste, and abuse and, consequently, reduce the loss of funds resulting from improper payments of Medicare and Medicaid claims. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions you or other Members of the Subcommittee may have. If you have questions concerning this statement, please contact Joel C. Willemssen, Managing Director, Information Technology Team, at (202) 512-6253 or [email protected]; or Valerie C. Melvin, Director, Information Management and Human Capital Issues, at (202) 512-6304 or [email protected]. Other individuals who made key contributions include Teresa F. Tucker (Assistant Director), Sheila K. Avruch (Assistant Director), April W. Brantley, Clayton Brisson, Neil J. Doherty, Amanda C. Gill, Nancy Glover, Kendrick M. Johnson, Lee A. McCracken, Terry L. Richardson, Karen A. Richey, and Stacey L. Steele.
This testimony discusses the Centers for Medicare and Medicaid Services' (CMS) efforts to protect the integrity of the Medicare and Medicaid programs, particularly through the use of information technology to help improve the detection of fraud, waste, and abuse in these programs. CMS is responsible for administering the Medicare and Medicaid programs and leading efforts to reduce improper payments of claims for medical treatment, services, and equipment. Improper payments are overpayments or underpayments that should not have been made or were made in an incorrect amount; they may be due to errors, such as the inadvertent submission of duplicate claims for the same service, or misconduct, such as fraud or abuse. The Department of Health and Human Services reported about $70 billion in improper payments in the Medicare and Medicaid programs in fiscal year 2010. Operating within the Department of Health and Human Services, CMS conducts reviews to prevent improper payments before claims are paid and to detect claims that were paid in error. These activities are predominantly carried out by contractors who, along with CMS personnel, use various information technology solutions to consolidate and analyze data to help identify the improper payment of claims. For example, these program integrity analysts may use software tools to access data about claims and then use those data to identify patterns of unusual activities by matching services with patients' diagnoses. In 2006, CMS initiated activities to centralize and make more accessible the data needed to conduct these analyses and to improve the analytical tools available to its own and contractor analysts. At the Subcommittee's request, we have been reviewing two of these initiatives--the Integrated Data Repository (IDR), which is intended to provide a single source of data related to Medicare and Medicaid claims, and the One Program Integrity (One PI) system, a Web-based portal and suite of analytical software tools used to extract data from IDR and enable complex analyses of these data. According to CMS officials responsible for developing and implementing IDR and One PI, the agency had spent approximately $161 million on these initiatives by the end of fiscal year 2010. This testimony, in conjunction with a report that we are releasing today, summarizes the results of our study--which specifically assessed the extent to which IDR and One PI have been developed and implemented and CMS's progress toward achieving its goals and objectives for using these systems to detect fraud, waste, and abuse. In 2006, CMS initiated the One PI program with the intention of developing and implementing a portal and software tools that would enable access to and analysis of claims, provider, and beneficiary data from a centralized source. The agency's goal for One PI was to support the needs of a broad program integrity user community, including agency program integrity personnel and contractors who analyze Medicare claims data, along with state agencies that monitor Medicaid claims. To achieve its goal, agency officials planned to implement a tool set that would provide a single source of information to enable consistent, reliable, and timely analyses and improve the agency's ability to detect fraud, waste, and abuse. These tools were to be used to gather data from IDR about beneficiaries, providers, and procedures and, combined with other data, find billing aberrancies or outliers. 
For example, an analyst could use software tools to identify potentially fraudulent trends in ambulance services by gathering the data about claims for ambulance services and medical treatments, and then use other software to determine associations between the two types of services. If the analyst found claims for ambulance travel costs but no corresponding claims for medical treatment, it might indicate that further investigation could prove that the billings for those services were fraudulent. According to agency program planning documentation, the One PI system was also to be developed incrementally to provide access to IDR data, analytical tools, and portal functionality. CMS planned to implement the One PI portal and two analytical tools for use by program integrity analysts on a widespread basis by the end of fiscal year 2009. The agency engaged contractors to develop the system. While CMS has shown some progress toward meeting the programs' goals of providing a centralized data repository and enhanced analytical capabilities for detecting improper payments due to fraud, waste, and abuse, the current implementation of IDR and One PI does not position the agency to identify, measure, and track financial benefits realized from reductions in improper payments as a result of the implementation of either system. For example, program officials stated that they had developed estimates of financial benefits expected to be realized through the use of IDR. The most recent projection of total financial benefits was reported to be $187 million, based on estimates of the amount of improper payments the agency expected to recover as a result of analyzing data provided by IDR. With estimated life-cycle program costs of $90 million through fiscal year 2018, the resulting net benefit expected from implementing IDR was projected to be $97 million. However, as of March 2011, program officials had not identified actual financial benefits of implementing IDR.
A complete and accurate address list is the cornerstone of a successful census, because it identifies all living quarters that are to receive a census questionnaire and serves as the control mechanism for following up with living quarters that do not respond. If the address list is inaccurate, people can be missed, counted more than once, or included in the wrong locations. The Master Address File (MAF) is intended to be a complete and current list of all addresses and locations where people live or potentially live. The Topologically Integrated Geographic Encoding and Referencing (TIGER) database is a mapping system that identifies all visible geographic features, such as type and location of streets, housing units, rivers, and railroads. The Bureau’s approach to building complete and accurate address lists and maps consists of a series of operations that sometimes overlap and are conducted over several years. These operations include partnerships with the U.S. Postal Service and other federal agencies; state, local, and tribal governments; local planning organizations; the private sector; and nongovernmental entities. One such operation is the Bureau’s LUCA Program. The LUCA Program is mandated by the Census Address List Improvement Act of 1994, which expanded the methods the Bureau uses to exchange information with tribal, state, and local governments in order to support its overall residential address list development and improvement process. The LUCA Program is a decennial census geographic partnership program that allows participants to contribute to complete enumeration of their jurisdictions by reviewing, commenting on, and providing updated information on the list of addresses and maps that the Bureau will use to deliver questionnaires within those communities. The LUCA Program was first implemented for the 2000 Census; under the program, the Bureau is authorized (prior to the decennial census) to share individual residential addresses with officials of tribal, state, and local governments who agree to protect the confidentiality of the information. According to Bureau officials, one reason that participation in the LUCA Program is important is that local government officials may be better positioned to identify some housing units that are hard to find or are hidden because of their knowledge of or access to data in their jurisdictions. For example, local governments may have alternate sources of address information (such as utility bills, tax records, information from housing or zoning officials, or 911 emergency systems), which can help the Bureau build a complete and accurate address list. In addition, according to Bureau officials, providing local governments with opportunities to actively participate in the development of the MAF/TIGER database can have the added benefit for the Bureau of building local governments’ understanding of and support for the census. Local governments have key roles in ensuring a successful census—not just in developing the address list, but during subsequent operations as well, especially those designed to boost public participation in the census. Of the 39,051 entities—such as cities and counties—eligible for the 2000 LUCA Program, 18,333 (47 percent) agreed to participate. Subsequently, for 2010, the Bureau has sent LUCA advance notification letters to approximately 40,000 entities and has set a participation goal of 60 percent.
After localities that opted to participate in the LUCA Program have submitted their updated maps and address lists, the Bureau conducts a field check called address canvassing. At that time, the address canvassers—using handheld computers equipped with a global positioning system (GPS)—will go door to door updating the Census 2010 address list, verifying the information localities provided to the Bureau during the LUCA Program, adding any additional addresses they find, and making other needed corrections to the address list and maps. The address canvassing operation will ensure that all addresses submitted during the LUCA Program actually exist and that they are assigned to the correct census block. In preparation for the 2010 Census, both the LUCA Program and the subsequent address canvassing operation will be tested as part of the Bureau’s Dress Rehearsal. The 2008 Census Dress Rehearsal is taking place in San Joaquin County, California, and nine counties in the Fayetteville, North Carolina, area (see figs. 1 and 2). The Bureau states that the Dress Rehearsal will help ensure a more accurate and cost-effective 2010 Census by demonstrating the methods to be used in the nation’s decennial headcount, and that the main goal of the Dress Rehearsal is to fine-tune the various operations planned for the decennial census in 2010 under as close to census-like conditions as possible. According to the Bureau, the Dress Rehearsal sites provide a comprehensive environment for demonstrating and refining planned 2010 Census operations and activities, such as the use of GPS-equipped handheld computers. This report is the latest of several studies we have issued on the 2010 Census. See Related GAO Products at the end of this report for a list of selected products we have issued to date. The Bureau has completed nearly all planned operations for the LUCA Dress Rehearsal in accordance with the LUCA Dress Rehearsal timeline (see fig. 3). The only components that are not yet completed are address canvassing (which is scheduled to take place from April through June 2007) and the Dress Rehearsal participants’ review of feedback materials regarding their submissions (which is scheduled to take place from December 2007 through January 2008). The Bureau met the first date on its timeline when it sent out the LUCA advance notification letters and informational materials to the highest elected officials in February 2006. The Bureau sent out the official invitation to localities, provided participant training, and shipped LUCA materials on schedule. Additionally, localities reviewed and updated LUCA materials within the June to October 2006 period specified on the timeline. Most recently, the Bureau finished its review of participants’ LUCA submissions and updated the MAF/TIGER geographic database in December 2006. Bureau officials state that they expect to meet the dates on the timeline for the remaining component—address canvassing. It is important to note that while the Bureau met the time frames listed in its published LUCA Dress Rehearsal timeline, some activities were not included in that timeline. For example, plans to test the newly developed MAF/TIGER Partnership Software (MTPS), which is intended to assist participating localities in their 2010 LUCA reviews, and to test the new computer-based LUCA training were not included in the Bureau’s LUCA Dress Rehearsal schedule—precluding the opportunity to test these software products under census-like conditions. The 2010 LUCA Program is now under way.
In January and February 2007, the Bureau sent advance notification letters for the 2010 LUCA Program to the highest elected officials in each of the eligible localities. Bureau officials expect to meet the remaining dates listed on the published timeline (see fig. 4). The Bureau has modified the 2010 LUCA Program to address issues stemming from the 2000 experience but faces new challenges with the program. To reduce the workload and burden on LUCA participants, the Bureau provided a longer period for reviewing and updating LUCA materials; provided options to submit materials for the LUCA Program; combined the collection of addresses from two separate operations into one integrated and sequential operation; and created MTPS, which is designed to assist LUCA participants in reviewing and updating address and map data. However, the Bureau tested MTPS with only one potential user for the 2010 LUCA Program, and did not test MTPS with any localities during the LUCA Dress Rehearsal. In addition, many participants experienced problems with converting Bureau-provided address files. Further, the Bureau has planned modified training for the 2010 LUCA Program, but the Bureau did not test each of these modifications in the LUCA Dress Rehearsal. Finally, although the Bureau will likely plan to assess the contribution that the LUCA Program makes to address counts, the Bureau does not have a plan to assess the contribution that the program makes to population counts. Such analysis would provide a measure of the ultimate impact of the LUCA Program on achieving a complete count of the population. Also, the Bureau has not collected the information needed to fully measure LUCA participation rates and is therefore limited in its ability to assess the cost and benefits of the LUCA Program to the Bureau. Without this information, the Bureau may not be able to fully measure the extent to which local review contributed to the MAF database and the census population count. Moreover, an additional improvement to the LUCA Program that the Bureau cited was the agency’s expansion of direct LUCA participation to state governments. The Bureau noted that allowing states to participate directly can fill the gap when local governments do not participate because of a lack of resources or technical challenges. Studies by us, NRC, and others highlighted concerns with the burden and workload placed on participants in the 2000 LUCA Program. In testimony given before the Subcommittee on the Census, House Committee on Government Reform in September 1999, we noted that LUCA may have stretched the resources of local governments and that the workload was greater than most local governments had expected. According to a report contracted by the Bureau, two reasons cited by localities for not participating in the 2000 LUCA Program were the volume of work required and the lack of sufficient personnel to conduct the LUCA review. Recognizing that not all localities have the resources to participate effectively in the LUCA Program within imposed time constraints, the Bureau made several changes to the program. First, the Bureau provided a longer review period for LUCA participants. In 2004, NRC reported on the 2000 LUCA experience and concluded that the Bureau should clearly articulate realistic schedules for the periods when localities can review and update LUCA materials. 
Concurrently, the Bureau itself recommended that it allow sufficient time for participants to complete LUCA updates before the Bureau begins address canvassing activities. As a result, the Bureau extended the review period for LUCA Program participants from 90 to 120 calendar days. The implementation of the review extension was well received by LUCA Dress Rehearsal participants; the majority of respondents to our survey of LUCA Dress Rehearsal participants indicated that 120 days allowed adequate time to complete the LUCA review (see fig. 5). Second, the Bureau provided localities with options for how they may participate in the LUCA Program, as recommended in a 2002 contractor study of the program. Specifically, the Bureau now provides three options for how localities can submit address and map information to the Bureau: (1) full address list review with count review, (2) Title 13 local address list submission, and (3) non-Title 13 local address list submission (see fig. 6). The three options differ in the level of review of Bureau materials by participating localities and in requirements to adhere to rules concerning confidentiality of information. For options one or two, participants may use MTPS to assist in their reviews. Our survey of LUCA Dress Rehearsal participants found that the majority of localities were satisfied with the participation options provided by the Bureau (see fig. 7). Third, the Bureau combined the collection of addresses from two separate operations for city-style and non-city-style addresses into one integrated and sequential operation. In a 2004 report, NRC suggested that the Bureau coordinate efforts related to the decennial census so that the LUCA Program and other Bureau programs would not be unduly redundant and burdensome to localities. Based on complaints about the multiphased LUCA Program from the 2000 experience (where some participants found the two separate operations confusing), the Bureau designed the 2010 LUCA Program to be a single review operation for all addresses. Bureau officials also told us that the combined LUCA operation would be fully integrated with the decennial census schedule with address canvassing. As a result of the Bureau’s efforts, localities could face a reduced burden, and participation in the 2010 LUCA Program could be less confusing. Further, the Bureau may be able to more effectively verify address information collected from LUCA Program participants during address canvassing. Finally, the Bureau has created MTPS, which is a geographic information system application that will allow LUCA Program participants to update the Bureau’s address list and maps electronically. The application will also enable users to import address lists and maps for comparison to the Bureau’s data and participate in both the LUCA Program and the Boundary and Annexation Survey (BAS) at the same time. The Bureau noted that participants who sign up to participate in the LUCA Program by October 31, 2007, will be allowed to provide their boundary updates with their LUCA updates and thereby avoid having to separately respond to the 2008 BAS. A 2004 study by NRC recommended that the Bureau coordinate efforts so that the LUCA Program, BAS, and other programs are not unduly redundant and burdensome for local and tribal entities. 
Consistent with that recommendation, the Bureau created MTPS, which Bureau officials said reduces the workload and burden on 2010 LUCA Program participants by allowing them to review and update address and map information together in one software package. Building on the progress it has already made, the Bureau can take additional steps to address new challenges in reducing workload and burdens for LUCA participants. First, although the Bureau performed internal tests of the software, the Bureau did not test MTPS as part of the LUCA Dress Rehearsal and tested MTPS with only one locality in preparation for the 2010 LUCA Program. Properly executed user-based methods for software testing can give the truest estimate of the extent to which real users can employ a software application effectively, efficiently, and satisfactorily. In addition, multiple users are required to tease out remaining problems in a product that is ready for distribution. The Bureau’s statement of work regarding MTPS specifically required milestones for testing and review of the software by 10 local sites during its development. However, the Bureau’s contract did not specify how many local sites would test the LUCA portion of MTPS. Further, meeting minutes between the Bureau and the MTPS contractor revealed that the contractor did not necessarily plan to test the LUCA portion of MTPS with local users during its development. The Bureau ultimately identified three local sites to test the LUCA portion of MTPS, but only performed the test with one. Of the other two proposed sites, one explicitly canceled testing, and the other did not respond to the Bureau’s attempts at communication. Additionally, Bureau officials told us that user testing for the LUCA Program portions of MTPS was constrained by existing resource limitations and timing issues associated with the schedule for development of MTPS. Bureau officials also informed us that they will provide frequently asked questions regarding MTPS for the LUCA technical help desk. Second, a majority of LUCA Dress Rehearsal participants experienced problems with converting Bureau address files from the Bureau’s format to their own software formats. If participants in the 2010 LUCA Program choose not to use MTPS to update address and map information, they can review and update computer-readable files of census address lists in a pipe-delimited text file format. While the Bureau included instructions for converting files in its LUCA Dress Rehearsal participation guide, it did not include information on specific commonly available types of software that localities are likely to use. Participants in the LUCA Dress Rehearsal experienced problems with converting the files from the Bureau’s format to their respective applications; our survey of LUCA Dress Rehearsal participants revealed that the majority of respondents had, to some extent, problems with file conversions to appropriate formats (see fig. 8). Our fieldwork also revealed issues pertaining to file conversion; for example, one local official noted that it took him 2 days to determine how to convert the Bureau’s pipe-delimited files. To mitigate the potential burden on localities that choose not to use MTPS, the Bureau will provide technical guidance on file conversion through its LUCA technical help desk, but does not plan to provide instructions for converting Bureau-provided address files through other means.
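For readers unfamiliar with the format, converting a pipe-delimited address list into a comma-separated file that common spreadsheet and geographic information system packages can open is a short scripting task. The sketch below is a minimal illustration in Python; the file names and the assumption of a simple column layout are hypothetical, and the Bureau's actual LUCA file specifications may differ.

```python
import csv

# Hypothetical file names; the Bureau's actual file layout and naming may differ.
with open("luca_address_list.txt", newline="", encoding="utf-8") as src, \
     open("luca_address_list.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src, delimiter="|")   # Bureau address files are pipe-delimited
    writer = csv.writer(dst)                  # CSV opens in most local software
    for row in reader:
        writer.writerow(row)
```

A step like this, documented alongside the participation guide, is the kind of concrete instruction that could reduce the conversion problems Dress Rehearsal participants reported.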
At present, the Bureau does not know how many localities will opt not to use MTPS for the 2010 LUCA Program, but those localities may face the same challenges faced by participants in the LUCA Dress Rehearsal. Leading up to the 2000 Census, we reported that LUCA training received less favorable reviews than the other components of the LUCA Program. The 2000 LUCA Program had one training session that encompassed all aspects of the LUCA Program. For the 2010 LUCA Program, the Bureau plans to separate LUCA classroom training into informational and technical training sessions and provide user guides tailored to the participation option chosen by LUCA Program participants. The Bureau provided localities with information on the participation options during the LUCA Dress Rehearsal. However, during the LUCA Dress Rehearsal, the Bureau conducted training sessions that combined promotional and technical components of training because it did not have time to conduct the promotional workshop prior to the LUCA Dress Rehearsal. Consequently, the Bureau was not able to obtain feedback from Dress Rehearsal participants about separating classroom training before the 2010 LUCA Program. Nevertheless, overall, respondents to our survey found the LUCA Dress Rehearsal training session useful (see fig. 9). The Bureau plans to further improve the 2010 LUCA Program by offering computer-based training (CBT) modules to program participants. Though participants were not provided with CBT in the LUCA Dress Rehearsal, our work has found that this method of training is viewed by participants as helpful. Specifically, respondents to our survey ranked CBT higher than classroom training, in terms of being “extremely” or “very” useful. Additionally, local officials told us that CBT was more convenient for them because they need not leave their offices or adjust their schedules to learn how the LUCA Program works. However, the Bureau’s plans for testing the LUCA CBT include only one user. Properly executed user-based methods of software testing can provide the truest estimate of the extent to which real users can employ an application effectively. The contractor responsible for creating the LUCA CBT was to have provided preliminary versions of the CBT to the Bureau for testing beginning in May 2007—7 months after the end of the LUCA Dress Rehearsal review and 3 months before participants begin reviewing and updating address lists and maps for the 2010 LUCA Program. This timing did not allow the Bureau to test the CBT under census-like conditions, and will leave little time to make any changes before the CBT is distributed to LUCA participants. A 2002 study by a Bureau contractor recommended that the Bureau evaluate the costs and benefits of its LUCA-related activities. An NRC study of the LUCA Program recommended that the Bureau quantify the value of the program in both housing and population terms. The study indicated that quantifying the value of the LUCA Program is useful to show that the cost for the effort is worthwhile and persuade local officials that it is worth their time and resources to become involved in the LUCA Program (for example, by showing how LUCA contributes to a more accurate count of their communities’ populations). The Bureau said that it would mark and evaluate contributions (such as added, corrected, or deleted addresses) of the LUCA Program to the MAF database.
The Bureau has not finalized its evaluation plans regarding the 2010 LUCA Program; these plans would include decisions about whether to conduct assessments of the program’s contribution to the census population count. The Bureau also stated that measuring whether the LUCA Program is cost beneficial “has not been a priority” for the agency, given that the program is legally mandated. In addition, Bureau officials stated that they will not budget the LUCA Program separately until fiscal year 2008. They noted that the LUCA Program budget is currently combined with those of other geographic programs in the Decennial Management Division budget. Our work in the area of managing for results has found that federal agencies can use performance information, such as that described above, to make various types of management decisions to improve programs and results. For example, performance information can be used to identify problems in existing programs, identify the causes of problems, develop corrective actions, develop strategies, plan and budget, identify priorities, and make resource allocation decisions to affect programs in the future. Finally, managers can use performance information to identify more effective approaches to program implementation and share those approaches more widely across the agency. One aspect of assessing the LUCA Program is determining the extent to which localities assess Bureau-provided counts, addresses, and maps. However, LUCA Program participation rates are currently difficult to measure because the Bureau does not have a method of tracking localities that agreed to participate in the program but did not submit updates to the Bureau because they found no needed changes to Bureau-provided information. Officials from the Bureau measure LUCA Program participation by whether localities agree to participate in the program, regardless of whether they actually take the time to review the materials the Bureau provides them. Inventory forms used by localities to inform the Bureau of updated LUCA materials do not include an option for localities to indicate whether they reviewed the materials and chose not to provide updates or had not identified any needed changes. This information would allow the Bureau to distinguish between localities that initially agreed to participate but did not and localities that agreed to participate and either did not review the materials or found no changes to submit. The Bureau would then have a unique estimate of localities that found the Bureau’s data to be accurate. Without more precise information on localities that do not provide information, the Bureau cannot fully track localities that actually reviewed materials during participation in the LUCA Program, and therefore cannot ascertain the actual participation rates. More important, without this information, the Bureau cannot fully measure the extent to which local reviews have contributed to accurate address lists and population counts. Hurricane Katrina made landfall in Mississippi and Louisiana on August 29, 2005, and caused $96 billion in property damage—more than any other single natural disaster in the history of the United States. On September 24, 2005, Hurricane Rita followed when it made landfall in Texas and Louisiana and added to the devastation. Still today, the storms’ impact is visible throughout the Gulf Coast region. Hurricane Katrina alone destroyed or made uninhabitable an estimated 300,000 homes. 
In New Orleans, the hurricanes damaged an estimated 123,000 housing units. The 2010 LUCA Program faces challenges caused by the continuous changes in the housing stock in areas affected by storm damage or population influxes, which may hinder the ability of local governments to accurately update their address lists and maps. Further, the condition of the housing stock is likely to present additional challenges for address canvassing and other decennial census operations in the form of decreased productivity for Bureau staff, issues associated with identifying vacant and uninhabitable structures, and workforce shortages. Early in 2006, based on our prior recommendations, the Bureau chartered a team to assess the impact of the storm damage on its address list and maps for the area. This team (working with other officials from Bureau headquarters and the Dallas Regional Office) proposed several changes to the 2010 LUCA Program and address canvassing in the Gulf Coast region. Officials in Bureau headquarters and the Dallas Regional Office have implemented several of these changes. Many officials of local governments we visited in hurricane-affected areas said they have identified numerous housing units that have been or will be demolished as a result of hurricanes Katrina and Rita and subsequent deterioration. Conversely, many local governments estimate that there is new development of housing units in their respective jurisdictions. The officials we interviewed from localities in the Gulf Coast region indicated that such changes in the housing stock of their jurisdictions are unlikely to subside before local governments begin updating and reviewing materials for the Bureau’s 2010 LUCA Program—in August 2007. Local government officials told us that changes in housing unit stock are often caused by difficulties that families have in deciding whether to return to hurricane-affected areas. Local officials informed us that a family’s decision to return is affected by various factors, such as the availability of insurance; timing of funding from Louisiana’s Road Home Program; lack of availability of contractors; school systems that are closed; and lack of amenities, such as grocery stores. As a result of the still-changing housing unit stock, local governments in hurricane-affected areas may be unable to fully capture reliable information about their address lists before the beginning of the LUCA Program this year or address canvassing in April 2009. Furthermore, the operations of local governments themselves have been affected by the hurricanes (see fig. 10). These local governments are focused on reconstruction, and officials we spoke with in two localities questioned their ability to participate in the LUCA Program. The mixed condition of the housing stock in the Gulf Coast region could cause a decrease in productivity rates during address canvassing. During our fieldwork, we found that hurricane-affected areas have many neighborhoods with abandoned and vacant properties mixed in with occupied housing units. Bureau staff conducting address canvassing in these areas may have decreased productivity because of the additional time necessary to distinguish between abandoned, vacant, and occupied housing units. We also observed many areas where lots included a permanent structure with undetermined occupancy, as well as a trailer. Bureau field staff may be presented with the challenge of determining whether a residence or a trailer (see fig. 11), or both, are occupied.
Another potential issue is that because of continuing changes in the condition of the housing stock, housing units that are deemed vacant or abandoned during address canvassing may be occupied on Census Day (April 1, 2010). Workforce shortages may also pose significant problems for the Bureau’s hiring efforts for address canvassing. The effects of hurricanes Katrina and Rita caused a major shift in population away from the hurricane-affected areas. This migration displaced many low-wage workers. Should this continue, it could affect the availability of such workers for address canvassing and other decennial census operations. In 2006, we recommended that the Bureau develop plans (prior to the start of the 2010 LUCA Program in August 2007) to assess whether new procedures, additional resources, or local partnerships may be required to update the MAF/TIGER database in the areas affected by hurricanes Katrina and Rita. The Bureau responded to our recommendations by chartering a team to assess the impact of the storm damage on the Bureau’s address lists and maps for areas along the Gulf Coast and to develop strategies with the potential to mitigate these impacts. The chartered team recommended that the Bureau consult with state and regional officials (from the Gulf Coast region) on how to make the LUCA Program as successful as possible and hold special LUCA workshops for geographic areas identified by the Bureau as needing additional assistance. In addition to the recommendations made by the Bureau’s chartered team, officials from Bureau headquarters and the Dallas Regional Office proposed steps to address LUCA-related issues in hurricane-affected areas. For example, they proposed that the Bureau provide LUCA training in several areas of Louisiana and Mississippi during promotional workshops for the LUCA Program. Finally, Bureau documentation indicated that the Bureau is considering an “Update/Enumerate” operation to enumerate addresses in the most severely devastated parishes and counties in hurricane-affected areas. The Bureau has implemented several of the proposed changes, cited above, to the 2010 LUCA Program in the Gulf Coast region based on recommendations from its chartered team, other Bureau headquarters officials, and regional office officials. For example, the Bureau conducted conference calls with the states of Louisiana and Mississippi (in October and December 2006, respectively) to discuss the LUCA Program, and had the Dallas and Atlanta regional offices hold additional promotional workshops in hurricane-impacted areas. In addition, Bureau officials have stated that the regional offices will also encourage participants in these areas to sign up for LUCA as early as possible so that if they need more than 120 days for conducting their LUCA review, they can request an extension from the Bureau. In addition to the changes in the 2010 LUCA Program, the Bureau has considered changes to the address canvassing and subsequent operations in the Gulf Coast region. For example, Bureau officials stated that they recognize issues with identifying uninhabitable structures in hurricane-affected zones and, as a result, that they may need to change procedures for address canvassing. The Bureau is still brainstorming ideas, including the possibility of using an “Update/Enumerate” operation in areas along the Gulf Coast.
Bureau officials also said that they may adjust training for Bureau staff conducting address canvassing in hurricane-affected areas to help field staff distinguish between abandoned, vacant, and occupied housing units. Without proper training, field staff can make errors and will not operate as efficiently. The Bureau’s plans for how it may adjust address canvassing operations in the Gulf Coast region can also have implications for subsequent operations. For example, instructing field staff to be as inclusive as possible in completing address canvassing could cause increased efforts to follow up on nonrespondents because the Bureau could send questionnaires to housing units that could be vacant on Census Day. In terms of the Bureau’s workforce in the Gulf Coast region, officials from the Bureau’s Dallas Regional Office recognize the potential difficulty of attracting field staff, and have recommended that the Bureau be prepared to pay hourly wage rates for future decennial staff that are considerably higher than usual. Further, Bureau officials noted that the Bureau’s Dallas Regional Office, which has jurisdiction over hurricane-affected areas in Louisiana, Mississippi, and Texas, will examine local unemployment rates to adjust pay rates in the region and use “every single entity” available to advertise for workers in the New Orleans area. However, Bureau officials stated that there are “no concrete plans” to implement changes to address canvassing or subsequent decennial operations in the Gulf Coast region. For instance, Bureau documentation revealed that the Bureau has not yet decided whether to implement “Update/Enumerate” operations in areas along the Gulf Coast. The Bureau has met the time frames for the LUCA Dress Rehearsal and the distribution of advance letters for the 2010 LUCA Program. The Bureau has also taken a number of steps to improve the LUCA Program, including providing a longer review period for program participants, providing localities with options for program participation, combining the collection of addresses from two separate operations into one integrated and sequential operation, creating MTPS for participant use in the program, and modifying LUCA training. However, there is more the Bureau can do to address information technology-based challenges to the LUCA Program prior to the 2010 Census and beyond. The Bureau performed little user testing of MTPS and no user testing of the CBT module for the 2010 LUCA Program; however, the Bureau can do more to assess the usability of MTPS and the LUCA CBT. For example, the Bureau could test MTPS and LUCA CBT software with localities before participants begin reviewing and updating materials for the 2010 LUCA Program in August 2007. These tests would help the Bureau identify issues associated with MTPS and LUCA CBT software. Following the tests, the Bureau can provide information on how localities can mitigate such issues via its public Web site and its LUCA technical help desk. Without these tests, localities participating in LUCA 2010 may unnecessarily encounter issues with the CBT software that might otherwise have been identified through testing. The Bureau can also provide additional information, via its public Web site, its LUCA technical help desk, and other means, on converting Bureau address files from the Bureau’s format to specific software applications used by LUCA Program participants in order to mitigate difficulties in file conversion previously identified by LUCA Dress Rehearsal participants.
Without such guidance, localities may have difficulty with the file conversion process, creating additional and unnecessary burdens for the localities that choose not to use MTPS. NRC, in its assessment of the LUCA Program, concluded that quantifying the value of the LUCA program is worthwhile, citing for example its use in persuading local officials of the value of participating in the LUCA program. NRC suggests that an evaluation of the LUCA Program consider not only its contributions to address counts but also to population counts. We agree that the Bureau can use such information to measure the LUCA Program’s contribution to the decennial census. In addition, the Bureau is limited in its ability to fully assess the impact of the program because it does not collect information on why localities that agreed to participate do not provide updated information. Without these data, the Bureau cannot determine whether nonresponding localities assessed the Bureau’s information or whether these localities did assess the information but had no changes. Without these data, the Bureau may be hampered in its ability to estimate the impact of the LUCA Program on the MAF database and the census population count. Bureau efforts to consult with state officials and consider changes in decennial census operations, including LUCA, in hurricane-affected areas along the Gulf Coast have helped the Bureau better understand issues associated with implementing these operations in the Gulf Coast region. However, the Bureau can do more to successfully implement address canvassing and other decennial census operations in the Gulf Coast. For example, Bureau efforts to address issues associated with address canvassing, such as adjusting wage rates for future decennial staff, may help the Bureau fulfill staffing requirements for the address canvassing operation (which is scheduled to take place in 2009) and other decennial census operations. Because the changing stock may affect the Bureau’s ability to effectively conduct address canvassing and other operations in the Gulf Coast region, it is important for the Bureau to complete its planning for addressing the challenges that field staff would likely face. In order for the Bureau to address the remaining challenges facing its implementation of the 2010 LUCA Program, we recommend that the Secretary of Commerce direct the Bureau to take the following five actions: Assess potential usability issues with the LUCA Program’s CBT and MTPS by randomly selecting localities in which to test the software packages or by providing alternative means to assess such issues before participants begin reviewing and updating materials for the 2010 LUCA Program in August 2007, and provide information on how localities can mitigate issues identified in such assessments via its public Web site and its LUCA technical help desk. Provide localities not using MTPS, via its public Web site, its LUCA technical help desk, and other appropriate means, instructions on converting files from the Bureau’s format to the appropriate format for software most commonly used by participating localities to update address information. Assess the contribution of the LUCA Program to the final census population counts, as recommended by NRC (to permit an evaluation of the 2010 LUCA Program in preparation for 2020). Establish a process for localities that agreed to participate in the LUCA Program but found no changes in their review to explicitly communicate to the Bureau that they have no changes. 
Develop strategy, plans and milestones for operations in areas in the Gulf Coast that address the challenges field staff are likely to encounter in conducting address canvassing and subsequent decennial operations in communities affected by the hurricanes. In written comments on a draft of this report, the Bureau generally agreed with our recommendations for the Bureau to assess usability issues with MTPS and CBT; provide localities not using MTPS with instructions on file conversion; assess the contribution of LUCA to the final census population counts; establish a process for localities to indicate that they participated in LUCA but found no changes; and develop strategy, plans, and milestones for operations in the Gulf Coast that address the challenges that field staff are likely to face. The Bureau also agreed with the draft report’s recommendation that the Bureau finalize its plans for conducting the LUCA Program in the areas affected by the hurricanes, noting that its plans were now final. We therefore deleted this recommendation. The Bureau also provided some technical comments and suggestions where additional context might be needed, and we revised the report to reflect these comments as appropriate. The Bureau’s comments are reprinted in their entirety in appendix II. We are sending copies of this report to interested congressional committees and members, the Secretary of Commerce, and the Director of the U.S. Census Bureau. Copies will be made available to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To assess the current status of the U.S. Census Bureau’s (Bureau) Local Update of Census Addresses (LUCA) Program, we requested and obtained source documents from the Bureau’s headquarters in Suitland, Maryland, and the Bureau’s Web site regarding the updated timelines of the 2010 LUCA Program and the LUCA Dress Rehearsal. We also visited the Bureau’s regional office in Charlotte, North Carolina; conducted a phone interview with the Bureau’s regional office in Seattle, Washington; and obtained documents, including the Bureau’s timeline for headquarters and regional office activities associated with the 2010 Census LUCA Program. Additionally, we analyzed the data to determine if the Bureau’s actual timelines met the planned timelines for the LUCA Dress Rehearsal and the 2010 LUCA Program. Additionally, we interviewed officials from the Bureau headquarters in Suitland, Maryland, to determine the extent to which activities associated with the 2010 LUCA Program and LUCA Dress Rehearsal (held June through October 2006) met their timelines. We also visited and obtained documentation from localities associated with the LUCA Dress Rehearsal in California and North Carolina. To assess how the Bureau is addressing prior issues and new challenges associated with implementing the LUCA Program, we performed a review of publications created by GAO and other entities (i.e., the National Research Council, the Department of Commerce’s Office of the Inspector General, and Anteon Corporation) regarding the LUCA Program to ascertain critiques of the program and recommendations for improving the program for the 2010 Census. 
We also obtained source documents and interviewed officials from the Bureau’s headquarters in Suitland, Maryland, to determine how the Bureau addressed prior issues and new challenges related to the LUCA Program and what modifications the Bureau has made to the 2010 LUCA Program. To determine how the 2010 LUCA Program is being implemented, we undertook fieldwork in 12 localities (in California and North Carolina) that were eligible to participate in the LUCA Dress Rehearsal, which was held from June through October 2006. The 12 localities were selected because they were geographically diverse and varied in population. During our visits to the localities, we interviewed and obtained documentation from local government officials to determine how the Bureau implemented the LUCA Dress Rehearsal and addressed prior issues and new challenges related to the LUCA Program. We also conducted interviews and collected documentation from the Bureau’s regional offices in Charlotte, North Carolina, (in person) and Seattle, Washington, (via telephone) to determine the Bureau’s implementation of the LUCA Dress Rehearsal from the perspective of Bureau officials responsible for the LUCA Dress Rehearsal sites. To obtain further information on the experiences of participants with LUCA Dress Rehearsal activities, we administered a World Wide Web questionnaire accessible through a secure server to 42 local governments participating in the LUCA Dress Rehearsal. We collected data on participants’ experiences with the review process, the census maps and addresses, work materials, and interactions with the Bureau and other agencies. Because this was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting a survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question, or sources of information available to respondents, can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, collecting the data, and analyzing them to minimize such nonsampling errors. For example, the survey was tested with two LUCA Dress Rehearsal participants in order to check that the questions were clear and unambiguous, the information could be obtained by the respondents, and the questionnaire did not place an undue burden on the respondents. When we analyzed the data, an independent analyst checked all computer programs. Once the questionnaire was finalized, each of the 42 local governments was notified that the questionnaire was available online and provided with a unique password and user name. Therefore, respondents entered their answers directly into the electronic questionnaire, eliminating the need to key data into a database. We included in our study population those local governments in California and North Carolina that participated in the LUCA Dress Rehearsal. We defined participants as those local governments that had signed up to participate and had not later indicated that they in fact did not participate in the LUCA Dress Rehearsal. The Bureau identified 44 state, county, and municipal governments that met our criteria as participating in the LUCA Dress Rehearsal. Questionnaires were sent to 42 local governments and were completed by 31 such governments, for a response rate of 74 percent. There were a total of 62 localities eligible to participate in the LUCA Dress Rehearsal. 
In addition to our survey, we also performed structured interviews (in person and via telephone) with officials in 7 localities that were eligible to participate in the LUCA Dress Rehearsal but did not take part in the program. To assess how the Bureau is addressing the challenges in areas affected by hurricanes Katrina and Rita that may affect the Bureau’s successful implementation of the 2010 LUCA Program, we undertook fieldwork in eight localities situated in portions of the Gulf Coast region (Louisiana, Mississippi, and Texas) affected by hurricanes Katrina and Rita. We selected these localities because they varied in size and location in the Gulf Coast region. During the fieldwork, we obtained documentation and interviewed officials from each locality about what challenges, if any, the hurricane damage poses to the locality’s successful participation in the 2010 LUCA Program. We obtained source documents and interviewed officials from Bureau headquarters in Suitland, Maryland (in person), and the Bureau regional office in Dallas, Texas (via telephone), about how the Bureau is addressing the aforementioned challenges that are faced by eligible participants in the 2010 LUCA Program in the areas affected by hurricanes Katrina and Rita. We also obtained information, from the sources mentioned above, on the extent to which the Bureau has addressed prior GAO recommendations regarding performing decennial census operations in hurricane-affected areas. We conducted our work from July 2006 through May 2007 in accordance with generally accepted government auditing standards. In addition to the individual named above, Ernie Hazera, Assistant Director; Timothy Wexler; Tom Beall; Michael Carley; Cynthia Cortese; Peter DelToro; Tom James; Andrea Levine; Amanda Miller; Matt Reilly; Mark Ryan; and Michael Volpe made key contributions to this report. Gulf Coast Rebuilding: Preliminary Observations on Progress to Date and Challenges for the Future. GAO-07-574T. Washington, D.C.: April 12, 2007. 2010 Census: Census Bureau Should Refine Recruiting and Hiring Efforts and Enhance Training of Temporary Field Staff. GAO-07-361. Washington, D.C.: April 27, 2007 2010 Census: Redesigned Approach Holds Promise, but Census Bureau Needs to Annually Develop and Provide a Comprehensive Project Plan to Monitor Costs. GAO-06-1009T. Washington, D.C.: July 27, 2006. 2010 Census: Census Bureau Needs to Take Prompt Actions to Resolve Long-standing and Emerging Address and Mapping Challenges. GAO-06- 272. Washington, D.C.: June 15, 2006. 2010 Census: Costs and Risks Must be Closely Monitored and Evaluated with Mitigation Plans in Place. GAO-06-822T. Washington, D.C.: June 6, 2006. 2010 Census: Census Bureau Generally Follows Selected Leading Acquisition Planning Practices, but Continued Management Attentions Is Needed to Help Ensure Success. GAO-06-277. Washington, D.C.: May 18, 2006. 2010 Census: Planning and Testing Activities Are Making Progress. GAO-06-465T. Washington D.C.: March 1, 2006. 2010 Census: Basic Design Has Potential, but Remaining Challenges Need Prompt Resolution. GAO-05-9. Washington, D.C.: January 12, 2005. 2010 Census: Counting Americans Overseas as Part of the Decennial Census Would Not Be Cost-Effective. GAO-04-898. Washington, D.C.: August 19, 2004. 2010 Census: Overseas Enumeration Test Raises Need for Clear Policy Direction. GAO-04-470. Washington, D.C.: May 21, 2004. 2010 Census: Cost and Design Issues Need to Be Addressed Soon. GAO- 04-37. Washington, D.C.: January 15, 2004. 
Decennial Census: Lessons Learned for Locating and Counting Migrant and Seasonal Farm Workers. GAO-03-605. Washington, D.C.: July 3, 2003. Decennial Census: Methods for Collecting and Reporting Hispanic Subgroup Data Need Refinement. GAO-03-228. Washington, D.C.: January 17, 2003. Decennial Census: Methods for Collecting and Reporting Data on the Homeless and Others without Conventional Housing Need Refinement. GAO-03-227. Washington, D.C.: January 17, 2003. 2000 Census: Lessons Learned for Planning a More Cost-Effective 2010 Census. GAO-03-40. Washington, D.C.: October 31, 2002. 2000 Census: Local Address Review Program Has Had Mixed Results to Date. GAO/T-GGD-99-184. Washington, D.C.: September 29, 1999.
The Department of Commerce's (Commerce) U.S. Census Bureau (Bureau) seeks updated information on the addresses and maps of housing units and group quarters from state, local, and tribal governments through the Local Update of Census Addresses (LUCA) Program. Prepared under the Comptroller General's authority, this report assesses (1) the status of the LUCA Program, (2) the Bureau's response to prior recommendations by GAO and others and new challenges related to the program, and (3) the Bureau's plans for conducting the program in areas affected by hurricanes Katrina and Rita. GAO reviewed LUCA program documents, met with and surveyed participants in the LUCA Dress Rehearsal, and interviewed Bureau officials and local officials in the Gulf Coast region. The Bureau has conducted its planned LUCA operations in accordance with its published timeline. The Bureau has also taken steps to reduce workloads and burdens and improve training for localities that participate in LUCA--all areas GAO and others had identified as needing improvement. For instance, to reduce participant workload and burden, the Bureau provided a longer period for reviewing and updating LUCA materials; provided options for submitting materials for the LUCA Program; combined the collection of LUCA addresses from two separate operations into one integrated program; and created MTPS, which is designed to assist LUCA Program participants in reviewing and updating address and map data. Also, the Bureau has planned improvements to the 2010 LUCA Program training (i.e., specialized workshops for informational and then technical training) and plans to supplement the workshops with CBT. However, the Bureau faces new challenges. For instance, the Bureau tested MTPS with only one local government. Other local officials we spoke with had problems converting Bureau-provided address files. In addition, the Bureau did not test its CBT software in the LUCA Dress Rehearsal. Additional challenges stem from the damage to the Gulf Coast region caused by hurricanes Katrina and Rita. Officials in localities in hurricane-affected areas questioned their ability to participate in the LUCA Program. The continuous changes in housing stock may hinder local governments' ability to accurately update their address lists and maps. The condition of the housing stock is likely to present additional challenges for the Bureau's address canvassing operation (in which the Bureau verifies addresses) in the form of decreased productivity for Bureau staff, workforce shortages, and issues associated with identifying vacant and uninhabitable structures. The Bureau created a task force to assess the implications of storm-related issues, and the task force proposed a number of mitigating actions. However, the Bureau has no plans for modifying the address canvassing operation or subsequent operations in the Gulf Coast region.
A cookie is a short string of text that is sent from a Web server to a Web browser when the browser accesses a Web page. The information stored in a cookie includes, among other things, the name of the cookie, its unique identification number, its expiration date, and its domain. When a browser requests a page from the server that sent it a cookie, the browser sends a copy of that cookie back to the server. In general, most cookies are placed by the visited Web site. However, some Web sites also allow the placement of a third-party cookie—that is, a cookie placed on a visitor’s computer by a domain other than the site being visited. Cookies—whether placed by the visited Web site or a third party—may be further classified as either session cookies or persistent cookies. Session cookies are short-lived, are used only during the current on-line session, and expire when the user exits the browser. For example, session cookies could be used to support an interactive opinion survey. Persistent cookies remain stored on the user’s computer until a specified expiration date and can be used by a Web site to track a user’s browsing behavior whenever the user returns to the site, including through potential linkage to other data. Although cookies help enable electronic commerce and other Web applications, persistent cookies also pose privacy risks even if they do not themselves gather personally identifiable information, because the data contained in persistent cookies may be linked to persons after the fact, even when that was not the original intent of the operating Web site. For example, links may be established when persons accessing the Web site give out personal information, such as their names or e-mail addresses, which can uniquely identify them to the organization operating the Web site. Once a persistent cookie is linked to personally identifiable information, it is relatively easy to learn visitors’ browsing habits and keep track of viewed or downloaded Web pages. This practice raises concerns about the privacy of visitors to federal Web sites. Concerned about the protection of the privacy of visitors to federal Web sites, OMB directed—in Memorandum 99-18, issued in June 1999—every agency to post clear privacy policies on its principal Web site, other major entry points to agency Web sites, and any Web page where the agency collects substantial personal information from the public. Further, the memorandum stated that such policies must inform Web site visitors what information the agency collects about individuals, why it is collected, and how it is used, and that the policies must be clearly labeled and easily accessed when someone visits the site. In addition to these specific requirements, the memorandum was accompanied by an attachment entitled “Guidance and Model Language for Federal Web Site Privacy Policies.” OMB attached the guidance and model language for agencies to use, depending on their needs. For example, the discussion in the attachment states that in the course of operating a Web site, certain information may be collected automatically or by cookies, and that in some instances, sites may have the technical ability to collect information and later take additional steps to identify people. The discussion further states that agency privacy policies should make clear whether or not they are collecting this type of information and whether they will take further steps to collect additional information.
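The session/persistent distinction comes down to whether the server attaches an expiration when it sets the cookie. The sketch below, a minimal illustration using Python's standard library, shows the Set-Cookie headers a server might emit for each type; the cookie names and values are hypothetical and are not drawn from any federal site.

```python
from http.cookies import SimpleCookie

cookies = SimpleCookie()

# Session cookie: no expiration, so the browser discards it when the user exits.
cookies["survey_id"] = "12345"
cookies["survey_id"]["path"] = "/"

# Persistent cookie: the expiration date keeps it on the visitor's computer,
# letting the site recognize the same browser on later visits.
cookies["visitor_id"] = "a1b2c3"
cookies["visitor_id"]["path"] = "/"
cookies["visitor_id"]["expires"] = "Wed, 01 Jan 2003 00:00:00 GMT"

print(cookies.output())  # emits the two Set-Cookie headers the server would send
```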
In June 2000, OMB issued further guidance specifically concerning the use of cookies on federal Web sites. Memorandum 00-13 had two major objectives. First, it reminded agencies that they are required by law and policy to establish clear privacy policies for their Web activities and to comply with those policies. To this end, the memorandum reiterated the requirement of Memorandum 99-18 for agencies to post privacy policies on their principal Web sites, major entry points, and other Web pages where substantial amounts of personal information are posted. Second, Memorandum 00-13 established a new federal policy regarding cookies by stating that “particular privacy concerns may be raised when uses of Web technology can track the activities of users over time and across Web sites.” This guidance established a presumption that cookies would not be used on federal Web sites. Further, it provided that cookies could be used only when agencies (1) provide clear and conspicuous notice of their use, (2) have a compelling need to gather the data on the Web site, (3) have appropriate and publicly disclosed privacy safeguards for handling information derived from cookies, and (4) have personal approval by the head of the agency. The memorandum also directed agencies to provide a description of their privacy practices and the steps they have taken to ensure compliance with this memorandum as part of their information technology budget submission package. Concerned about the impact of Memorandum 00-13 on federal Web sites, the Chairman of the CIO Council’s Subcommittee on Privacy subsequently sent a letter to the Administrator of OMB’s OIRA recommending that session cookies be exempt from the requirements of the memorandum. The Chairman noted that the term “cookie” covers a number of techniques used to track information about Web site use, and that there is an important distinction between session and persistent cookies. Although supporting the application of the new policy to persistent cookies, the Chairman recommended that session cookies, which are discarded on completion of a session or expire within a short time frame and are not used to track personal information, not be subject to the requirements of the memorandum. He added that the use of these cookies should, however, continue to be disclosed in the Web sites’ privacy statements. In a September 2000 letter responding to the Chairman, the Administrator agreed that persistent cookies are a principal example of a technique for tracking the activities of users over time and across different Web sites, and, thus, agencies should not use persistent cookies unless they have met the four conditions provided in the guidance. Further, the Administrator noted that Web sites could gather information from visitors in ways that do not raise privacy concerns, such as retaining the information only during the session or for the purpose of completing a particular on-line transaction, without the capacity to track the user over time and across different Web sites. The letter concluded that such activities would not fall within the scope of the new policy. As of January 2001, most federal Web sites we reviewed followed OMB’s guidance on the use of cookies. Of the 65 federal Web sites reviewed, 57 did not use persistent cookies. However, of the eight Web sites using persistent cookies, four did so without disclosing this in their privacy policies, as required by OMB. 
Two of these four were allowing commercial, third-party sites to place these cookies on the computers of individuals visiting the sites. The four remaining sites using persistent cookies disclosed this use but did not meet OMB’s other conditions. In addition, four sites that did not use persistent cookies did not post privacy policies on their home pages. After we brought these findings to their attention, all 12 agencies either took corrective action or stated that they planned to take such action, as follows: The four sites using persistent cookies without disclosing such use have removed those cookies from their Web sites. Two of the four sites using persistent cookies with disclosure have now removed them. Regarding the other two sites, one has recently met all of OMB’s conditions in order to use persistent cookies. Agency officials responsible for the remaining site have revised their privacy policy to disclose the use of persistent cookies and have stated that they are in the process of seeking approval from the head of the agency to use such cookies. All four sites lacking privacy policy notices have now installed such statement hyperlinks on their respective home pages. Although OMB’s guidance has proved useful in ensuring that federal Web sites address privacy issues, the guidance is fragmented, with multiple documents addressing various aspects of Web site privacy and cookie issues. Guidance concerning cookies is currently contained in two official policy memorandums. These documents, taken together, prompted the CIO Council to recommend clarification of OMB’s cookie policy. Although OMB’s response provided useful clarification on the requirements for using persistent cookies, OMB has not yet revised the guidance memorandums themselves. Further, the letter to the CIO Council does not appear on OMB’s Web site with the two guidance memorandums. As a result, federal agencies may not have ready access to the clarifying letter and may be confused as to requirements on the use of cookies. OMB’s guidance documents also do not provide clear direction on the disclosure requirements for session cookies. Memorandum 99-18 stated that agency privacy policies should make clear whether information is collected automatically through cookies or other techniques, but it did not distinguish between session and persistent cookies. Memorandum 00-13 established the four conditions for cookie use but, again, did not clearly distinguish between session and persistent cookies. This prompted the CIO Council’s letter recommending clarification. OIRA’s letter in response clarified that Memorandum 00-13 applied only to persistent cookies but did not directly respond to the Council’s recommendation that session cookies continue to be disclosed in Web site privacy policies. This left unresolved questions as to what extent the notice requirements from Memorandum 99-18 apply to session cookies. When we asked OMB to clarify the disclosure requirements for session cookies, OIRA officials stated that session cookies do not present a privacy issue; therefore, no disclosure is required. This position, however, may confuse and mislead federal Web site visitors. For example, under this policy, a federal Web site may state in its privacy policy that it is not using cookies, while it continues to give session cookies. If a site visitor has enabled a browser to detect the presence of cookies, it may not be apparent to the visitor whether the cookies they see are session or persistent. 
This could raise questions about the practices of the Web site that would not be resolved by viewing the privacy policy. The Chair of the CIO Council’s Subcommittee on Privacy agreed that the issue is one of clarity rather than privacy. Further, he stated that it is better for agencies to choose full disclosure rather than partial, and that it constitutes good customer service to provide such disclosure. Most federal Web sites we reviewed were following OMB’s guidance on the use of cookies. The sites that were not following the guidance either have taken or plan to take corrective action. The OMB guidance, while helpful, leaves agencies to implement fragmented directives contained in multiple documents. In addition, the guidance itself is not clear on the disclosure requirements for techniques that do not track users over time and across Web sites, such as session cookies. Further, OMB’s stated position on the disclosure requirements for session cookies could lead to confusion on the part of visitors to federal Web sites. To clarify agency requirements on the use of automatic collections of information, including the use of cookies on their Web sites, we recommend that the Director, OMB, in consultation with other parties, such as agency officials and the CIO Council, unify OMB’s guidance on Web site privacy policies and the use of cookies, clarify the resulting guidance to provide comprehensive direction on the use of cookies by federal agencies on their Web sites, and consider directing federal agencies to disclose the use of session cookies in their Web site privacy notices. We provided a draft of this report for review and comment to the Director, OMB, on March 26, 2001. OMB did not provide comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time we will provide copies of the report to Senator Joseph Lieberman, Ranking Member, Senate Committee on Governmental Affairs; Representative Dan Burton, Chairman, and Representative Henry A. Waxman, Ranking Minority Member, House Committee on Government Reform; the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget; and other interested parties. Copies will also be available on GAO’s Web site at www.gao.gov. If you have any questions, please contact me at (202) 512-6240 or Mike Dolak, Assistant Director, at (202) 512-6362. We can also be reached by e-mail at [email protected] and [email protected], respectively. Key contributors to this report were Scott A. Binder, Michael P. Fruitman, and David F. Plocher. To determine the use of cookies by federal agencies, we reviewed 65 federal Web sites—the same sites we reviewed in our October 2000 report. These Web sites consisted of (1) the sites operated by the 33 high-impact agencies, which handle the majority of the government’s contact with the public, and (2) 32 sites randomly selected from the General Services Administration’s government domain registry database. We reviewed each Web site between November and December 2000 to determine which were using cookies and the type of cookies given. We also determined whether the sites using persistent cookies (1) provided clear and conspicuous notice of their use, (2) had a compelling need to gather the data on the site, (3) had appropriate and publicly disclosed privacy safeguards for handling information derived from cookies, and (4) had personal approval by the head of the agency. 
We updated our findings on January 24, 2001. We performed our review by using Microsoft’s Internet Explorer browser, version 5.5. We changed the security settings in the browser to alert us if we were about to receive a cookie. Before we would visit a Web site, we would clear out our computer’s cache, cookies, and temporary files and clear our history folder. We then typed in the Uniform Resource Locator of the site we were visiting and spent about 10 to 15 minutes per site searching through its links to determine if it was using cookies. To document our review, we made a printout of the site’s home page and privacy policies. If we found a persistent cookie on the site, we would make a printout of the cookie. After we captured and printed the cookie, we would stop searching and move on to another site. We contacted the agencies operating the Web sites that were using persistent cookies, notified them of our findings, and asked them to provide written responses detailing actions they planned to take in response to our findings and documentation to support their compliance with the Office of Management and Budget’s (OMB) guidelines. Specifically, we asked them to demonstrate how they (1) provided clear and conspicuous notice that they were using persistent cookies, (2) had a compelling need to gather the data on the site, (3) had appropriate and publicly disclosed privacy safeguards for handling the information derived from cookies, and (4) had obtained the personal approval of the head of the agency. We also contacted the four agencies that did not have privacy policies posted on their home pages, notified them of our findings, and asked them to provide written responses detailing the actions they planned to take to ensure that their Web sites complied with OMB guidance. To determine whether the guidance issued by OMB provided adequate direction to federal agencies operating public Web sites, we analyzed the guidance and discussed its intent with representatives of OMB’s Office of Information and Regulatory Affairs. We also met with the Chairman of the Chief Information Officers Council, Subcommittee on Privacy, to obtain the council’s views on additional privacy issues and concerns that needed to be addressed in OMB guidance. We conducted our review from August 2000 through March 2001, in accordance with generally accepted government auditing standards.
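Our review relied on manual browser inspection, as described above. For readers interested in how such a check could be scripted, the sketch below—which was not part of our methodology—uses Python's standard library to fetch a page and classify any cookies the server sets as session or persistent; the URL shown is a placeholder.

```python
# Minimal sketch (not part of GAO's methodology): fetch a page once and
# classify the cookies the server sets as session or persistent.
import urllib.request
from http.cookiejar import CookieJar

def classify_cookies(url):
    jar = CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    opener.open(url, timeout=30)  # one request; the jar collects any cookies set
    report = []
    for cookie in jar:
        # A cookie with no expiration is a session cookie; one with an
        # expiration time is persistent.
        kind = "session" if cookie.expires is None else "persistent"
        report.append((cookie.domain, cookie.name, kind, cookie.expires))
    return report

for domain, name, kind, expires in classify_cookies("https://www.example.gov/"):
    print(f"{domain}  {name}: {kind} (expires={expires})")
```

Because such a script only observes cookies set in HTTP responses, it would miss cookies set by client-side scripts or by embedded third-party content, so it complements rather than replaces a browser-based review.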
Federal agencies are using Internet "cookies" to enable electronic transactions and track visitors on their websites. Cookies are text files that have unique identifiers and are used to store and retrieve information that allows websites to recognize returning users, track on-line purchases, or maintain and serve customized web pages. This report discusses whether (1) federal websites complied with the Office of Management and Budget's (OMB) guidance on the use of cookies and (2) the guidance provided federal agencies with clear instructions on the use of cookies. GAO reviewed 65 federal websites—the sites of 33 high-impact agencies and 32 sites randomly selected from the General Services Administration's government domain registry database—between November 2000 and January 2001 to determine whether they used persistent cookies and whether such use was disclosed in the website's privacy policy. As of January 2001, most of the websites reviewed were following OMB's guidance on the use of cookies. Of the 65 sites GAO reviewed, 57 did not use persistent cookies and eight did. Of the eight sites using persistent cookies, four did not disclose such use in their privacy policies, and the remaining four disclosed the use but did not meet OMB's other conditions for using cookies. In addition, four other sites that did not use persistent cookies did not post privacy policies on their home pages. Those sites were taking, or planning to take, corrective action to address their noncompliance with OMB guidance. GAO found that although OMB's guidance proved useful in ensuring that federal websites address privacy issues, the guidance remained fragmented, with multiple documents addressing various aspects of Web site privacy and cookie issues. In addition, the guidance did not provide clear direction on the disclosure of session cookies.
In July 1997, the FCC estimated that U.S. consumers could choose from over 500 long-distance service providers. Slamming subverts that choice because it changes a consumer’s long-distance provider without the consumer’s knowledge and consent. It distorts telecommunications markets by enabling companies engaged in misleading practices to increase their customer bases, revenues, and profitability through illegal means. In addition, slammed consumers are often overcharged, according to the FCC and the industry; are unable to use their preferred long-distance service; cannot use calling cards in emergencies or while traveling; and lose premiums (e.g., frequent flyer miles or free minutes of long-distance calls) provided by their properly authorized provider. Collectively, slamming increases the costs to long-distance providers and other firms involved in this industry. Their increased costs occur when slamming victims refuse to pay the charges of unauthorized service providers or when slammers themselves take the profits and leave unpaid bills, sometimes amounting to millions of dollars. Determining the prevalence of slamming is extremely difficult. Although the FCC began receiving slamming complaints after the divestiture of AT&T in 1985, no central repository exists for slamming complaints; and no entity, in our opinion, has made a significant effort to estimate the prevalence of slamming. Contributing to the uncertainty concerning the prevalence of slamming, some consumers, who do not review their monthly telephone bills closely, are unaware that they have been slammed. Others may be aware that they were slammed but take no corrective action, such as filing a complaint. Customers can voluntarily change their long-distance company—or Primary Interexchange Carrier (PIC)—by contacting, or submitting an “order” to, the local exchange carrier. Long-distance companies can also legitimately process a PIC change to which the customer has agreed through either a written or verbal authorization. The three types of long-distance providers are facility-based carriers such as AT&T, MCI, and Sprint; switching resellers; and switchless resellers. According to representatives of the FCC, numerous state regulatory agencies, and the industry, those who most frequently engage in intentional slamming are switchless resellers. They have the least to lose by using deceptive or fraudulent practices because they have no substantive investment in the industry. Nevertheless, the economic incentives for slamming are shared by all long-distance providers. Facility-based carriers have an economic incentive to slam because they have high fixed costs for network equipment and low costs for providing service to additional consumers. Thus, providing service to additional consumers, even without authorization, adds to a carrier’s cash flow with little additional cost. Conversely, those same high fixed costs represent a strong commitment to the long-distance industry and a need to maintain the trust, and business, of their existing customers. Resellers—switching and switchless—also provide long-distance service to their customers. Switching resellers maintain and operate switching equipment to connect their customers to the networks of facility-based carriers. Switchless resellers, however, have no equipment and generally rely on facility-based carriers and other resellers to service their customers. 
Resellers make a profit by selling long-distance services to their customers at rates that are higher than the fees the resellers pay to facility-based carriers for handling their customers’ calls. Both switching and switchless resellers have an economic incentive to slam because additional customers increase their profits. Further, unscrupulous telemarketers that contract with a long-distance provider may slam consumers to increase their commissions (e.g., a flat fee for every customer switched). However, entrepreneurial criminals engaged in slamming operations prefer acting as switchless resellers to generate fast profits and to make criminal prosecution more difficult. They have few, if any, overhead costs and need little, if any, financial investment in their businesses. In addition, filing the required tariff—or schedule of services, rates, and charges—with the FCC to initiate a business is inexpensive, and an unscrupulous individual can avoid even that cost altogether. The unscrupulous reseller can then slam customers, collect payments from them, and run—leaving unpaid bills to the facility-based carrier and other entities, such as billing companies, that assisted the reseller. If the reseller does not submit correct information to the FCC or state regulatory agencies, the likelihood of being caught and prosecuted is negligible. The owner/operator of our case-study companies used such tactics. (See app. I.) His eight known switchless reselling companies operated at various times between 1993 and 1996, charged their customers at least $20 million, and have been fined hundreds of thousands of dollars by state regulatory agencies and the FCC. However, neither the FCC nor we were able to locate him in 1997 or to date in 1998 because he has concealed his whereabouts. Both business and individual consumers must select a PIC to provide their long-distance service through their local exchange carrier. Intentional slamming is thus possible because the legitimate ways in which a consumer’s PIC can be changed (see following section) can be manipulated easily and in a fraudulent manner. Slamming can occur through deceptive marketing practices—whether by facility-based carriers, resellers, or telemarketers acting on their behalf—by which consumers are misled into signing an authorization to switch their PIC. Unscrupulous telemarketers or long-distance providers may also falsify records to make it appear that the consumer agreed verbally or in writing to the switch. It is also possible to slam consumers without ever contacting them, such as by obtaining their telephone numbers from a telephone book and submitting them to the local exchange carrier for changing. As an FCC Commissioner stated before a U.S. Senate subcommittee, “slamming scenarios involve [, among other methods,] deceptive sweepstakes, misleading forms, forged signatures and telemarketers who do not understand the word no.” Although the FCC, most states, and the telecommunications industry have some antislamming rules and practices in place, each relies on the others to be the main forces in the antislamming battle. Of the antislamming efforts, those by some states are the most extensive. However, we found no effective effort to keep unscrupulous individuals from becoming long-distance providers. For example, the FCC does not review information submitted to it in tariff filings that may alert it to unethical applicants. 
In addition, the FCC lags far behind some individual state regulatory agencies in the amount of fines imposed on companies for slamming. The FCC first adopted antislamming measures in 1985 and has subsequently promulgated regulations to improve its antislamming efforts. For example, in 1992 as a result of an increase in telemarketing, the FCC required long-distance providers to obtain one of four forms of verification concerning change-orders generated by telemarketing. Verification would occur upon the customer’s written authorization; the customer’s electronic authorization placed from the telephone number for which the PIC was to be changed; receipt of the customer’s oral authorization by an independent third party, operating in a location physically separate from the telemarketing representative; or the long-distance provider’s mailing of an information package to the customer within 3 business days of the customer’s request for a PIC change. In 1995, as a result of receiving thousands of slamming complaints, the FCC again revised its regulations. The revision, in part, prohibited the potentially deceptive or confusing practice of combining a letter of agency (LOA) with promotional materials sent to consumers. However, we found nothing in FCC practices that would effectively curtail unscrupulous individuals from entering the telecommunications industry. And no FCC regulation discusses what preventive measures the FCC should take to ensure that long-distance-provider applicants have a satisfactory record of integrity and business ethics. Further, according to FCC’s Deputy Director for Enforcement, Common Carrier Bureau, Enforcement Division, the FCC relies largely on state regulatory agencies and the industry’s self-regulating measures for antislamming efforts. According to representatives from state regulatory agencies, facility-based carriers, resellers of long-distance services, and others in the industry, they view an entity’s possession of an FCC tariff as a key credential for a long-distance provider. Each long-distance service provider is now required to file a tariff with the FCC, including information that should allow the FCC to contact the provider about, among other matters, an inordinate number of slamming complaints against it. However, according to knowledgeable FCC officials, the FCC merely accepts a tariff filing and does not review a filed tariff’s information, including that regarding the applicant. Thus, the filing procedure is no deterrent to a determined slammer. Neither does the procedure support the validity that states and the industry place on an entity that has filed an FCC tariff. For example, we easily filed a tariff with the FCC through deceptive means during our investigation when testing FCC’s oversight of the tariff-filing procedure. In short, although we submitted fictitious information for the tariff and did not pay FCC’s required $600 application fee, we received FCC’s stamp of approval. Thus, with a tariff on file, our fictitious company—PSI Communications—is able to do business and slam consumers as a switchless reseller with little chance of adverse consequences. Another antislamming measure—the FCC’s Common Carrier Scorecard—publicizes the more flagrant slammers, but it is inaccurate. The FCC prepares the scorecard, which lists the long-distance providers about which the FCC has received numerous slamming complaints, for the telecommunications industry and the public. 
The scorecard also compares those providers by citing the ratio of the number of complaints per million dollars of company revenue. However, it presents an inaccurate picture because it severely understates the number of complaints per million dollars of revenue for resellers. This occurs because resellers are not required to, and generally do not, report their revenue to the FCC unless that revenue exceeds $109 million. Therefore, in the absence of actual data and for the sake of comparison, the FCC assumes that those resellers had $109 million in revenue. This assumption results in unrealistically low complaint-to-revenue ratios for a large number of resellers. According to representatives of some state regulatory agencies, states rely largely on the FCC and the industry’s self-regulating measures for antislamming efforts. While most state regulatory agencies have some licensing procedures and requirements for an entity to become a long-distance service provider, those procedures/requirements vary from negligible to restrictive. For example, Utah does not regulate long-distance service providers. In contrast, in Georgia, switchless resellers must first file an application with the state public utility commission and provide a copy to the governor’s Office of Consumer Affairs. The commission then reviews the submission, determines whether to issue an interim certificate, and rereviews the interim certificate after 12 months to determine whether to issue a permanent certificate. In addition, switchless resellers must adhere to Georgia commission rules. The telecommunications industry also attempts to weed out companies involved in slamming. For example, various facility-based carriers have different antislamming measures based on the companies’ marketing philosophies. Such measures include MCI’s emphasis on the use of third party verifications and AT&T’s emphasis on use of written authorizations, or LOAs. In addition, a facility-based carrier may question a reseller’s submission of a large number of telephone numbers at one time. However, we found few activities that resellers were undertaking to curtail slamming. In addition, we found no industry practices that would effectively keep unscrupulous individuals from entering the telecommunications industry. Moreover, according to officials of a reselling company and a billing company, the industry largely relies on the FCC and state regulatory agencies for antislamming measures. Indeed, the most effective antislamming measure appears to be one that consumers themselves can effect against all but the most resourceful of slammers—a “PIC freeze.” The individual customer can contact the local exchange carrier and request a PIC freeze, in essence freezing the customer’s choice of long-distance providers from change. The customer may lift the freeze by recontacting the local exchange carrier and answering certain identifying questions about the customer’s account. In comparison with some states’ actions, the FCC has taken little punitive action against slammers. During 1997, the FCC obtained consent decrees from nine companies nationwide that paid $1,245,000 in fines because of slamming. However, in May 1997, the California Public Utilities Commission suspended one firm for 3 years because of slamming, fined it $2 million, and ordered it to refund another $2 million to its customers. Further, within the same general time period, other state regulatory commissions took more extensive actions than did the FCC against the same companies. 
For example, in December 1996, the California Public Utilities Commission reached a settlement with another company and its affiliate that were involved in slamming. The settlement suspended the firms from offering long-distance service in California for 40 months and required the firms to offer $600,000 in refunds to 32,000 customers who had complained about slamming. In comparison, during 1997, the FCC issued a Notice of Apparent Liability to this company for $200,000 for apparent slamming violations. In February 1998, the Florida Public Service Commission voted to require a third firm to show cause, in writing, why it should not be fined $500,000 for slamming violations. (This firm is also the subject of numerous slamming complaints in New Jersey and Tennessee.) In comparison, during 1997 the FCC issued a Notice of Apparent Liability to this firm amounting to only $80,000 for apparent slamming violations. Further, the FCC takes an inordinate amount of time, as acknowledged by FCC officials, to identify companies that slam consumers and to issue orders for corrective actions (i.e., fines, suspensions) or to bar them from doing business altogether. For example, Mr. Fletcher, the owner/operator of the case-study companies, began his large-scale slamming activities in 1995. But it was not until June 1997 that the FCC initiated enforcement action against the eight known Fletcher-controlled companies with an Order to Show Cause and Notice of Opportunity for Hearing. In the order, the FCC indicated that it had substantial evidence that the companies had ignored FCC’s PIC-change verification procedures and routinely submitted PIC-change requests that were based on forged or falsified LOAs. The FCC thus directed Mr. Fletcher and his companies to show cause in an evidentiary hearing why the FCC should not require them to cease providing long-distance services without prior FCC consent and why the companies’ operating authority should not be revoked. Because Mr. Fletcher waived his right to a hearing by not filing a “written appearance” stating that he would appear for such a hearing, the FCC could have entered an order detailing its final enforcement action against the Fletcher companies and Mr. Fletcher. However, as of March 1998, the FCC had taken no such action. Neither the FCC, the states, nor the telecommunications industry has been effective in protecting the consumer from telephone slamming. Because of the lack of FCC diligence, companies can become long-distance service providers without providing accurate background information. Some states have taken significant action to protect consumers from slamming, but others have taken little action or have no antislamming regulations. Further, the industry approach to slamming appears to be largely market-driven rather than consumer-oriented. Given this environment, unscrupulous long-distance providers slam consumers, often with virtual impunity. As a consequence, consumers and the industry itself are becoming increasingly vulnerable as targets for large-scale fraud. The most effective action that consumers can take to eliminate the chance of intentional slamming is to have their local exchange carrier freeze their choice of long-distance providers. Our investigation took place between January and March 1998. We interviewed representatives of the FCC and long-distance providers, including facility-based carriers and resellers. 
In addition, we interviewed representatives of billing and data-processing firms servicing long-distance providers. We reviewed available public records on slamming including prior congressional hearings and documents belonging to long-distance providers. These included AT&T documents provided to us pursuant to a subpoena issued by the Permanent Subcommittee on Investigations, Senate Committee on Governmental Affairs. Further, through the National Association of State Regulatory Agencies, we obtained and reviewed information from state entities that regulate long-distance service providers. To determine the extent of FCC’s oversight of tariff filings, we filed fictitious documentation with the FCC and did not pay the required filing fee. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to interested congressional committees and the Chairman of the Federal Communications Commission. Copies of this report will also be made available to others upon request. If you have any questions about our investigation, please call me at (202) 512-7455 or Assistant Director Ronald Malfi of my staff at (202) 512-7420. This case study is based on our limited investigation of four of Daniel H. Fletcher’s eight known business ventures operating as long-distance providers between 1993 and 1996. Through each business, it appears that Mr. Fletcher slammed or attempted to slam many thousands of consumers. As a further indication of the extent of his dealings, industry records, although incomplete, indicate that between 1993 and 1996 two of Mr. Fletcher’s companies billed their customers more than $20 million in long-distance charges. Mr. Fletcher apparently began reselling long-distance services in 1993. By mid-1996, the industry firms dealing with Mr. Fletcher’s companies began to end those dealings because of his customers’ slamming complaints and/or his nonpayment for long-distance network usage by his customers. Collectively, these firms claim that Mr. Fletcher’s companies owe them $3.8 million. Another firm has obtained a $10-million judgment against one Fletcher company. Mr. Fletcher’s companies have also come under regulatory scrutiny by several states and the FCC. For example, in 1997 the Florida Public Service Commission cancelled the right of one Fletcher-controlled company—Phone Calls, Inc. (PCI)—to do business in the state and fined it $860,000 for slamming. New York also took action against PCI in 1997. In May 1997, the FCC ordered another Fletcher company—Long Distance Services, Inc.—to forfeit $80,000 to the United States “for violating the Commission’s rules and orders” when it changed (or caused the change of) the long-distance providers of two customers without authorization and through the use of apparently forged LOAs. The FCC did not refer the $80,000 forfeiture to the U. S. Department of Justice for collection, according to an FCC official, because the Justice Department had previously failed to take action with similar cases. In addition, in June 1997, the FCC, citing numerous complaints and evidence of forged or falsified LOAs, issued an Order to Show Cause and Notice of Opportunity for Hearing regarding Mr. Fletcher and his eight companies. In that order, the FCC, in effect, directed Mr. 
Fletcher and his companies to show cause why the FCC should not require them to stop providing long-distance services without prior FCC consent and why the companies’ operating authority should not be revoked. However, since Mr. Fletcher did not provide the FCC a written appearance, or explanation, the FCC could have entered the order, citing FCC’s final enforcement action. However, as of March 1998, the FCC had not done so. It appears that all eight known Fletcher-controlled companies were out of business by the end of 1996. However, our investigation identified several instances of Mr. Fletcher’s continued involvement since then in the telecommunications industry. We have been unable to locate Mr. Fletcher for his response to the allegations because he knowingly used false information to conceal his identity and the location of his companies and residence(s). Based on an introduction by a Sprint representative, Mr. Fletcher’s long-distance reselling business Christian Church Network, Inc. (doing business as Church Discount Group, Inc.) entered into a contract on August 18, 1993, with Billing Concepts and Sprint. Under the terms of the contract, Christian Church Network submitted electronic records to Billing Concepts, representing its customers’ long-distance calls made over Sprint’s network. Billing Concepts (1) advanced 70 percent of the calls’ cost (as charged by the Fletcher company) to Sprint and (2) retained 30 percent in reserve for its administrative costs and potential nonpayment by the Fletcher company’s customers. Sprint deducted its network charges and sent the remainder to Christian Church Network. Under this arrangement, Billing Concepts sent the electronic records of the customers’ long-distance calls to the appropriate local exchange carriers for billing (at Christian Church Network’s charged rate) and collection. Within 60 days, the local exchange carriers sent approximately 95 percent of the billings’ value to Billing Concepts for the Fletcher company. The local exchange carriers withheld 5 percent for possible nonpayment by the Fletcher company’s customers. On July 22, 1994, Sprint, Billing Concepts, and Mr. Fletcher’s Christian Church Network modified their agreement whereby Billing Concepts would advance 70 percent of the billings directly to the Fletcher company rather than to Sprint. The Fletcher company was to pay Sprint for its network charges from the advances. Then from November 1994 to July 1995, the company did not receive advances from Billing Concepts and instead paid Sprint from payments received from the local exchange carriers. However, starting in July 1995, the Fletcher company requested and again received 70-percent advances from Billing Concepts. From November 1995 through April 1996, Christian Church Network produced a tenfold increase in the billable customer base. Between January and April 1996, the company also apparently stopped paying Sprint for its customers’ network usage, keeping the full 70-percent advance from Billing Concepts as its profit. Further, in July 1996, Mr. Fletcher—representing another of his eight companies, Long Distance Services, Inc.—signed a second contract with Billing Concepts. Billing Concepts continued advances to Christian Church Network until September 1996. Then, after receiving a large number of slamming complaints from Christian Church Network’s customers following the increase in the company’s customer base, Billing Concepts terminated all business with both Fletcher companies. 
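The exposure created by this advance arrangement can be illustrated with hypothetical numbers; the figures below are for illustration only and are not the actual case amounts, and the model is simplified (it assumes the carriers remit 95 percent of whatever customers actually pay).

```python
# Hypothetical illustration (not actual case figures) of the advance
# arrangement described above: a billing clearinghouse advances 70 percent
# of billed charges to the reseller, local exchange carriers later remit
# about 95 percent of what customers actually pay, and the clearinghouse
# absorbs the shortfall when slammed customers refuse to pay.
billed = 1_000_000.00          # charges the reseller submits for billing
advance_rate = 0.70            # share advanced to the reseller up front
lec_remit_rate = 0.95          # share of collections passed on by carriers
payment_rate = 0.60            # assumed share of billed charges customers pay

advance_to_reseller = billed * advance_rate
collected = billed * payment_rate
remitted_to_clearinghouse = collected * lec_remit_rate
clearinghouse_shortfall = advance_to_reseller - remitted_to_clearinghouse

print(f"Advance paid to reseller:  ${advance_to_reseller:,.2f}")
print(f"Remitted by carriers:      ${remitted_to_clearinghouse:,.2f}")
print(f"Clearinghouse shortfall:   ${clearinghouse_shortfall:,.2f}")
```

Under these assumptions the clearinghouse is left $130,000 short, while a reseller that also skips paying the underlying carrier keeps most of the advance as profit—the pattern reflected in the losses described below.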
From December 1993 through December 1996, the two Fletcher companies submitted over $12,432,000 in bills for long-distance usage to be forwarded to their customers. When Billing Concepts terminated business with the two Fletcher companies in September 1996 because of the alleged slamming, it had already advanced the companies more than it would receive from the local exchange carriers. (Those carriers returned less than had been billed because some customers did not pay after learning they had been slammed.) Billing Concepts claims that the two Fletcher companies owe it approximately $586,000 that it was unable to collect from the local exchange carriers. In addition, Sprint terminated its business relationship with Christian Church Network and Long Distance Services in September 1996 for nonpayment of outstanding network charges. Sprint claims that the two companies still owe it about $547,000 for that nonpayment. (Sprint attempted to renegotiate its contract with Mr. Fletcher’s Christian Church Network before the termination. Our investigation indicates that Mr. Fletcher instead took his increased customer base to Atlas Communications via another of his eight companies, Phone Calls, Inc. [PCI], and did not pay Sprint. See later discussion regarding PCI and Atlas.) On October 19, 1994, Mr. Fletcher, doing business as Long Distance Services, Inc., signed a contract with AT&T to place his customers on its network. The agreement called for Long Distance Services to purchase a minimum of $300,000 of long-distance service annually. AT&T’s incomplete records indicated that starting in March 1996, the Fletcher company began to dramatically increase the number of new customers to be placed on AT&T’s network. During an April 8, 1996, telephone call to AT&T and in an April 9, 1996, letter sent via facsimile, Mr. Fletcher requested that AT&T confirm that (1) AT&T had accepted the new customers that his company had transmitted to AT&T since March 1, 1996, and (2) AT&T had put them on line. According to Mr. Fletcher’s letter, his Long Distance Services had requested that more than 540,000 new customers be switched to AT&T. The letter also noted that the company was sending an additional 95,000 customer telephone numbers that day. In an April 9, 1996, return letter to Mr. Fletcher, AT&T questioned his customer base and his customers’ letters of agency (LOA) authorizing the change of long-distance companies. AT&T requested that Mr. Fletcher forward a sampling of the LOAs, and Mr. Fletcher provided approximately 1,000. In another letter to Mr. Fletcher, dated April 16, 1996, AT&T provided reasons why it believed the LOAs were in violation of FCC regulations (47 C.F.R. section 64.1150): (1) the LOAs had been combined with a commercial inducement, (2) Mr. Fletcher’s LOA form did not clearly indicate that the form was authorizing a change to the customer’s Primary Interexchange Carrier (PIC), and (3) it did not identify the carrier to which the subscriber would be switched. On April 25, 1996, AT&T wrote Mr. Fletcher informing him that it had rejected all “orders” (new customers) sent by Long Distance Services, Inc., presumably since March 1, 1996. Although AT&T recognized a problem with Mr. Fletcher and his business practices during April 1996, it continued service to Long Distance Services, Inc. until November 1, 1997, when it discontinued service for nonpayment for network usage. According to an AT&T representative, Long Distance Services, Inc. still owes AT&T over $1,652,000. On January 5, 1995, Mr. 
Fletcher, doing business as Discount Calling Card, Inc., signed a contract with Integretal, a billing company. Although Integretal officials provided us little information, stating that the information was missing, we did determine the following. From May 5, 1995, through February 26, 1996, Integretal processed approximately $8,220,000 in long-distance call billings for Discount Calling Card customers. Under the terms of its agreement, Integretal advanced the Fletcher company 70 percent of the billing value of the electronic records of calls submitted by the company. Integretal was contractually entitled to retain 30 percent of the calls’ value for processing and potential nonpayment by Discount Calling Card’s customers. Because of billing complaints made by Discount Calling Card’s customers, Integretal claims that it lost about $1,144,000 that it was unable to recover from the company. Integretal stopped doing business with Discount Calling Card in November 1996 because of numerous customer complaints. On June 18, 1996, the Fletcher-controlled Phone Calls, Inc. (PCI) and Atlas Communications, Inc. signed a business contract for PCI’s customers to be placed on Atlas’ network (Sprint). In early July 1996, PCI provided its customer base of 544,000 telephone numbers to Atlas. (Information developed by our investigation suggests that Fletcher companies slammed these customers largely from the customer base they had given to Billing Concepts.) Subsequently, Atlas provided the PCI customer telephone numbers to Sprint for placement on Sprint’s network. However, within the next several weeks, Atlas was able to place only about 200,000 telephone numbers from PCI’s customer base on Sprint’s network. This occurred, according to Atlas representatives, because (1) the individual consumers had placed a PIC freeze with their local exchange carriers, preventing the change or (2) the telephone numbers were inoperative. Because of this low placement rate, Atlas became concerned that PCI was slamming customers and elected not to honor its contract. Subsequently, on August 19, 1996, PCI filed a lawsuit against Atlas in Pennsylvania, attempting to obtain (as per the original contract) the raw record material representing the details of its customers’ telephone usage, which would allow PCI to bill its customers. Sprint had supplied this raw record material to Atlas. In August 1996, Atlas submitted evidence, in the breach-of-contract suit brought by PCI, indicating that many slamming complaints had been made against PCI. For example, after the first bills, representing PCI customers’ calls for July and August 1996, had been sent out, an unusually high percentage (approximately 30 percent) of PCI customers lodged complaints with regulators and government law enforcement agencies—including the FCC, various public utility commissions, and various state attorneys general; Sprint; and numerous local exchange carriers. According to an Atlas representative, Atlas attempted to answer these complaints and reviewed the customers’ LOAs authorizing the change of long-distance companies. After the review, Atlas believed that a number of the LOAs were forgeries. According to the vice president of Atlas Communications, the judge issued a temporary restraining order, preventing PCI from obtaining the raw record material. The judge also agreed to allow Atlas to charge PCI’s customers at the existing standard AT&T long-distance rates (as the most prevalent U.S. service) rather than PCI’s excessively high rates. 
Subsequently, Atlas entered into a contract with US Billing to perform billing-clearinghouse services for Atlas regarding PCI’s customers. In this instance, Atlas’ prompt action prevented PCI from receiving any payments for its customers’ long-distance calls. By February 1998, Atlas was serving less than 20 percent of the original 200,000 PCI customers that had been successfully placed on Sprint’s network. This sharp drop in the customer base occurred, according to an Atlas representative, largely because PCI had initially slammed the customers. On the basis of the 1996 suit in Pennsylvania, Atlas obtained a $10-million judgment against the Fletcher-controlled PCI because, according to the court, PCI fraudulently obtained customers to switch their long-distance telephone service to Atlas’ network; identified customers to Atlas, for Atlas’ placement on its network, in states within which PCI was not certificated as a long-distance service provider; failed to supply customer service to those customers it had caused Atlas to place on its network; and failed to supply customers, Atlas, or regulatory agencies with those customers’ LOAs upon request. Further, in August 1997, the Florida Public Service Commission fined the Fletcher-controlled PCI $860,000 for slamming, failing to respond to commission inquiries, and misusing its certificate to provide telecommunications service in Florida. This fine was in addition to the commission’s March 1997 cancellation of PCI’s certificate. According to a statement by the chairman of the commission, PCI accounted for over 400 of the nearly 2,400 slamming complaints received by the commission in 1996. This was the largest number of complaints logged by the commission against any company in a similar period. New York regulators also revoked PCI’s license in mid-1997.

Barbara Coles, Senior Attorney
Pursuant to a congressional request, GAO provided information on: (1) which entities or companies engage in telephone slamming violations; (2) the process by which the providers defraud consumers; and (3) what the Federal Communications Commission (FCC), state regulatory entities, and the telecommunications industry have done to curtail slamming. GAO noted that: (1) of the three types of long-distance providers--facility-based carriers, which have extensive physical equipment; switching resellers, which have one or more switching stations; and switchless resellers--the switchless resellers, having the least to lose and the most to gain, most frequently engage in intentional slamming, according to the FCC, state regulatory agencies, and the telecommunications industry; (2) intentional slamming is accomplished by deceptive practices; (3) these include falsifying documents that authorize a switch and misleading customers into signing such a document; (4) the FCC, state regulatory agencies, and the telecommunications industry each rely on the others to be the main forces against intentional slamming; (5) however, with regard to the FCC, its antislamming measures effectively do little to protect consumers from slamming; (6) although representatives of state regulatory agencies and the industry view a provider's FCC tariff--a schedule of services, rates, and charges--as a key credential, the FCC places no significance on the tariffs that long-distance providers are required to file with it before providing service; (7) although the FCC in 1996 attempted to regulate tariffs out of existence, a circuit court stayed that FCC regulation in 1997 as a result of a lawsuit; (8) the FCC now accepts tariffs; however, it does not review the tariff information; (9) thus, having a tariff on file with the FCC is no guarantee of a long-distance provider's integrity or of FCC's ability to penalize a provider that slams consumers; (10) as part of GAO's investigation and using fictitious information, GAO easily filed a tariff with the FCC and could now, as a switchless reseller, slam consumers with little chance of being caught; (11) state regulatory measures that could preclude slamming range from none in a few states to extensive in others; (12) industry's antislamming measures appear to be more market-driven; and (13) however, a Primary Interexchange Carrier (PIC) freeze--an action that consumers can take by contacting their local exchange carrier and freezing their choice of Primary Interexchange Carriers, or long-distance providers--effectively reduces the chance of intentional slamming.
Information security is a critical consideration for any organization reliant on information technology (IT) and especially important for government agencies, where maintaining the public’s trust is essential. The dramatic expansion in computer interconnectivity, and the rapid increase in the use of the Internet, have changed the way our government, the nation, and much of the world communicate and conduct business. However, without proper safeguards, systems are unprotected from attempts by individuals and groups with malicious intent to intrude and use the access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. This concern is well-founded for a number of reasons, including the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, the steady advance in the sophistication and effectiveness of attack technology, and the dire warnings of new and more destructive attacks to come. Cyber threats to federal information systems and cyber-based critical infrastructures are evolving and growing. These threats can be unintentional or intentional, targeted or nontargeted, and can come from a variety of sources, such as foreign nations engaged in espionage and information warfare, criminals, hackers, virus writers, and disgruntled employees and contractors working within an organization. Moreover, these groups and individuals have a variety of attack techniques at their disposal, and cyber exploitation activity has grown more sophisticated, more targeted, and more serious. As government, private sector, and personal activities continue to move to networked operations, as digital systems add ever more capabilities, as wireless systems become more ubiquitous, and as the design, manufacture, and service of IT have moved overseas, the threat will continue to grow. In the absence of robust security programs, federal agencies have experienced a wide range of incidents involving data loss or theft and computer intrusions, underscoring the need for improved security practices. Recognizing the importance of securing federal agencies’ information and systems, Congress enacted the Federal Information Security Management Act of 2002 (FISMA) to strengthen the security of information and information systems within federal agencies. FISMA requires each agency to use a risk-based approach to develop, document, and implement an agencywide security program for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. The National Aeronautics and Space Act of 1958 (Space Act), as amended, established NASA as the civilian agency that exercises control over U.S. aeronautical and space activities and seeks and encourages the fullest commercial use of space. NASA’s mission is to pioneer the future of space exploration, scientific discovery, and aeronautics research. Its current and planned activities span a broad range of complex and technical endeavors, including deploying a global climate change research and monitoring system, returning Americans to the Moon and exploring other destinations, flying the Space Shuttle to complete the International Space Station, and developing new space transportation systems.

[Figure 1: NASA facility locations—Cleveland, OH; Greenbelt, MD; Washington, D.C.; Hampton, VA; Cape Canaveral, FL; Houston, TX; Huntsville, AL; and Hancock County, MS.]

Federally Funded Research and Development Centers meet some special long-term research or development needs of the government and are operated, managed, and/or administered by either a university or consortium of universities, other not-for-profit or nonprofit organizations, or an industrial firm, as an autonomous organization or as an identifiable separate operating unit of a parent organization. Headquarters is responsible for providing the agency’s strategic direction, top-level requirements, schedules, budgets, and oversight of its mission. The NASA Administrator is responsible for leading the agency and is accountable for all aspects of its mission, including establishing and articulating its vision and strategic priorities and ensuring successful implementation of supporting policies, programs, and performance assessments. In this regard, the Office of the Administrator has overall responsibility for overseeing the activities and functions of the agency’s mission and mission support directorates and centers. NASA Headquarters has the following four mission directorates that define the agency’s major lines of business or core mission segments:

Aeronautics Research pursues long-term, innovative, and cutting-edge research that develops tools, concepts, and technologies to enable a safer, more flexible, environmentally friendly, and more efficient national air transportation system. It also supports the agency’s human and robotic reentry vehicle research.

Exploration Systems is leading the effort to develop capabilities for sustained and affordable human and robotic missions. The directorate is focused on developing the agency’s next generation of human exploration spacecraft designed to carry crew and cargo to low Earth orbit and beyond, and partnering with industry and expanding the commercial technology sector. The directorate’s responsibilities include operating the Lunar Reconnaissance Orbiter, Ares V Cargo Launch Vehicle, and Orion Crew Exploration Vehicle.

Science carries out the scientific exploration of Earth and space to expand the frontiers of earth science, heliophysics, planetary science, and astrophysics. Through a variety of robotic observatory and explorer craft, and through sponsored research, the directorate provides virtual human access to the farthest reaches of space and time, as well as practical information about changes on Earth. The directorate’s responsibilities include operating the Cassini orbiter, Hubble Space Telescope, and James Webb Space Telescope.

Space Operations provides mission-critical space exploration services to both NASA customers and to other partners within the United States and throughout the world. The directorate’s responsibilities include flying the Space Shuttle to assemble the International Space Station, operating it after assembly is completed, and ensuring the health and safety of astronauts.

Each of the agency’s four directorates is responsible and accountable for mission safety and success for the programs and projects assigned to it. Figure 2 contains images and artist renderings of some of the spacecraft that are deployed or in development that support the agency’s programs and projects. NASA headquarters also consists of mission support offices and other offices that advise the administrator and carry out the common or shared services that support core mission segments. 
These support offices include the Office of Chief Safety and Mission Assurance, Office of Security and Program Protection, Office of the Chief Financial Officer, Office of the Chief Information Officer, Office of the Inspector General, and Office of Institutions and Management. See appendix II for the agency’s organization chart. Centers are responsible for executing the agency programs and projects. Each center has a director who reports to an Associate Administrator in the Office of the Administrator. A key institutional role of center directors is serving the needs of the mission directorates and determining how best to support the various programs and projects hosted at a given center. Specific responsibilities include (1) providing resources and managing center operations; (2) ensuring that statutory, regulatory, fiduciary, and NASA requirements are met; and (3) establishing and maintaining the staff and their competency. JPL is a Federally Funded Research and Development Center that is operated by the California Institute of Technology using government-owned equipment. The California Institute of Technology is under a contract with NASA that is renegotiated every 5 years. JPL develops and maintains technical and managerial competencies specified in the contract in support of NASA’s programs and projects, including (1) exploring the solar system to fully understand its formation and evolution, (2) establishing a continuous permanent robotic presence on Mars to discover its history and habitability, and (3) conducting communications and navigation for deep space missions. Headquarters, centers, and JPL support multiple mission directorates by taking on management responsibility and contributing to their programs and projects. See appendix III for a description of the missions of the individual centers and JPL. Table 1 identifies the mission directorates supported by each of these entities. In fiscal year 2009, NASA had a budget of $17.78 billion, employed approximately 18,000 civil service employees, and used approximately 30,000 contractor employees. NASA’s budget request for fiscal year 2010 is $18.686 billion, which is roughly a 5 percent increase from fiscal year 2009. The agency’s IT budget in fiscal year 2009 was $1.6 billion, of which $15 million was dedicated to IT security. The Space Act authorizes and encourages NASA to enter into partnerships that help fulfill its mission. Thus, the agency engages in strategic partnerships with other federal agencies and a wide variety of academic, private sector, and international organizations to leverage their unique capabilities. For example, the agency partners with (1) the space agencies of Canada, Japan, and Russia as well as European Space Agency country members Belgium, Denmark, France, Germany, Italy, Netherlands, Norway, Spain, Sweden, and the United Kingdom; (2) federal agencies such as the Federal Aviation Administration, the Department of Energy, the National Oceanic and Atmospheric Administration, and the U.S. Air Force, Army, and Navy; (3) institutes, organizations, and universities in India, Finland, France, Latin America, New Zealand, the United Kingdom, and the United States; and (4) corporations such as Boeing and Lockheed Martin. NASA depends on a number of key computer systems and communication networks to conduct its work. 
These networks traverse the Earth and beyond, providing critical two-way communication links between Earth and spacecraft; connections between NASA centers and partners, scientists, and the public; and administrative applications and functions. Table 2 lists several of the key networks supporting the agency. Networks such as the DSN and the IONet send data to and receive data from spacecraft via satellite relays and ground antennae. Satellite telescopes accumulate status data, such as the satellite's position and health, and science data, such as images and measurements of the celestial object being studied. Data are stored onboard the satellite and transmitted to Earth in batches via satellite relays and ground antennae. For example, figure 3 illustrates how several of these networks are connected and communicate with spacecraft such as the Hubble Space Telescope, the International Space Station, and the Cassini orbiter. As shown above, the Cassini orbiter sends data directly to the ground station antennae at the communication complexes in Australia, California, and Spain. The Hubble Space Telescope and the International Space Station send data via the Tracking and Data Relay Satellite System to ground station antennae in New Mexico and Guam. Data received from spacecraft are stored at antenna facilities until they are distributed to the appropriate locations through ground communications such as IONet. When data are sent to spacecraft, these pathways are reversed. Imperative to mission success is the protection of information and information systems supporting NASA. One of the agency's most valuable assets is the technical and scientific knowledge and information generated by NASA's research, science, engineering, technology, and exploration initiatives. The agency relies on computer networks and systems to collect, access, or process a significant amount of data that requires protection, including data considered mission-critical, proprietary, and/or sensitive but unclassified. For example, the agencywide system controlling physical access to NASA facilities stores personally identifiable information such as fingerprints, Social Security numbers, and pay grades, and an application for storing and sharing data, such as computer-aided design and electrical drawings and engineering documentation for Ares launch vehicles, is being used by 7 agency data centers at 11 locations. Accordingly, effective information security controls are essential to ensuring that sensitive information is adequately protected from inadvertent or deliberate misuse, fraudulent use, improper disclosure or manipulation, and destruction. The compromise or loss of such information could cause harm to a person's privacy or welfare, adversely impact economic or industrial institutions, compromise programs or operations essential to the safeguarding of our national interests, and weaken the strategic technological advantage of the United States. FISMA requires each federal agency to develop, document, and implement an agencywide information security program to provide security for the information and information systems that support the operations and assets of the agency, including those provided or managed by other agencies, contractors, or other sources. As described in table 3, NASA has designated certain senior managers at headquarters and its centers to fill the key roles in information security designated by FISMA and agency policy.
Although NASA had implemented many information security controls to protect networks supporting its missions, weaknesses existed in several critical areas. Specifically, the centers did not consistently implement effective electronic access controls, including user accounts and passwords, access rights and permissions, encryption of sensitive data, protection of information system boundaries, audit and monitoring of security-relevant events, and physical security to prevent, limit, and detect access to their networks and systems. In addition, weaknesses in other information system controls, including managing system configurations and patching sensitive systems, further increase the risk to the information and systems that support NASA’s missions. A key reason for these weaknesses was that NASA had not yet fully implemented key elements of its information security program. As a result, highly sensitive personal, scientific, and other data were at an increased risk of unauthorized use, modification, or disclosure. A basic management objective for any organization is to protect the resources that support its critical operations from unauthorized access. Organizations accomplish this objective by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. Inadequate access controls diminish the reliability of computerized information and increase the risk of unauthorized disclosure, modification, and destruction of sensitive information and disruption of service. Access controls include those related to (1) user identification and authentication, (2) user access authorizations, (3) cryptography, (4) boundary protection, (5) audit and monitoring, and (6) physical security. Weaknesses in each of these areas existed across the NASA environment. A computer system must be able to identify and authenticate different users so that activities on the system can be linked to specific individuals. When an organization assigns unique user accounts to specific users, the system is able to distinguish one user from another—a process called identification. The system must also establish the validity of a user’s claimed identity by requesting some kind of information, such as a password, that is known only by the user—a process known as authentication. The combination of identification and authentication— such as user account/password combinations—provides the basis for establishing individual accountability and for controlling access to the system. National Institute of Standards and Technology (NIST) states that (1) information systems should uniquely identify and authenticate users (or processes on behalf of users), (2) passwords should be implemented that are sufficiently complex to slow down attackers, (3) information systems should protect passwords from unauthorized disclosure and modification when stored and transmitted, and (4) passwords should be encrypted to ensure that the computations used in a dictionary or password cracking attack against a stolen password file cannot be used against similar password files. NASA did not adequately identify and authenticate users in systems and networks supporting mission directorates. For example, NASA did not configure certain systems and networks at two centers to have complex passwords. Specifically, these systems and networks did not always require users to create long passwords. 
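To illustrate the NIST principles described above, the following simplified sketch shows how a system can enforce a password complexity rule and store only a salted, iterated hash of each password rather than the password itself. The example is illustrative only; the specific complexity rule, iteration count, and sample values are assumptions made for this sketch and do not represent NASA's configurations or any particular NASA system.

    import hashlib
    import hmac
    import os
    import re

    # Illustrative complexity rule (an assumption, not NASA policy): at least 12
    # characters, with uppercase and lowercase letters, a digit, and a special character.
    COMPLEXITY_RULES = [
        (r".{12,}", "at least 12 characters"),
        (r"[A-Z]", "an uppercase letter"),
        (r"[a-z]", "a lowercase letter"),
        (r"[0-9]", "a digit"),
        (r"[^A-Za-z0-9]", "a special character"),
    ]

    def check_complexity(password):
        """Return the list of unmet requirements; an empty list means the password passes."""
        return [label for pattern, label in COMPLEXITY_RULES if not re.search(pattern, password)]

    def hash_password(password):
        """Store only a random salt and an iterated hash, so a stolen credential file
        cannot be used directly and dictionary attacks against it are slowed down."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100000)
        return salt, digest

    def verify_password(password, salt, stored_digest):
        """Recompute the salted hash for a login attempt and compare it to the stored value."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100000)
        return hmac.compare_digest(candidate, stored_digest)

    # Hypothetical usage for illustration only.
    problems = check_complexity("short1!")   # unmet: length and uppercase requirements
    salt, digest = hash_password("Correct-Horse-42!")
    assert verify_password("Correct-Horse-42!", salt, digest)

In this approach the credential file never contains recoverable passwords, which addresses the NIST expectation that stored and transmitted passwords be protected from disclosure, and tying each credential to a unique account is what allows system activity to be attributed to a specific individual.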
In addition, users did not need passwords to access certain network devices. Furthermore, encrypted password and network configuration files were not adequately protected, and passwords were not encrypted. As a result, increased risk exists that a malicious individual could guess or otherwise obtain user identification and passwords to gain network access to NASA systems and sensitive data. Authorization is the process of granting or denying access rights and privileges to a protected resource, such as a network, system, application, function, or file. A key component of granting or denying access rights is the concept of “least privilege.” Least privilege is a basic principle for securing computer resources and data that means that users are granted only those access rights and permissions that they need to perform their official duties. To restrict legitimate users’ access to only those programs and files that they need in order to do their work, organizations establish access rights and permissions. “User rights” are allowable actions that can be assigned to users or to groups of users. File and directory permissions are rules that are associated with a particular file or directory, regulating which users can access it—and the extent of that access. To avoid unintentionally giving users unnecessary access to sensitive files and directories, an organization must give careful consideration to its assignment of rights and permissions. However, all three NASA centers we reviewed did not always sufficiently restrict system access and privileges to only those users that needed access to perform their assigned duties. For example, the centers did not always restrict access to sensitive files and control unnecessary remote access. In addition, NASA centers allowed shared accounts and group user IDs and did not restrict excessive user privileges. Furthermore, NASA centers did not effectively limit access to key network devices through access control lists. As a result, increased risk exists that users could gain inappropriate access to computer resources, circumvent security controls, and deliberately or inadvertently read, modify, or delete critical mission information. Cryptography underlies many of the mechanisms used to enforce the confidentiality and integrity of critical and sensitive information. A basic element of cryptography is encryption. Encryption can be used to provide basic data confidentiality and integrity by transforming plain text into ciphertext using a special value known as a key and a mathematical process known as an algorithm. The National Security Agency (NSA) recommends encrypting network services. If encryption is not used, sensitive information such as user ID and password combinations are susceptible to electronic eavesdropping by devices on the network when they are transmitted. In addition, the OMB has recommended that all federal agencies encrypt all data on mobile devices like laptops, unless the data has been determined to be nonsensitive. Although NASA has implemented cryptography, it was not always sufficient or used in transmitting sensitive information. For example, NASA centers did not always employ a robust encryption algorithm that complied with federal standards to encrypt sensitive information. The three centers we reviewed neither used encryption to protect certain network management connections, nor did they require encryption for authentication to certain internal services. 
Instead, the centers used unencrypted protocols to manage network devices, such as routers and switches. In addition, NASA had not installed full-disk encryption on its laptops at all three centers. As a result, sensitive data transmitted through the unclassified network or stored on laptop computers were at an increased risk of being compromised. Boundary protection controls logical connectivity into and out of networks and controls connectivity to and from network-connected devices. Unnecessary connectivity to an organization's network increases not only the number of access paths that must be managed and the complexity of the task, but the risk of unauthorized access in a shared environment. NIST guidance states that firewalls should be configured to provide adequate protection for the organization's networks and that information transmitted between interconnected systems should be controlled and regulated. Although NASA had employed controls to segregate sensitive areas of its networks and protect them from intrusion, it did not always adequately control the logical and physical boundaries protecting its information and systems. For example, NASA centers did not use host-based firewalls to adequately protect their workstations and laptops from intrusions. Furthermore, firewalls at the centers did not provide adequate protection for the organization's networks, since they could be bypassed. In addition, the three centers had an e-mail server that allowed spoofed e-mail messages and potentially harmful attachments to be delivered to NASA. As a result, the hosts on these system networks were at increased risk of compromise or disruption from other, lower-security networks. To establish individual accountability, monitor compliance with security policies, and investigate security violations, it is crucial to determine who has taken actions on the system, what these actions were, and when they were taken. According to NIST, when performing vulnerability scans, greater emphasis should be placed upon systems that are accessible from the Internet (e.g., Web and e-mail servers); systems that house important or sensitive applications or data (e.g., databases); or network infrastructure components (e.g., routers, switches, and firewalls). In addition, according to commercial vendors, running scanning software in an authenticated mode allows the software to detect additional vulnerabilities. NIST also states that the use of secure software development techniques, including source code review, is essential to preventing a number of vulnerabilities from being introduced into items such as a Web service. NASA requires that audit trails be implemented on NASA IT systems. Although NASA regularly monitored its unclassified network for security vulnerabilities, the monitoring was not always comprehensive. For example, none of the three centers we reviewed conducted vulnerability scans for such sensitive applications as databases. In addition, the centers did not conduct source code reviews. Furthermore, not all segments and protocols on center networks were effectively monitored by intrusion detection systems. Moreover, NASA had not configured several database systems to enable auditing and monitoring of security-relevant events and did not adequately log authentication, authorization, and accounting activities. As a result, NASA may not detect certain vulnerabilities or unauthorized activities, leaving the network at increased risk of compromise or disruption.
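As a simplified illustration of the kind of audit records discussed above, the following sketch logs authentication and authorization events as structured entries that identify who took an action, from where, when, and with what outcome. The file name, field names, and sample events are assumptions created for this illustration and are not drawn from NASA systems.

    import json
    import logging
    from datetime import datetime, timezone

    # Write security-relevant events to an append-only audit log as structured records
    # so they can later be reviewed, correlated, and used to establish accountability.
    logging.basicConfig(filename="security_audit.log", level=logging.INFO, format="%(message)s")
    audit_log = logging.getLogger("audit")

    def record_event(event_type, user, source_ip, outcome, detail=""):
        """Record one security-relevant event (authentication, authorization, or accounting)."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event_type,
            "user": user,
            "source_ip": source_ip,
            "outcome": outcome,
            "detail": detail,
        }
        audit_log.info(json.dumps(entry))

    # Hypothetical examples of events a database or network device might generate.
    record_event("authentication", "jdoe", "198.51.100.23", "failure", "invalid password")
    record_event("authorization", "jdoe", "198.51.100.23", "success", "read telemetry table")

Reviewing such records, or feeding them to an intrusion detection capability, is what allows an organization to determine who has taken actions on a system, what those actions were, and when they were taken.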
Until NASA establishes detailed audit logs for its systems at these facilities or compensating controls in cases where such logs are not feasible, it risks being unable to determine if malicious incidents are occurring and, after an event occurs, being unable to determine who or what caused the incident. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls restrict physical access to computer resources, usually by limiting access to the buildings and rooms in which the resources are housed and by periodically reviewing the access granted in order to ensure that it continues to be appropriate. NASA policy requires that its facilities and buildings be provided the level of security commensurate with the level of risk as determined by a vulnerability risk assessment. In addition, NASA policy requires enhanced security measures for its mission essential infrastructure such as computing facilities and data centers, including access control systems, lighting, and vehicle barriers such as bollards or jersey barriers. NIST policy also requires that federal agencies implement physical security and environmental safety controls to protect IT systems and facilities, as well as employees and contractors. These controls include protections to prevent excessive heat and fires or unnecessary water damage. NASA had various protections in place for its IT resources. It effectively secured many of its sensitive areas and computer equipment and takes other steps to provide physical security. For example, all three NASA centers issued electronic badges to help control access to many of their sensitive and restricted areas. The agency also maintains liaisons with law enforcement agencies to help ensure additional security backup is available if necessary and to facilitate the accurate flow of timely security information among appropriate government agencies. However, NASA’s computing facilities may be vulnerable to attack because of weaknesses in controls over physical access points, including designated entry and exit points to the facilities where information systems reside. NASA also neither enforced stringent physical access measures for, and authorizations to, areas within a facility, nor did it maintain and review at least annually a current list of personnel with access to all IT-intensive facilities and properly authenticate visitors to these facilities. In addition, we were only able to obtain evidence that risk assessments were performed for 11 of the 24 NASA buildings we visited, which contained significant and sensitive IT resources. NASA also did not fully implement enhanced security measures for its mission essential infrastructure such as computing facilities and data centers. To illustrate, retractable bollards that protect delivery doors, generators, and fuel tanks at the data and communication centers were not operable and were in the “open” retracted position. NASA also did not fully follow NIST safety and security guidance. In addition, a data center that houses a large concentration of sensitive IT equipment including the laboratory’s supercomputer had “wet pipe” automatic sprinkler protection. This type of protection presents risks of water leaks that could do considerable damage to the sensitive and expensive computer equipment in the event of a fire. 
In addition, this data center’s critical cooling equipment and fans located at the rear of the facility were not separately enclosed and protected. Although the facility’s perimeter is fenced, an unauthorized individual could scale the fence and damage or sabotage the cooling equipment. Because areas containing sensitive IT and support equipment were not adequately protected, NASA has less assurance that computing resources are protected from inadvertent or deliberate misuse including sabotage, vandalism, theft, and destruction. In addition to access controls, other important controls should be in place to ensure the security and reliability of an organization’s information. These controls include policies, procedures, and control techniques to (1) appropriately segregate incompatible duties and (2) manage system configurations and implement patches. Weaknesses in these areas could increase the risk of unauthorized use, disclosure, modification, or loss of NASA’s mission sensitive information. Segregation of duties refers to the policies, procedures, and organizational structure that help ensure that one individual cannot independently control all key aspects of a process or computer-related operation and thereby gain unauthorized access to assets or records. Often segregation of incompatible duties is achieved by dividing responsibilities among two or more organizational groups. Dividing duties among two or more individuals or groups diminishes the likelihood that errors and wrongful acts will go undetected because the activities of one individual or group will serve as a check on the activities of the other. Inadequate segregation of duties increases the risk that erroneous or fraudulent transactions could be processed, improper program changes implemented, and computer resources damaged or destroyed. NASA did not adequately segregate incompatible duties. For example, all network users at two centers we reviewed had administrative privileges to their local computer and could install unapproved software. Only system administrators should have these privileges. As a consequence, increased risk exists that users could perform unauthorized system activities without detection. Patch management is a critical process that can help alleviate many of the challenges of securing computing systems. As vulnerabilities in a system are discovered, attackers may attempt to exploit them, possibly causing significant damage. Malicious acts can range from defacing Web sites to taking control of entire systems, thereby being able to read, modify, or delete sensitive information; disrupt operations; or launch attacks against other organizations’ systems. After a vulnerability is validated, the software vendor may develop and test a patch or work-around to mitigate the vulnerability. Incident response groups and software vendors issue information updates on the vulnerability and the availability of patches. Although NASA had implemented innovative techniques to maintain system configurations and install patches, shortcomings existed. For example, all three NASA centers had not applied a critical operating system patch or patches for a number of general third-party applications. As a result, NASA had limited assurance that all needed patches were applied to critical system resources, increasing the risk of exposing critical and sensitive unclassified data to unauthorized access. 
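The patch management process described above can be partially automated. The following simplified sketch compares installed software versions against the minimum versions named in vendor advisories and flags anything that has fallen behind; the package names and version numbers are hypothetical and are used only to illustrate the technique.

    # Compare installed software versions against vendor advisories and report
    # any package whose installed version is older than the patched version.

    def parse_version(version):
        """Convert a dotted version string such as '2.4.10' into a comparable tuple."""
        return tuple(int(part) for part in version.split("."))

    def find_missing_patches(installed, advisories):
        """Return (package, installed_version, required_version) for out-of-date packages."""
        missing = []
        for package, required in advisories.items():
            current = installed.get(package)
            if current is None or parse_version(current) < parse_version(required):
                missing.append((package, current, required))
        return missing

    # Hypothetical inventory and advisory data for illustration only.
    installed_software = {"web-server": "2.4.10", "pdf-viewer": "9.1.0", "java-runtime": "1.6.0"}
    vendor_advisories = {"web-server": "2.4.12", "pdf-viewer": "9.3.1", "java-runtime": "1.6.0"}

    for package, current, required in find_missing_patches(installed_software, vendor_advisories):
        print(f"{package}: installed {current}, advisory requires {required} or later")

Regularly running such a comparison against an accurate software inventory helps ensure that critical patches are identified and applied before attackers can exploit the underlying vulnerabilities.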
Furthermore, although the three centers had configured their e-mail systems to prevent many common cyber attacks, they were still vulnerable to attack because their systems allowed various file types as e-mail attachments. These files could be used to install malicious software onto an unsuspecting user's workstation, potentially compromising the network. As a result, increased risk exists that an attacker could exploit known vulnerabilities in these applications to execute malicious code and gain control of or compromise a system. A key reason for these weaknesses is that, although NASA has made important progress, it has not yet effectively or fully implemented its agencywide information security program. FISMA requires agencies to develop, document, and implement an information security program that, among other things, includes periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems; policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements; plans for providing adequate information security for networks, facilities, and systems; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency's required inventory of major information systems; a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in its information security policies, procedures, or practices; plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency; and procedures for detecting, reporting, and responding to security incidents. In addition, FISMA states that the agency information security program applies to the information and information systems provided or managed by contractors or other sources. We identified a number of shortcomings in key program activities. For example, NASA had not always (1) fully assessed information security risks; (2) fully developed and documented security policies and procedures; (3) included key information in security plans; (4) conducted comprehensive tests and evaluation of its information system controls; (5) tracked the status of plans to remedy known weaknesses; (6) planned for contingencies and disruptions in service; (7) maintained capabilities to detect, report, and respond to security incidents; and (8) incorporated important security requirements in its contract with JPL. Until all key elements of its information security program are fully and consistently implemented, NASA will have limited assurance that new weaknesses will not emerge and that sensitive information and assets are adequately safeguarded from inadvertent or deliberate misuse, improper disclosure, or destruction. A comprehensive risk assessment should be the starting point for developing or modifying an agency's security policies and security plans.
Such assessments are important because they help to make certain that all threats and vulnerabilities are identified and considered, that the greatest risks are addressed, and that appropriate decisions are made regarding which risks to accept and which to mitigate through security controls. Appropriate risk assessment policies and procedures should be documented and based on the security categorizations described in FIPS Publication 199. OMB directs federal agencies to consider risk when deciding what security controls to implement. OMB states that a risk- based approach is required to determine adequate security, and it encourages agencies to consider major risk factors, such as the value of the system or application, threats, vulnerabilities, and the effectiveness of current or proposed safeguards. Identifying and assessing physical security risks are also essential steps in determining what information security controls are required. NASA policy states that vulnerability risk assessments for buildings and facilities are to be performed at least every 3 years. NASA had generally implemented procedures for assessing its security risks and conducted risk assessments for the five systems and networks we reviewed. It had also determined security categories for these systems and networks. In addition, NASA had developed an executive threat summary on cyber issues facing the agency. Also, NASA’s Security Operations Center (SOC) regularly issued threat analysis reports and distributed them to offices within NASA responsible for security. However, NASA had not fully assessed its risks. For example, it had not conducted a comprehensive agencywide risk assessment that included mission-related systems and applications. In addition, one center we reviewed did not prepare an overall network risk assessment that clearly articulated the known vulnerabilities identified in the security plans and waivers. Furthermore, the waivers were not elevated or aggregated and documented into an overall risk management plan. NASA also could not demonstrate that it conducted vulnerability risk assessments for 13 of the 24 buildings we visited that contained significant and sensitive information resources. NASA staff stated that some of the 13 buildings may have had risk assessments performed in the past, but they could not provide copies of the assessments or evidence to support these assertions. As a result, NASA has limited assurance that computing resources are consistently and effectively protected from inadvertent or deliberate misuse including fraud or destruction. Another key task in developing an effective information security program is to establish and implement risk-based policies, procedures, and technical standards that govern security over an agency’s computing environment. If properly implemented, policies and procedures should help reduce the risk that could come from unauthorized access or disruption of services. Because security policies and procedures are the primary mechanisms through which management communicates views and requirements, it is important that these policies and procedures be established and documented. FISMA requires agencies to develop and implement policies and procedures to support an effective information security program. NIST also issued security standards and related guidance to help agencies implement security controls, including appropriate information security policies and procedures. NASA developed and documented several information security policies and procedures. 
For example, NASA established standard operating processes that had been successful in producing a number of IT procedures relating to certification and accreditation. However, NASA had not always included all the necessary elements in its security policies and procedures, as illustrated by the following examples: The agency did not have a policy for malware incident handling and prevention. Although NASA defined some security roles, it did not define all necessary roles and responsibilities for incident response and detection. Presently the only formal role for managing incidents as defined by NASA policy is the Information Technology Security Manager. However, NASA policy did not clearly define roles and responsibilities for incident response within NASA, such as an intrusion analyst or incident response manager. NASA had not updated the policy for incident handling to reflect the current environment. Although NASA has developed policy directives pertaining to incident handling that all NASA centers are required to follow, these documents had not been updated to reflect the November 2008 establishment of the SOC. Physical and environmental policies for the protection of NASA assets were not adequately defined. NASA’s policies do not adequately describe physical access controls such as authorizing, controlling, and monitoring physical access to sensitive locations. For example, regarding monitoring, the agency’s policy does not clearly require that officials maintain and review at least annually a current list of personnel with access to all IT- intensive facilities. Additionally, NASA’s policies did not provide clear and consistent guidance for developing and implementing environmental safety controls. For instance, the agency’s policies and procedures lacked information on fire protection and emergency power shutoff. NASA IT and physical security policy staff acknowledged these shortcomings and stated that new policies are being or will be drafted during this calendar year and should be approved by NASA management around the end of calendar year 2010. Until these policies are fully developed and documented across all agency centers, NASA has less assurance that computing resources are consistently and effectively protected from inadvertent or deliberate misuse, including fraud or destruction. An objective of system security planning is to improve the protection of IT resources. A system security plan provides a complete and up-to-date overview of the system’s security requirements and describes the controls that are in place—or planned—to meet those requirements. OMB Circular A-130 specifies that agencies develop and implement system security plans for major applications and general support systems and that these plans address policies and procedures for providing management, operational, and technical controls. NIST guidance states that these plans should be updated as system events trigger the need for revision in order to accurately reflect the most current state of the system. NIST guidance requires that all security plans be reviewed and, if appropriate, updated at least annually. NASA generally prepared and documented security plans for the five systems and networks we reviewed. In addition, NASA has developed and mandated the use of the Risk Management System as the authoritative source for the creation and storage of system security plans and documentation. 
Most notably, JPL also employed a real-time Certification and Accreditation document repository system, which facilitates a more repeatable process and ensures consistency and correctness. However, NASA did not always include key information in system security plans. For example, NASA did not always update one system security plan with the results from its network risk assessment and threat analysis. In addition, system interconnection security agreements were not always signed for all external connections. Specifically, a center did not have signed interconnection security agreements for any connections with its partners and stakeholders. Furthermore, interconnection security agreements for one network were still pending. Without a security plan that describes security requirements and specific threats as identified in the risk assessment, and without having signed interconnection security agreements, NASA networks remain vulnerable to threats. A key element of an information security program is to test and evaluate policies, procedures, and controls to determine whether they are effective and operating as intended. This type of oversight is a fundamental element of a security program because it demonstrates management’s commitment to the program, reminds employees of their roles and responsibilities, and identifies areas of noncompliance and ineffectiveness. Analyzing the results of security reviews provides security specialists and business managers with a means of identifying new problem areas, reassessing the appropriateness of existing controls (management, operational, technical), and identifying the need for new controls. FISMA requires that the frequency of tests and evaluations be based on risks and occur no less than annually. NASA commissioned penetration testing using a rotational audit approach that covered various NASA centers. The scope of the tests included internal and external network-based penetration testing, Web application testing against center-selected Web sites, war-driving to identify rogue and unprotected wireless access points, configuration testing on center workstations and networking devices, searches for publicly available sensitive data, and social engineering scenarios against help desk staff. Although NASA conducted system security testing and evaluating on the five systems and networks we reviewed, the tests were not always comprehensive. For instance, NASA did not test all relevant security controls and did not identify certain weaknesses that we identified during our review. For example, our review revealed problems with a firewall that were not identified by a test, including the fact that the firewall can be bypassed. In addition, the network documentation highlighted managerial control issues, such as the lack of policy, but insufficient or limited attention was paid to testing weaknesses in operational and technical controls. As a result, NASA could be unaware of undetected vulnerabilities in its networks and systems and has reduced assurance that its controls are being effectively implemented. Remedial action plans, also known as plans of action and milestones (POA&M), can help agencies identify and assess security weaknesses in information systems and set priorities and monitor progress in correcting them. NIST guidance states that each federal civilian agency must report all incidents and internally document remedial actions and their impact. 
In addition, NASA policy states that all master and subordinate IT system POA&Ms should be tracked and reported to the NASA CIO in a timely manner so that corrective actions can be taken. Although NASA has developed and implemented a remedial action process, it did not always prepare remedial action plans for known control deficiencies or report the status of corrective actions in a centralized remediation tracking system maintained by the NASA CIO. For example, NASA did not develop POA&Ms to correct several weaknesses documented in one system’s security assessment or to address remediation threats documented in its risk assessment. In addition, the NASA centers we reviewed did not always report remedial action plans and the status of corrective actions into the central Headquarters Risk Management System used for POA&Ms. Consequently, senior management officials were not always aware of control weaknesses that still remained outstanding. Without an effective remediation program, identified vulnerabilities may not be resolved in a timely manner, thereby allowing continuing opportunities for unauthorized individuals to exploit these weaknesses and gain access to sensitive information and systems. Contingency planning is a critical component of information protection. If normal operations are interrupted, network managers must be able to detect, mitigate, and recover from service disruptions while preserving access to vital information. Therefore, a contingency plan details emergency response, backup operations, and disaster recovery for information systems. It is important that these plans be clearly documented, communicated to potentially affected staff, and updated to reflect current operations. NIST also requires that all of an agency’s systems have a contingency plan and that the plans address, at a minimum, identification and notification of key personnel, plan activation, system recovery, and system reconstitution. NASA guidance states that contingency plans should describe an alternate backup site in a geographic area that is unlikely to be negatively affected by the same disaster event (e.g., weather-related impacts or power grid failure) as the organization’s primary site. The guidance also states that contingency plans should include contact information for disaster recovery personnel. NASA had developed contingency plans for the five systems and networks we reviewed. However, shortcomings existed in several plans. Specifically, (1) NASA did not approve the contingency plans for one network and one system we reviewed; (2) it did not include contact information for disaster recovery personnel at a center, even though their roles and responsibilities for disaster recovery were described; (3) NASA did not describe an alternate backup site for a center in a geographic area outside of the primary site, and had not designated backup facilities for a network we reviewed; and (4) the contingency plan for a system we reviewed did not follow NASA’s guidance on contingency planning, since it did not include review and approval signatures, information contact(s) and line of succession, and damage assessment procedures. As a result, NASA is at a greater risk for major service disruptions with respect to its important mission networks in the event of a disaster to the primary facility. 
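The contingency plan elements described above lend themselves to a simple completeness review. The following sketch checks a plan record for the elements that NIST and NASA guidance call for; the field names and the sample plan are assumptions created for this illustration and do not reflect NASA's actual plan format.

    # Check a contingency plan record for required elements such as key personnel,
    # plan activation, system recovery and reconstitution, an alternate backup site,
    # disaster recovery contacts, and approval signatures.

    REQUIRED_ELEMENTS = [
        "key_personnel",
        "plan_activation",
        "system_recovery",
        "system_reconstitution",
        "alternate_backup_site",
        "disaster_recovery_contacts",
        "approval_signatures",
    ]

    def review_plan(plan):
        """Return the required elements that are missing or empty in a plan record."""
        return [element for element in REQUIRED_ELEMENTS if not plan.get(element)]

    # Hypothetical plan record used only to show how gaps would be reported.
    example_plan = {
        "key_personnel": ["network operations lead", "facility manager"],
        "plan_activation": "criteria and notification steps",
        "system_recovery": "restore systems from off-site backups",
        "system_reconstitution": "",
        "alternate_backup_site": None,
        "disaster_recovery_contacts": [],
        "approval_signatures": ["center CIO"],
    }

    for gap in review_plan(example_plan):
        print(f"Contingency plan element missing or incomplete: {gap}")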
Even strong controls may not block all intrusions and misuse, but organizations can reduce the risks associated with such events if they take steps to promptly detect and respond to them before significant damage is done. NIST offers the following guidance for establishing an effective computer security incident response capability. Organizations should create an incident response policy that defines which events are considered incidents, establishes the organizational structure for incident response, defines roles and responsibilities, and lists the requirements for reporting incidents, among other items, and should use that policy as the basis for incident response procedures. In addition, organizations should acquire the necessary tools and resources for incident handling, including communications, facilities, and the analysis of hardware and software. NASA has established a computer security incident handling project to respond to incidents. As part of this project, NASA has implemented a SOC, within Ames Research Center, which is the central coordination point for NASA's incident handling program and for reporting of incidents to the United States Computer Emergency Readiness Team (US-CERT). The SOC began operations in November 2008 and is expected to enhance prevention and provide early detection of security incidents and coordinate agency-level information related to NASA's IT security posture. The SOC has implemented an agency hotline for security incidents and a centralized incident management system for the coordination, tracking, and reporting of agency incidents. It is currently improving its infrastructure to support detection, notification, investigation, and response to incidents in a timely manner. In addition to the SOC, the three centers that we reviewed had their own teams of incident responders who addressed and tracked incidents at their centers. However, NASA's capabilities to detect, report, and respond to security incidents remain limited. The following are examples: The agency is not using a consistent definition of an incident. Responders at several centers stated they were following the NIST/US-CERT definition of an incident, which makes no distinction between an event and an incident. Although a center's standard operating procedure did not include a formal definition of a computer security incident, the center personnel stated that incidents are only those that are confirmed. However, a definition of what constitutes a "confirmed" incident was not provided. The organizational structure for incident response roles and responsibilities was outdated since it assigned central coordination and analysis of incidents to an organization that no longer existed. Although the SOC has developed an incident management plan, policies, and procedures for responding to incidents, they were in draft and had not been distributed to all the centers. Although two of the centers support mission-related operations that operate 24x7, the two centers' incident response teams were not staffed around the clock. The business impacts of incidents were not adequately specified in NASA incident documentation. NASA incident documentation contains references to the fact that data subject to International Traffic in Arms Regulations were stolen along with a laptop. However, the precise data that were lost were described only in very general terms so that the business impacts are not known.
Moreover, although agency officials stated that conducting root cause analyses is required and part of the standard incident response workflow, there were many incidents for which a detailed post-incident analysis was not performed. In addition, weaknesses in NASA’s technical controls impact its incident handling and detection controls. For example, two centers we reviewed did not employ host-based firewalls on their workstations, laptops, or devices. In addition, one network had limited incident detection systems to detect malicious traffic coming from its internal and off-site connections. Moreover, another network had no internal incident detection system in place to monitor traffic, with the partial exception of network incident detection coverage of ingress/egress for it. Furthermore, one center had not adequately established and implemented tools and processes to ensure timely detection of security incidents. As a result, there is a heightened risk that NASA may not be able to detect, contain, eradicate, or recover from incidents, and improve the incident handling process. The agencywide information security program required by FISMA applies not only to information systems used or operated by an agency but also to information systems used or operated by a contractor of an agency or other agency on behalf of an agency. In addition, the Federal Acquisition Regulation (FAR) requires that federal agencies prescribe procedures for ensuring that agency planners on IT acquisitions comply with the IT security requirements of FISMA, OMB’s implementing policies, including appendix III of OMB Circular A-130, and guidance and standards from NIST. Appropriate policies and procedures should be developed, implemented, and monitored to ensure that the activities performed by external third parties are documented, agreed to, implemented, and monitored for compliance. However, NASA did not adequately incorporate information security requirements in its contract with the JPL contractor. Although the contract for JPL specified adherence to certain NASA security policies, it did not require the contractor to implement key elements of an information security program. For example, the following NASA and FISMA requirements are not specifically referenced in the JPL contract: Periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices performed with a frequency depending on risk, but not less than annually, and including testing of management, operational, and technical controls for every system. A process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in the information security policies, procedures, and practices of the agency. Procedures for detecting, reporting, and responding to security incidents. Plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. In addition, NASA did not incorporate provisions in the contract to allow it to perform effective oversight of the contractor’s implementation of the security controls and program. For example, the JPL contract did not recognize the oversight roles of the NASA Administrator, the NASA CIO, the senior agency information security officer and other senior NASA managers as defined in NASA’s policy. 
As a result, NASA faces a range of risks from contractors and other users with privileged access to its systems, applications, and data, since such privileged access can introduce risks to the agency's information and information systems. NASA has experienced numerous cyber attacks on its networks and systems in recent years. During fiscal years 2007 and 2008, NASA reported 1,120 security incidents to US-CERT in the following five US-CERT-defined categories: Unauthorized access: Gaining logical or physical access without permission to a federal agency's network, system, application, data, or other resource. Denial of service: Preventing or impairing the normal authorized functionality of networks, systems, or applications by exhausting resources. This activity includes being the victim of or participating in a denial of service attack. Malicious code: Installing malicious software (e.g., virus, worm, Trojan horse, or other code-based malicious entity) that infects an operating system or application. Agencies are not required to report malicious logic that has been successfully quarantined by antivirus software. Improper usage: Violating acceptable computing use policies. Scans/probes/attempted access: Accessing or identifying a federal agency computer, open ports, protocols, service, or any combination of these for later exploit. This activity does not directly result in a compromise or denial of service. As noted in figure 4, the two most prevalent types of incidents reported by NASA were malicious code and unauthorized access. A NASA report stated that the number of malicious code attacks (839) was the highest experienced by any of the federal agencies and accounted for over one-quarter of the total number of malicious code attacks directed at federal agencies during this period. According to a US-CERT official, NASA's high profile makes the agency an attractive target for hackers seeking recognition or for nation-state-sponsored cyber spying. The impact of these and more recent incidents can be significant. The following examples are illustrative: In 2009, NASA reported incidents involving unauthorized access to sensitive data. For example, one center reported the theft of a laptop containing data subject to International Traffic in Arms Regulations. Stolen data included roughly 3,000 files of unencrypted International Traffic in Arms Regulations data with information for Hypersonic Wind Tunnel testing for the X-51 scramjet project and possibly personally identifiable information. Another center reported the theft of a laptop containing thermal models, review documentation, test plans, test reports, and requirements documents pertaining to NASA's Lunar Reconnaissance Orbiter and James Webb Space Telescope projects. The incident report does not indicate whether the lost data were encrypted or how the incident was resolved. Significantly, these were not isolated incidents, since NASA reported 209 incidents of unauthorized access to US-CERT during fiscal years 2007 and 2008. One center was alerted by the NASA SOC in February 2009 about traffic associated with a Seneka Rootkit Bot. In this case, NASA found that 82 NASA devices had been communicating with a malicious server since January 2009. A review of the data revealed that most of these devices were communicating with a server in Ukraine. By March 2009, three centers were also infected with the bot attack.
In October 2007, a total of 86 incidents related to the Zonebac Trojan were reported by NASA centers. This particular form of malware is capable of disabling security software and downloading and running other malicious software at the whim of the attacker. US-CERT reported in January 2008 on NASA’s ongoing problems with Zonebac and other malware infestations and recommended that the agency employ consistent patching and user education practices to prevent such infections from occurring. In July 2008, NASA found several hosts infected with the Coreflood Trojan that is capable of frequently updating itself and stealing a large number of user credentials that can be used to log onto other machines within a domain. Investigation revealed that NASA computers were infected and communicating with a hostile command and control server. These attacks can result in damage to applications, data, or operating systems; disclosure of sensitive information; propagation of malware; use of affected systems as bots; an unavailability of systems and services; and a waste of time, money, and labor. In response to these and other attacks, NASA has enhanced its incident response capabilities and computer defensive capabilities at NASA’s centers. For example, the three centers that we reviewed had their own teams of incident responders that addressed and tracked incidents at their centers. In addition, the SOC was established in 2008 to enhance prevention and provide early detection of security incidents and coordinate agency-level information related to NASA’s security posture. The SOC has implemented an agency hotline for security incidents and an incident management system for the coordination and tracking of agency security incidents. It is currently improving its infrastructure to support detection, notification, investigation, and response to security incidents in a timely manner. Despite actions to address security incidents, NASA remains vulnerable to similar incidents going forward. The control vulnerabilities and program shortfalls that we identified collectively increase the risk of unauthorized access to NASA’s sensitive information, as well as inadvertent or deliberate disruption of its system operations and services. They make it possible for intruders, as well as government and contractor employees, to bypass or disable computer access controls and undertake a wide variety of inappropriate or malicious acts. As a result, increased and unnecessary risk exists that sensitive information will be subject to unauthorized disclosure, modification, and destruction and that mission operations could be disrupted. Information security weaknesses at NASA impair the agency’s ability to ensure the confidentiality, integrity, and availability of sensitive information. The systems supporting NASA’s mission directorates at the three centers we reviewed have vulnerabilities in information security controls that place mission sensitive information, scientific, other data, and information systems at increased risk of compromise. A key reason for these vulnerabilities is that NASA has not yet fully implemented its information security program to ensure that controls are appropriately designed and operating effectively. NASA’s high profile and cutting edge technology makes the agency an attractive target for hackers seeking recognition, or for nation-state sponsored cyber spying. 
Thus, it is vital that attacks on NASA computer systems and networks are detected, resolved, and reported in a timely fashion and that the agency has effective security controls in place to minimize its vulnerability to such attacks. Despite actions to address previous security incidents, the control vulnerabilities and program shortfalls we identified indicate that NASA remains vulnerable to future incidents. These weaknesses could allow intruders, as well as government and contractor employees, to bypass or disable computer access controls and undertake a wide variety of inappropriate or malicious acts. Until NASA mitigates identified control vulnerabilities and fully implements its information security program, the agency will be at risk of unauthorized disclosure, modification, and destruction of its sensitive information and disruption of critical mission operations. To assist NASA in improving the implementation of its agencywide information security program, we recommend that the NASA Administrator direct the NASA CIO to take the following eight actions: Develop and implement comprehensive information security and physical risk assessments that address mission-related systems and applications and the known vulnerabilities identified in security plans and waivers. Develop and fully implement security policies and procedures for malware, incident handling roles and responsibilities, and physical and environmental protection. Include key information in system security plans, such as information from risk assessments and signed system interconnection security agreements. Conduct sufficient and comprehensive security testing and evaluation of all relevant security controls, including management, operational, and technical controls. Develop remedial action plans to address any deficiencies and ensure that master and subordinate IT system POA&M items are tracked and reported to the agency CIO in a timely manner so that corrective actions can be taken. Update contingency plans to include key information, such as contact information and approvals, and describe an alternate backup site in a geographic area that is unlikely to be negatively affected by the same disaster event. Implement an adequate incident detection program that includes a consistent definition of an incident, incident roles and responsibilities, resources to operate the program, and the business impacts of incidents. Include all necessary security requirements in the JPL contract. In a separate report with limited distribution, we are also making 179 recommendations to address the 129 weaknesses identified during this audit to enhance NASA's access controls. In providing written comments on a draft of this report (reprinted in app. IV), the NASA Deputy Administrator concurred with our recommendations and noted that many of the recommendations are currently being implemented as part of an ongoing strategic effort to improve information technology management and address IT security program deficiencies. In addition, she stated that NASA will continue to mitigate the information security weaknesses identified in our report. The actions identified in the Deputy Administrator's response will, if effectively implemented, improve the agency's information security program. We are sending copies of this report to interested congressional committees, the Office of Management and Budget, the NASA Administrator, the NASA Inspector General, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or Dr. Nabajyoti Barkakati at (202) 512-4499. We can also be reached by e-mail at [email protected] or [email protected]. GAO staff who made major contributions to this report are listed in appendix V. The objectives of our review were to (1) determine the effectiveness of the National Aeronautics and Space Administration's (NASA) information security controls in protecting the confidentiality, integrity, and availability of its networks supporting mission directorates and (2) assess the vulnerabilities identified during the audit in the context of NASA's prior security incidents and corrective actions. To determine the effectiveness of security controls, we reviewed networks at three centers to gain an understanding of the overall network control environment, identified its interconnectivity and control points, and examined controls for NASA networks. Using our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information, National Institute of Standards and Technology (NIST) standards and guidance, and NASA's policies, procedures, practices, and standards, we evaluated controls by developing an accurate understanding of the overall network architecture and examining configuration settings and access controls for routers, network management servers, switches, and firewalls; reviewing the complexity and expiration of password settings to determine if password management was enforced; analyzing users' system authorizations to determine whether they had more permissions than necessary to perform their assigned functions; observing methods for providing secure data transmissions across the network to determine whether sensitive data were being encrypted; observing whether system security software was logging successful and unsuccessful access attempts; observing physical access controls to determine if computer facilities and resources were being protected from espionage, sabotage, damage, and theft; inspecting key servers and workstations to determine whether critical patches had been installed or were up-to-date; and examining access responsibilities to determine whether incompatible functions were segregated among different individuals.
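As an example of how reviews such as those listed above can be partly automated, the following sketch flags accounts whose passwords never expire or exceed a maximum password age. The account data and the 90-day threshold are hypothetical and are shown only to illustrate the type of check that can be performed against exported account settings.

    # Flag accounts with non-expiring passwords or passwords older than the allowed age.

    MAX_PASSWORD_AGE_DAYS = 90

    def flag_weak_password_management(account_list, max_age):
        """Return the names of accounts whose password settings violate the policy."""
        flagged = []
        for account in account_list:
            if not account["expires"] or account["password_age_days"] > max_age:
                flagged.append(account["name"])
        return flagged

    # Hypothetical exported account settings used for illustration only.
    accounts = [
        {"name": "svc_backup", "password_age_days": 400, "expires": False},
        {"name": "analyst01", "password_age_days": 45, "expires": True},
        {"name": "admin_legacy", "password_age_days": 120, "expires": True},
    ]

    for name in flag_weak_password_management(accounts, MAX_PASSWORD_AGE_DAYS):
        print(f"Password management not enforced for account: {name}")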
Using the requirements identified by the Federal Information Security Management Act of 2002 (FISMA), which establishes key elements for an effective agencywide information security program, we evaluated five NASA systems and networks by analyzing NASA’s policies, procedures, practices, standards, and resources to determine their effectiveness in providing guidance to personnel responsible for securing information and information systems; reviewing NASA’s risk assessment process and risk assessments to determine whether risks and threats were documented consistent with federal guidance; analyzing security plans to determine if management, operational, and technical controls were in place or planned and that security plans reflected the current environment; analyzing NASA’s procedures and results for testing and evaluating security controls to determine whether management, operational, and technical controls were sufficiently tested at least annually and based on risk; examining remedial action plans to determine whether they addressed vulnerabilities identified in NASA’s security testing and evaluations; examining contingency plans to determine whether those plans contained essential information, reflected the current environment, and had been tested to assure their sufficiency; reviewing incident detection and handling policies, procedures, and reports to determine the effectiveness of the incident handling program; and analyzing whether security requirements were implemented effectively by the contractor. We also discussed with key security representatives and management officials whether information security controls were in place, adequately designed, and operating effectively. To assess NASA’s vulnerabilities in the context of prior incidents and corrective actions, we reviewed and analyzed United States Computer Emergency Readiness Team (US-CERT) data on NASA’s reported incidents, examined NASA security incident reports in the last two fiscal years, inspected plans for corrective actions and the implementation of the Security Operations Center, and interviewed NASA officials on how NASA corrected identified vulnerabilities. We performed our audit at NASA headquarters in Washington, D.C.; Goddard Space Flight Center in Greenbelt, Maryland; the Jet Propulsion Laboratory in Pasadena, California; the Marshall Space Flight Center in Huntsville, Alabama; and Ames Research Center at Moffett Field, California, from November 2008 to October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Provides leadership in astrobiology, small-satellites, the search for habitable planets, supercomputing, intelligent/adaptive systems, advanced thermal protection, and airborne astronomy. Performs flight research and technology integration to revolutionize aviation and pioneer aerospace technology; validates space exploration concepts; conducts airborne remote sensing, and science missions; enables airborne astrophysics observation missions to discover the origin, structure, evolution, and destiny of the universe; and supports operations of the Space Shuttle and the International Space Station. 
Develops critical space flight systems and technologies to advance the exploration of our solar system and beyond while maintaining leadership in aeronautics. In partnership with U.S. industries, universities, and other government institutions, research and development efforts focus on advancements in propulsion, power, communications, nuclear, and human-related aerospace systems. Expands the knowledge of Earth and its environment, the solar system, and the universe through observations from space. The center also conducts scientific investigations, develops and operates space systems, and advances essential technologies. Hosts and staffs program and project offices; selects and trains astronauts; manages and conducts projects that build, test, and integrate human-rated systems for transportation, habitation, and working in space; and plans and operates human space flight missions. Programs that Johnson Space Center supports include the Space Shuttle Program, the International Space Station Program, and the Constellation Program. Performs preflight processing, launch, landing, and recovery of the agency’s human-rated spacecraft and launch vehicles; the assembly, integration, and processing of International Space Station elements and flight experiments; and the acquisition and management of Expendable Launch Vehicles for other agency spacecraft. The center leads the development of ground systems supporting human-rated spacecraft and launch vehicle hardware elements and hosts the manufacturing of the Orion Crew Exploration Vehicles. Pioneers the future in space exploration, scientific discovery, and aeronautics through research and development of technology, scientific instruments and investigations, and exploration systems. Performs systems engineering and integration for both human and robotic missions. Marshall performs engineering design, development, and integration of the systems required for space operations, exploration, and science. The center also manages the Michoud Assembly Facility, which supports the unique manufacturing and assembly needs of current and future NASA programs and provides critical telecommunications and business systems for the agency. Implements NASA’s mission in areas assigned by three agency mission directorates. The center manages and operates Rocket Propulsion Test facilities and support infrastructure for the Space Operations and Exploration Systems mission directorates, serves as Systems Engineering Center for and manages assigned Applied Sciences program activities for the Science mission directorate, and serves as federal manager and host agency of a major government multiagency center. A contractor-operated federally funded research and development center that supports NASA’s strategic goals by exploring our solar system; establishing a continuous permanent robotic presence at Mars to discover its history and habitability; making critical measurements and models to better understand the solid Earth, oceans, atmosphere, and ecosystems, and their interactions; conducting observations to search for neighboring solar systems and Earth-like planets, and help understand formation, evolution, and composition of the Universe; conducting communications and navigation for deep space missions; providing support that enables human exploration of the Moon, Mars, and beyond; and collaborating with other federal and state government agencies and commercial endeavors. 
In addition to the individuals named above, West Coile and William Wadsworth (Assistant Directors), Edward Alexander, Angela Bell, Mark Canter, Saar Dagani, Kirk Daubenspeck, Neil Doherty, Patrick Dugan, Denise Fitzpatrick, Edward Glagola Jr., Tammi Kalugdan, Vernetta Marquis, Sean Mays, Lee McCracken, Kevin Metcalfe, Duc Ngo, Donald Sebers, Eugene Stevens IV, Michael Stevens, Henry Sutanto, Christopher Warweg, and Jayne Wilson made key contributions to this report.
The National Aeronautics and Space Administration (NASA) relies extensively on information systems and networks to pioneer space exploration, scientific discovery, and aeronautics research. Many of these systems and networks are interconnected through the Internet, and may be targeted by evolving and growing cyber threats from a variety of sources. GAO was directed to (1) determine whether NASA has implemented appropriate controls to protect the confidentiality, integrity, and availability of the information and systems used to support NASA's mission directorates and (2) assess NASA's vulnerabilities in the context of prior incidents and corrective actions. To do this, GAO examined network and system controls in place at three centers; analyzed agency information security policies, plans, and reports; and interviewed agency officials. Although NASA has made important progress in implementing security controls and aspects of its information security program, it has not always implemented appropriate controls to sufficiently protect the confidentiality, integrity, and availability of the information and systems supporting its mission directorates. Specifically, NASA did not consistently implement effective controls to prevent, limit, and detect unauthorized access to its networks and systems. For example, it did not always sufficiently (1) identify and authenticate users, (2) restrict user access to systems, (3) encrypt network services and data, (4) protect network boundaries, (5) audit and monitor computer-related events, and (6) physically protect its information technology resources. In addition, weaknesses existed in other controls to appropriately segregate incompatible duties and manage system configurations and implement patches. A key reason for these weaknesses is that NASA has not yet fully implemented key activities of its information security program to ensure that controls are appropriately designed and operating effectively. Specifically, it has not always (1) fully assessed information security risks; (2) fully developed and documented security policies and procedures; (3) included key information in security plans; (4) conducted comprehensive tests and evaluation of its information system controls; (5) tracked the status of plans to remedy known weaknesses; (6) planned for contingencies and disruptions in service; (7) maintained capabilities to detect, report, and respond to security incidents; and (8) incorporated important security requirements in its contract with the Jet Propulsion Laboratory. Despite actions to address prior security incidents, NASA remains vulnerable to similar incidents. NASA networks and systems have been successfully targeted by cyber attacks. During fiscal years 2007 and 2008, NASA reported 1,120 security incidents that have resulted in the installation of malicious software on its systems and unauthorized access to sensitive information. To address these incidents, NASA established a Security Operations Center in 2008 to enhance prevention and provide early detection of security incidents and coordinate agency-level information related to its security posture. Nevertheless, the control vulnerabilities and program shortfalls, which GAO identified, collectively increase the risk of unauthorized access to NASA's sensitive information, as well as inadvertent or deliberate disruption of its system operations and services. 
They make it possible for intruders, as well as government and contractor employees, to bypass or disable computer access controls and undertake a wide variety of inappropriate or malicious acts. As a result, increased and unnecessary risk exists that sensitive information is subject to unauthorized disclosure, modification, and destruction and that mission operations could be disrupted.
The Recovery Act provided DOE more than $43.2 billion, including $36.7 billion for projects and activities and $6.5 billion in borrowing authority. Of the $36.7 billion for projects and activities, almost half—$16.8 billion— was provided to the Office of Energy Efficiency and Renewable Energy for projects intended to improve energy efficiency, build the domestic renewable energy industry, and restructure the transportation industry to increase global competitiveness. The Recovery Act also provided $6 billion to the Office of Environmental Management for nuclear waste cleanup projects, $4.5 billion to the Office of Electricity Delivery and Energy Reliability for electric grid modernization, $4 billion to the Loan Guarantee Program Office to support loan guarantees for renewable energy and electric power transmission projects, $3.4 billion to the Office of Fossil Energy for carbon capture and sequestration efforts, and $2 billion to the Office of Science and the Advanced Research Projects Agency-Energy for advanced energy technology research. As of February 28, 2010, DOE reported that it had obligated $25.7 billion (70 percent) and reported expenditures of $2.5 billion (7 percent) of the $36.7 billion it received under the Recovery Act for projects and activities (see table 1). By comparison, as of December 31, 2009, the department reported it had obligated $23.2 billion (54 percent) and reported expenditures of $1.8 billion (4 percent). The percentage of Recovery Act funds obligated varied widely across DOE program offices. Several program offices—Energy Efficiency and Renewable Energy, the Energy Information Administration, Environmental Management, and Science—had obligated more than 85 percent of their Recovery Act funds by February 28, 2010, while other program offices—Fossil Energy, the Loan Guarantee Program, and the Western Area Power Administration—had obligated less than a third of their Recovery Act funds by that time. The percentage of Recovery Act funds spent also varied across DOE program offices, though to a lesser degree than the percentage obligated. None of the program offices reported expenditures of more than a third of their Recovery Act funds as of February 28, 2010. The percentage of funds spent ranged from a high of 31 percent for Departmental Administration to a low of zero percent for the Electricity Delivery and Energy Reliability, Energy Information Administration, and Fossil Energy offices. Officials from DOE and states that received Recovery Act funding from DOE cited certain federal requirements and other factors that had affected their ability to implement some Recovery Act projects. In particular, DOE officials reported that Davis-Bacon requirements and the National Environmental Policy Act affected the timing of some project selection and starts, while state officials reported that the National Historic Preservation Act affected their ability to select and start Recovery Act projects. Other factors unrelated to federal requirements—including the newness of programs, staff capacity, and state and local issues—also affected the timing of some projects, according to federal and state officials. Officials from DOE and states that received DOE funding cited certain federal requirements that had affected their ability to select or start some Recovery Act projects. For example: Davis-Bacon requirements. 
DOE's Weatherization Assistance Program became subject to the Davis-Bacon requirements for the first time under the Recovery Act after having been previously exempt from those requirements. Thus, the Department of Labor (Labor) had to determine the prevailing wage rates for weatherization workers in each county in the United States. In July 2009, DOE and Labor issued a joint memorandum to Weatherization Assistance Program grantees authorizing them to begin weatherizing homes using Recovery Act funds, provided they paid construction workers at least Labor's wage rates for residential construction, or an appropriate alternative category, and compensated workers for any differences if Labor established a higher local prevailing wage rate for weatherization activities. On September 3, 2009, Labor completed its determinations; later that month, we reported that Davis-Bacon requirements were a reason why some states had not started weatherizing homes. Specifically, we reported that 7 out of 16 states and the District of Columbia decided to wait to begin weatherizing homes until Labor had determined county-by-county prevailing wage rates for their state. Officials in these states explained that they wanted to avoid having to pay back wages to weatherization workers who started working before the prevailing wage rates were known. In general, the states we reviewed used only a small percentage of their available funds in 2009, mostly because state and local agencies needed time to develop the infrastructures required for managing the significant increase in weatherization funding and for ensuring compliance with Recovery Act requirements, including Davis-Bacon requirements. According to available DOE data, as of December 31, 2009, 30,252 homes had been weatherized with Recovery Act funds, or about 5 percent of the approximately 593,000 total homes that DOE originally planned to weatherize using Recovery Act funds. National Environmental Policy Act (NEPA). DOE officials told us that while NEPA is unlikely to impose a greater burden on Recovery Act projects than on similar projects receiving federal funds, the timing of certain projects may be slowed by these requirements. However, DOE officials reported that the agency had taken steps to expedite the NEPA review process and said that the agency's funding opportunity announcements specified that projects must be sufficiently developed to meet the Recovery Act's timetable for commitment of funds. Nevertheless, DOE officials also told us that several program offices—including Loan Guarantee, Fossil Energy, Electricity Delivery and Energy Reliability, and the Power Marketing Administrations—will likely have projects that significantly impact the environment and will therefore require environmental assessments or environmental impact statements. DOE officials told us that they plan to complete NEPA reviews concurrently with other aspects of the project selection and start process. State officials in California and Mississippi also told us that NEPA had caused delays in DOE Recovery Act projects. For example, California officials said that the State Energy Commission must submit some of its Recovery Act projects to DOE for NEPA review because they are not covered by DOE's existing categorical exclusions. State officials said that such reviews can take six weeks or more.
Both California and Mississippi officials told us that activities that are categorically excluded under NEPA (e.g., road repaving or energy-efficient upgrades to existing buildings) still require clearance before the state can award funds. Staff must spend time filling out forms and supplying information to DOE on projects that may qualify for a categorical exclusion. National Historic Preservation Act (NHPA). State officials told us that NHPA had also affected DOE Recovery Act project selection and starts. Mississippi officials, in particular, cited NHPA's clearance requirements as one of the biggest potential delays to project selection in energy programs. Many of the city- and county-owned facilities that could benefit from the Energy Efficiency and Conservation Block Grant program could be subject to historic preservation requirements, which mandate that projects must be identified within 180 days of award. In part because of this requirement, the state had to adjust program plans and limit the scope of eligible recipients and projects to avoid historic preservation issues. Likewise, officials from the Michigan Department of Human Services told us that NHPA requires that weatherization projects receiving federal funds undergo a state historic preservation review. According to Michigan officials, this requirement means that the State Historic Preservation Office may review every home over 50 years of age if any work is to be conducted, regardless of whether the home is in a historic district or on a national registry. These officials estimated that 90 percent of the homes scheduled to be weatherized would need a historic review. These reviews are a departure from Michigan's previous experience; the State Historic Preservation Office had never considered weatherization work to trigger a review. Furthermore, Michigan officials told us that their State Historic Preservation Office's policy is to review weatherization applications for these homes within 30 days after receiving the application and advise the Michigan Department of Human Services on whether the work can proceed. However, as of October 29, 2009, the State Historic Preservation Office had only two employees, so state officials were concerned that this process could cause a significant delay. To avoid further delays, Michigan officials told us that in November 2009, they signed an agreement with the State Historic Preservation Office that is designed to expedite the review process. They also told us that with the agreement in place, they expect to meet their weatherization goals. Buy American provisions. DOE officials told us that Buy American provisions could cause delays in implementing Recovery Act projects. Officials from other federal agencies said those provisions have affected or may affect their ability to select or start some Recovery Act projects. In some cases, those agencies had to develop guidance for compliance with Buy American provisions, including guidance on issuing waivers to recipients that were unable to comply. For example, according to Environmental Protection Agency officials, developing Buy American guidance was particularly challenging because of the need to establish a waiver process for Recovery Act projects. At the local level, officials from the Chicago Housing Authority (CHA) reported that the only security cameras that are compatible with the existing CHA system and City of Chicago police systems are not made in the United States.
CHA worked with the Department of Housing and Urban Development to determine how to seek a waiver for this particular project. Moreover, an industry representative told us that the Buy American provisions could interrupt contractors’ supply chains, requiring them to find alternate suppliers and sometimes change the design of their projects, which could delay project starts. Officials from DOE and states also told us that factors other than federal requirements have affected the timing of project selection or starts. For example: Newness of programs. Because some Recovery Act programs were newly created, in some cases, officials needed time to establish procedures and provide guidance before implementing projects. In particular, the DOE Inspector General noted that the awards process for the Energy Efficiency and Conservation Block Grant program, newly funded under the Recovery Act, was challenging to implement because there was no existing infrastructure. Hence, Recovery Act funds were not awarded and distributed to recipients in a timely manner. Staff capacity. Officials from DOE stated that they would need to hire a total of 550 staff—both permanent and temporary—to carry out Recovery Act-related work. However, several issues affected DOE’s ability to staff these federal positions, including the temporary nature and funding of the Recovery Act and limited resources for financial management and oversight. To address those issues, DOE was granted a special direct hire authority as part of the Recovery Act for certain areas and program offices. The authority allowed DOE to expedite the hiring process for various energy efficiency, renewable energy, electricity delivery, and energy reliability programs and helped DOE fill longer term temporary (more than 1 year, but not more than 4 years) and permanent positions. However, according to DOE officials, government-wide temporary appointment authority does not qualify an employee for health benefits, and thus few candidates have been attracted to these temporary positions. According to DOE officials, the Office of Management and Budget recently approved direct-hire authority for DOE, which officials believe will alleviate issues related to health care benefits. Some state officials told us that they experienced heavy workloads as a result of the Recovery Act, which impaired their ability to implement programs. As we reported in December 2009, smaller localities, which are often rural, told us that they faced challenges because of a lack of staff to understand, apply for, and comply with requirements for federal Recovery Act grants. For example, some local government officials reported that they did not employ a staff person to handle grants and therefore did not have the capacity to understand which grants they were eligible for and how to apply for them. In the District of Columbia, Department of the Environment officials explained that weatherization funds had not been spent as quickly as anticipated because officials needed to develop the infrastructure to administer the program. For example, the department needed to hire six new staff members to oversee and manage the program. Officials reported that, as of late January 2010, the department had still not hired any of the six new staff required. Officials from the National Association of Counties said that some localities had turned down Recovery Act funding to avoid the administrative burdens associated with the act’s numerous reporting requirements. State, Local, or Tribal Issues. 
In our recently issued report on factors affecting the implementation of Recovery Act projects, we noted that the economic recession affected some states' budgets, which, in turn, affected states' ability to use some Recovery Act funds. For example, according to a recent report by DOE's Office of Inspector General, implementation of the Weatherization Assistance Program's Recovery Act efforts was delayed in part by state hiring freezes, problems resolving local budget shortfalls, and state-wide furloughs. State-level budget challenges have affected the implementation of other Recovery Act projects. For example, officials from the Department of Defense told us that because states were experiencing difficulties in passing their current-year budgets, some were unable to provide matching funds for certain Army National Guard programs. As a result, the Department of Defense had to revise its Recovery Act project plan to cancel or reduce the number of Army National Guard projects with state matching funds and replace them with other projects that did not require matching funds. Officials from the Department of Housing and Urban Development also told us that project starts in some instances were affected by the need for state and local governments to furlough employees as a result of the economic downturn. In a report issued yesterday, we discussed recipient reporting in DOE's Weatherization Assistance Program. Specifically, we noted that reporting about impacts to energy savings and jobs created and retained at both the state and local agency level is still somewhat limited. Although many local officials that we interviewed for that review have collected data about new hires, none could provide us with data on energy savings. Some states told us they plan to use performance measures developed by DOE, while others have developed their own measures. For example, Florida officials told us they plan to measure energy savings by tracking kilowatts used before and after weatherization, primarily with information from utility companies. In addition, local agencies in some states either collect or plan to collect information about other aspects of program operations. For example, local agencies in both California and Michigan collect data about customer satisfaction. In addition, a local agency in California plans to report about obstacles, while an agency in New York will track and report the number of units on the waiting list. As we reported, DOE made several outreach efforts to its program recipients to ensure timely reporting. These efforts included e-mail reminders for registration and Webinars that provided guidance on reporting requirements. For the first round of reporting, DOE developed a quality assurance plan to ensure all prime recipients filed quarterly reports, while assisting in identifying errors in reports. The methodology for the quality assurance review included several phases and provided details on the roles and responsibilities of DOE officials. According to DOE officials, the data quality assurance plan was also designed to emphasize the avoidance of material omissions and significant reporting errors. In addition to our reviews of states' and localities' use of Recovery Act funds, GAO is also conducting ongoing work on several DOE efforts that received Recovery Act funding, including the Loan Guarantee Program and the Office of Environmental Management's activities.
As I noted earlier, Congress made nearly $4 billion in Recovery Act funding available to DOE to support what the agency has estimated will be about $32 billion in new loan guarantees under its innovative technology loan guarantee program. However, we reported in July 2008 that DOE was not well positioned to manage the loan guarantee program effectively and maintain accountability because it had not completed a number of key management and internal control activities. To improve the implementation of the loan guarantee program and to help mitigate risk to the federal government and American taxpayers, we recommended that, among other things, DOE complete internal loan selection policies and procedures that lay out roles and responsibilities and criteria and requirements for conducting and documenting analyses and decision making, and develop and define performance measures and metrics to monitor and evaluate program efficiency, effectiveness, and outcomes. We are currently engaged in ongoing work to determine the current state of the Loan Guarantee Program and what progress DOE has made since our last report, and we expect to report on that work this summer. Ongoing work also focuses on DOE’s Office of Environmental Management, which also received Recovery Act funding. The Office of Environmental Management oversees cleanup efforts related to decades of nuclear weapons production. The Recovery Act provided DOE with $6 billion—in addition to annual appropriations of $6 billion—for cleanup activities including packaging and disposing of wastes, decontaminating and decommissioning facilities, and removing contamination from soil. DOE has begun work on the majority of its more than 85 Recovery Act projects at 17 sites in 12 states and has spent nearly $1.4 billion (about 23 percent of its total Recovery Act funding) on these projects. We are currently conducting work to evaluate the implementation of these projects, including the number of jobs that have been created and retained, performance metrics being used to measure progress, DOE’s oversight of the work, and any challenges that DOE may be facing. We expect to report on that work this summer. Mr. Chairman, this completes my prepared statement. We will continue to monitor DOE’s use of Recovery Act funds and implementation of programs. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information regarding this testimony, please contact me or Mark Gaffigan, Director, at (202) 512-3841. Kim Gianopoulos (Assistant Director), Amanda Krause, Jonathan Kucskar, David Marroni, Alise Nacson, and Alison O’Neill made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The American Recovery and Reinvestment Act of 2009 (Recovery Act)--initially estimated to cost $787 billion in spending and tax provisions--aims to promote economic recovery, make investments, and minimize or avoid reductions in state and local government services. The Recovery Act provided the Department of Energy (DOE) more than $43.2 billion, including $36.7 billion for projects and activities and $6.5 billion in borrowing authority, in areas such as energy efficiency and renewable energy, nuclear waste clean-up, and electric grid modernization. This testimony discusses (1) the extent to which DOE has obligated and spent its Recovery Act funds, and (2) the factors that have affected DOE's ability to select and start Recovery Act projects. In addition, GAO includes information on ongoing work related to DOE Recovery Act programs. This testimony is based on prior work and updated with data from DOE. As of February 28, 2010, DOE reported it had obligated $25.7 billion (70 percent) and reported expenditures of $2.5 billion (7 percent) of the $36.7 billion it received under the Recovery Act for projects and activities. For context, as of December 31, 2009, DOE reported that it had obligated $23.2 billion (54 percent) and reported expenditures of $1.8 billion (4 percent). The percentage of Recovery Act funds obligated varied widely across DOE program offices and ranged from a high of 98 percent in the Energy Information Administration to a low of 1 percent for the Loan Guarantee Program Office. None of DOE's program offices reported expenditures of more than a third of their Recovery Act funds as of February 28, 2010. Officials from DOE and states that received Recovery Act funding from DOE cited certain federal requirements that had affected their ability to implement some Recovery Act projects. For example: (1) Davis Bacon Requirements. Officials reported that Davis-Bacon requirements had affected the start of projects in the Weatherization Assistance Program because the program had previously been exempt from these requirements. (2) National Environmental Policy Act (NEPA). DOE officials told us that NEPA may affect certain projects that are likely to significantly impact the environment, thereby requiring environmental assessments or environmental impact statements. (3) National Historic Preservation Act (NHPA). Officials from the Michigan Department of Human Services told us that about 90 percent of the homes scheduled to be weatherized under the Weatherization Assistance Program would need a historic review. Additionally, DOE and state officials told us that (4) Newness of programs. In some cases, because some Recovery Act programs were newly created, officials needed time to establish procedures and provide guidance before implementing projects. (5) Staff capacity. DOE officials also told us that they experienced challenges in hiring new staff to carry out Recovery Act work. Also, District of Columbia officials told us they needed to hire 6 new staff members to oversee and manage the weatherization program. (6) State, local, or tribal issues. The economic recession affected some states' budgets, which also affected states' ability to use some Recovery Act funds, such as difficulty providing matching funds.
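The obligation and expenditure percentages cited above are simple ratios of reported amounts to the $36.7 billion provided for projects and activities. The short calculation below is a purely illustrative sketch using only the February 28, 2010, figures reported in this testimony; it reproduces the 70 percent and 7 percent rates.

```python
# Illustrative arithmetic only, using the February 28, 2010, totals reported above.
total_appropriated = 36.7   # billions of dollars for projects and activities
obligated = 25.7            # billions reported obligated
expended = 2.5              # billions reported expended

obligation_rate = obligated / total_appropriated
expenditure_rate = expended / total_appropriated

print(f"Obligated: {obligation_rate:.0%}")   # about 70 percent
print(f"Expended:  {expenditure_rate:.0%}")  # about 7 percent
```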
The Federal Reserve Act of 1913 established the Federal Reserve System as the country's central bank. The Federal Reserve System consists of the Federal Reserve Board located in Washington, D.C.; 12 Reserve Banks, which have 24 branches located throughout the nation; and the Federal Open Market Committee (FOMC), which is responsible for directing open market operations to influence the total amount of money and credit available in the economy. Each Reserve Bank is a federally chartered corporation with a board of directors. The Federal Reserve Act authorizes the Reserve Banks to make discount window loans, execute monetary policy operations at the direction of the FOMC, and examine bank holding companies and member banks under rules and regulations prescribed by the Federal Reserve Board, among other things. The Federal Reserve Board and the Reserve Banks are self-funded entities that deduct their expenses from their revenue and transfer the remaining amount to Treasury. Federal Reserve System revenues transferred to Treasury have increased substantially in recent years, chiefly as a result of interest income earned from the Federal Reserve System's large-scale emergency programs. To the extent that Reserve Banks suffer losses on emergency loans, these losses would be deducted from the excess earnings transferred to Treasury. Between late 2007 and early 2009, the Federal Reserve Board created more than a dozen new emergency programs to stabilize financial markets and provided financial assistance to avert the failures of a few individual institutions. The Federal Reserve Board authorized most of this emergency assistance under emergency authority contained in section 13(3) of the Federal Reserve Act. Three of the programs covered by this review—the Term Auction Facility, the dollar swap lines with foreign central banks, and the Agency Mortgage-Backed Securities Purchase Program—were authorized under other provisions of the Federal Reserve Act that do not require a determination that emergency conditions exist, although the swap lines and the Agency MBS program did require authorization by the FOMC. In many cases, the decisions by the Federal Reserve Board, the FOMC, and the Reserve Banks about the authorization, initial terms of, or implementation of the Federal Reserve System's emergency assistance were made over the course of only days or weeks as the Federal Reserve Board sought to act quickly to address rapidly deteriorating market conditions. The Federal Reserve Bank of New York (FRBNY) implemented most of these emergency activities under authorization from the Federal Reserve Board. In a few cases, the Federal Reserve Board authorized FRBNY to lend to a limited liability corporation (LLC) to finance the purchase of assets from a single institution. The LLCs created to assist individual institutions were Maiden Lane, Maiden Lane II, and Maiden Lane III. In 2009, FRBNY, at the direction of the FOMC, began large-scale purchases of mortgage-backed securities (MBS) issued by the housing government-sponsored enterprises, Fannie Mae and Freddie Mac, or guaranteed by Ginnie Mae. Purchases of these agency MBS were intended to provide support to the mortgage and housing markets and to foster improved conditions in financial markets more generally. Most of the Federal Reserve Board's broad-based emergency programs closed on February 1, 2010. Figure 1 provides a timeline for the establishment, modification, and termination of Federal Reserve System emergency programs subject to this review.
The Reserve Banks' and LLCs' financial statements, which include the emergency programs' accounts and activities, and their related financial reporting internal controls, are audited annually by an independent auditing firm. In addition, the Federal Reserve System has a number of internal entities that conduct audits and reviews of the Reserve Banks, including the emergency programs. As shown in figure 2, these other audits and reviews were conducted by the Federal Reserve Board's Division of Reserve Bank Operations and Payment Systems (RBOPS), the Federal Reserve Board's Office of Inspector General, and the individual Reserve Banks' internal audit functions. The independent financial statement audits and other reviews did not identify significant accounting or financial reporting internal control issues concerning the emergency programs. From 2008 through 2010, vendors were paid $659.4 million across 103 contracts to help establish and operate the Reserve Banks' emergency programs. The 10 largest contracts accounted for 74 percent of the total amount paid to all vendors. FRBNY was responsible for creating and operating all but two of the emergency programs and assistance efforts and therefore awarded nearly all of the contracts. See table 2 for the total number and value of contracts for the emergency programs and assistance. As shown in table 2, the Reserve Banks relied on vendors more extensively for programs that assisted single institutions than for broad-based emergency programs. The assistance provided to individual institutions was generally secured by existing assets that either belonged to or were purchased from the institution, its subsidiaries, or counterparties. The Reserve Banks did not have sufficient expertise available to evaluate these assets and therefore used vendors to do so. For example, FRBNY used a vendor to evaluate divestiture scenarios associated with the assistance to AIG. It also hired vendors to manage assets held by the Maiden Lanes. For the broad-based emergency programs, FRBNY hired vendors primarily for transaction-based services and collateral monitoring. Under these programs, the Reserve Banks purchased assets or extended loans in accordance with each program's terms and conditions. Because of this, the services that vendors provided for these programs were focused more on assisting with transaction execution than analyzing and managing securities, as was the case for the single-institution assistance. Most of the contracts, including 8 of the 10 highest-value contracts, were awarded noncompetitively, primarily due to exigent circumstances. These contract awards were consistent with FRBNY's existing acquisition policy, which applied to all services associated with the emergency programs and single-institution assistance. Under FRBNY policy, noncompetitive processes can be used in special circumstances, such as when a service is available from only one vendor or in exigent circumstances. FRBNY cited exigent circumstances for the majority of the noncompetitive contract awards. FRBNY officials said that the success of a program was often dependent on having vendors in place quickly to begin setting up the operating framework for the program. FRBNY's policy did not provide additional guidance on the use of competition exceptions, such as seeking as much competition as practicable and limiting the duration of noncompetitive contracts to the exigency period.
To better ensure that Reserve Banks do not miss opportunities to obtain competition and receive the most favorable terms for services acquired, we recommended that they revise their acquisition policies to provide such guidance. From 2008 through 2010, vendors were paid $659.4 million through a variety of fee structures. For a significant portion of the fees, program recipients reimbursed the Reserve Banks or the fees were paid from program income. The Reserve Banks generally used traditional market conventions when determining fee structures. For example, investment managers were generally paid a percentage of the portfolio value and law firms were generally paid an hourly rate. Fees for these contracts were subject to negotiation between the Reserve Banks and vendors. For some of the large contracts that were awarded noncompetitively, FRBNY offered vendors a series of counterproposals and was able to negotiate lower fees than initially proposed. During the crisis, FRBNY took steps to manage conflicts of interest related to emergency programs for its employees, program vendors, and members of its Board of Directors, but opportunities exist to strengthen its conflicts policies. Historically, FRBNY has managed potential and actual conflicts of interest for its employees primarily through enforcement of its Code of Conduct, which outlines broad principles for ethical behavior and specific restrictions on financial interests and other activities, such as restrictions on employees’ investments in depository institutions and bank holding companies, and incorporates the requirements of a federal criminal statute and its regulations. During the crisis, FRBNY expanded its guidance and monitoring for employee conflicts. However, while the crisis highlighted the potential for Reserve Banks to provide emergency assistance to a broad range of institutions, FRBNY has not yet revised its conflict policies and procedures to more fully reflect potential conflicts that could arise with this expanded role. For example, specific investment restrictions in FRBNY’s Code of Conduct continue to focus on traditional Reserve Bank counterparties—depository institutions or their affiliates and the primary dealers—and have not been expanded to further restrict employees’ financial interests in certain nonbank institutions that have participated in FRBNY emergency programs and could become eligible for future ones, if warranted. Given the magnitude of the assistance and the public’s heightened attention to the appearance of conflicts related to Reserve Banks’ emergency actions, existing policies and procedures for managing employee conflicts may not be sufficient to avoid the appearance of a conflict in all situations. During our review, Federal Reserve Board and FRBNY staff told us that the Federal Reserve System plans to review and update the Reserve Banks’ Codes of Conduct as needed given the Federal Reserve System’s recently expanded role in regulating systemically significant financial institutions. In light of this ongoing effort, we recommended that the Federal Reserve System consider how potential conflicts from emergency lending could inform any changes. FRBNY managed risks related to vendor conflicts of interest primarily through contract protections and oversight of vendor compliance with these contracts, but these efforts have certain limitations. 
For example, while FRBNY's Legal Division negotiated contract provisions intended to help ensure that vendors took appropriate steps to mitigate conflicts of interest related to the services they provided for FRBNY, FRBNY lacked written guidance on protections that should be included to help ensure vendors fully identify and remediate conflicts. Rather than requiring written conflict remediation plans that were specific to the services provided for FRBNY, FRBNY generally reviewed and allowed vendors to rely on their existing enterprisewide policies for identifying conflicts. However, in some situations, FRBNY requested additional program-specific controls be developed. Further, FRBNY's on-site reviews of vendor compliance in some instances occurred as late as 12 months into a contract. In May 2010, FRBNY implemented a new vendor management policy but had not yet finalized more comprehensive guidance on vendor conflict issues. As a result, we recommended that FRBNY finalize this new policy to reduce the risk that vendors may not be required to take steps to fully identify and mitigate all conflicts. Individuals serving on the boards of directors of the Reserve Banks are generally subject to the same conflict-of-interest statute and regulations as federal employees. A number of Reserve Bank directors were affiliated with institutions that borrowed from the emergency programs, but Reserve Bank directors did not participate directly in making decisions about authorizing, setting the terms, or approving a borrower's participation in the emergency programs. Rather, FRBNY's Board of Directors assisted the Reserve Bank in helping ensure risks were managed through FRBNY's Audit and Operational Risk Committee. According to Federal Reserve Board officials, Reserve Banks granted access to borrowing institutions affiliated with Reserve Bank directors only if these institutions satisfied the proper criteria, regardless of potential director-affiliated outreach or whether the institution was affiliated with a director. Our review of the implementation of several program requirements did not find evidence that would indicate a systemic bias towards favoring one or more eligible institutions. The Federal Reserve Board approved key program terms and conditions that served to mitigate risk of losses and delegated responsibility to one or more Reserve Banks for executing each emergency lending program and managing its risk of losses. The Federal Reserve Board's early broad-based lending programs—Term Auction Facility, Term Securities Lending Facility, and Primary Dealer Credit Facility—required borrowers to pledge collateral in excess of the loan amount and included other features intended to mitigate risk of losses. The Federal Reserve Board's broad-based programs launched in late 2008 and early 2009 employed more novel lending structures to provide liquidity support to a broader range of key credit markets. These later broad-based liquidity programs included Asset-Backed Commercial Paper Money Market Mutual Fund Liquidity Facility, Commercial Paper Funding Facility, Money Market Investor Funding Facility, and Term Asset-Backed Securities Loan Facility. These liquidity programs, with the exception of the Term Asset-Backed Securities Loan Facility, did not require overcollateralization. To help mitigate the risk of losses, the Term Asset-Backed Securities Loan Facility, as well as the programs that did not require overcollateralization, accepted only highly-rated assets as collateral.
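To make the overcollateralization and haircut mechanics described above concrete, the sketch below computes how much a lender would advance against pledged collateral once a haircut schedule is applied. The asset classes, haircut rates, and market values are hypothetical and are not drawn from the terms of any Federal Reserve program.

```python
# Illustrative only: how a schedule of collateral haircuts limits lendable value.
# Asset classes, haircut rates, and market values below are hypothetical.
haircuts = {            # fraction of market value deducted before lending
    "treasury": 0.02,
    "agency_mbs": 0.05,
    "corporate_bond": 0.10,
}

pledged = [             # (asset class, market value in millions)
    ("treasury", 500.0),
    ("agency_mbs", 300.0),
    ("corporate_bond", 200.0),
]

lendable = sum(value * (1.0 - haircuts[asset]) for asset, value in pledged)
market_value = sum(value for _, value in pledged)

print(f"Collateral market value: ${market_value:,.1f}M")
print(f"Maximum loan after haircuts: ${lendable:,.1f}M")
print(f"Overcollateralization: ${market_value - lendable:,.1f}M")
```

Because the collateral's market value exceeds the resulting loan, moderate declines in collateral value are absorbed before the lender faces a loss, which is the protective effect that overcollateralization is intended to provide.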
In addition, Commercial Paper Funding Facility, Money Market Investor Funding Facility, and Term Asset-Backed Securities Loan Facility incorporated various security features, such as the accumulation of excess interest and fee income to absorb losses, to provide additional loss protection. Also, for the assistance to specific institutions, the Reserve Banks negotiated loss protections with the institutions and hired vendors to help oversee the portfolios collateralizing loans. For each of the Maiden Lane transactions, FRBNY extended a senior loan to the LLC, and this loan was collateralized by the portfolio of assets held by the LLC. JP Morgan Chase & Co. agreed to take a first loss position of $1.15 billion for Maiden Lane, and AIG agreed to assume a similar first loss position for Maiden Lanes II and III. As of July 2011, most of the Federal Reserve Board's emergency loan programs had closed, and all of the programs that had closed did so without losses. Moreover, currently, the Federal Reserve Board does not project any losses on FRBNY's outstanding loans to Term Asset-Backed Securities Loan Facility borrowers and the Maiden Lane LLCs. To manage risks posed by the emergency programs, Reserve Banks developed new controls and FRBNY strengthened its risk management practices over time. In particular, FRBNY expanded its risk management function and enhanced its risk reporting and risk analytics capabilities. For example, in summer 2009, FRBNY expanded its risk management capabilities by adding expertise that would come to be organized as two new functions, Structured Products and Risk Analytics. Although FRBNY has improved its ability to monitor and manage risks from emergency lending, opportunities exist for FRBNY and the Federal Reserve System as a whole to strengthen risk management procedures and practices for any future emergency lending. Specifically, neither FRBNY nor the Federal Reserve Board tracked total potential exposures in adverse economic scenarios across all emergency programs. Moreover, the Federal Reserve System's existing procedures lack specific guidance on how Reserve Banks should exercise discretion to restrict or deny program access for higher-risk borrowers that were otherwise eligible for the Term Auction Facility and emergency programs for primary dealers. To strengthen practices for managing risk of losses in the event of a future crisis, we recommended that the Federal Reserve System document a plan for more comprehensive risk tracking and strengthen procedures to manage program access for higher-risk borrowers. The Federal Reserve Board and the Reserve Banks took steps to promote consistent treatment of eligible program participants and generally offered assistance on the same terms and conditions to eligible institutions in the broad-based emergency programs. However, in a few programs, the Reserve Banks placed restrictions on some participants that presented higher risk but lacked specific guidance to do so. Further, certain Federal Reserve Board decisions to extend credit to certain borrowers were not fully documented. The Federal Reserve Board created each broad-based emergency program to address liquidity strains in a particular credit market and designed program eligibility requirements primarily to target significant participants in these markets. The emergency programs extended loans both directly to institutions facing liquidity strains and through intermediary borrowers.
For programs that extended credit directly, the Federal Reserve Board took steps to limit program eligibility to institutions it considered to be generally sound. For example, Term Auction Facility loans were auctioned to depository institutions eligible to borrow from the discount window and expected by their local Reserve Bank to remain primary-credit-eligible during the term the Term Auction Facility loan would be outstanding. For programs that provided loans to intermediary borrowers, the Federal Reserve Board based eligibility requirements in part on the ability of borrowing institutions, as a group, to channel sufficient liquidity support to eligible sellers. For example, eligible Term Asset-Backed Securities Loan Facility borrowers included a broad range of institutions ranging from depository institutions to U.S.-organized investment funds. Federal Reserve Board officials told us that broad participation in Term Asset-Backed Securities Loan Facility was intended to facilitate the program goal of encouraging the flow of credit to consumers and small businesses. The Federal Reserve Board promoted consistent treatment of eligible participants in its emergency programs by generally offering assistance on the same terms and conditions to all eligible participants. For example, institutions that met the announced eligibility requirements for a particular emergency program generally could borrow at the same interest rate, against the same types of collateral, and where relevant, with the same schedule of haircuts applied to their collateral. As previously discussed, for a few programs, FRBNY's procedures did not have specific guidance to help ensure that restrictions were applied consistently to higher-risk borrowers. Moreover, the Federal Reserve Board could not readily provide documentation of all Term Auction Facility restrictions placed on individual institutions. By having written procedures to guide decision-making for restrictions and suggestions for documentation of the rationale for such decisions, the Federal Reserve Board may be able to better review such decisions and help ensure that future implementation of emergency lending programs will result in consistent treatment of higher-risk borrowers. Our review of Federal Reserve System data for selected programs found that incorrect application of certain program requirements was generally infrequent and that cases of incorrect application of criteria did not appear to indicate intentional preferential treatment of one or more program participants. The Federal Reserve Board did not fully document the basis for its decisions to extend credit on terms similar to those available at the Primary Dealer Credit Facility (PDCF) to certain broker-dealer affiliates of four of the primary dealers. In September and November of 2008, the Federal Reserve Board invoked section 13(3) of the Federal Reserve Act to authorize FRBNY to extend credit to the London-based broker-dealer subsidiaries of Merrill Lynch, Goldman Sachs, Morgan Stanley, and Citigroup, as well as the U.S. broker-dealer subsidiaries of Merrill Lynch, Goldman Sachs, and Morgan Stanley. Federal Reserve Board officials told us that the Federal Reserve Board did not consider the extension of credit to these subsidiaries to be a legal extension of PDCF but separate actions to specifically assist these four primary dealers by using PDCF as an operational tool.
Federal Reserve Board officials told us that the Federal Reserve Board did not draft detailed memoranda to document the rationale for all uses of section 13(3) authority but that unusual and exigent circumstances existed in each of these cases as critical funding markets were in crisis. However, without more complete documentation, how assistance to these broker-dealer subsidiaries satisfied the statutory requirements for using this authority remains unclear. Moreover, without more complete public disclosure of the basis for these actions, these decisions may not be subject to an appropriate level of transparency and accountability. The Dodd-Frank Act includes new requirements for the Federal Reserve Board to report to Congress on any loan or financial assistance authorized under section 13(3), including the justification for the exercise of authority; the identity of the recipient; the date, amount, and form of the assistance; and the material terms of the assistance. To address these new reporting requirements, we recommended that the Federal Reserve Board set forth its process for documenting its rationale for emergency authorizations. In authorizing the Reserve Banks to operate its emergency programs, the Federal Reserve Board has not provided documented guidance on the types of program policy decisions—including allowing atypical uses of broad-based assistance—that should be reviewed by the Federal Reserve Board. Standards for internal control for federal government agencies provide that transactions and other significant events should be authorized and executed only by persons acting within the scope of their authority. Outside of the established protocols for the discount window, FRBNY staff said that the Federal Reserve Board generally did not provide written guidance on expectations for types of decisions or events requiring formal Federal Reserve Board review, although program decisions that deviated from policy set by the Federal Reserve Board were generally understood to require Board staff consultation. In 2009, FRBNY allowed an AIG-sponsored entity to continue to issue commercial paper to the Commercial Paper Funding Facility, even though a change in program terms by the Federal Reserve Board likely would have made it ineligible. FRBNY staff said they consulted the Federal Reserve Board regarding this situation, but did not document this consultation and did not have any formal guidance as to whether such continued use required approval by the Federal Reserve Board. To better ensure an appropriate level of transparency and accountability for decisions to extend or restrict access to emergency assistance, we recommended that the Federal Reserve Board document its guidance to Reserve Banks on program decisions that require consultation with the Federal Reserve Board. To assess whether program use was consistent with the Federal Reserve Board's announced policy objectives, we analyzed program transaction data to identify significant trends in borrowers' use of the programs. Our analysis showed that large global institutions were among the largest users of several programs. U.S. branches and agencies of foreign banks and U.S. subsidiaries of foreign institutions received over half of the total dollar amount of Commercial Paper Funding Facility and Term Auction Facility loans (see fig. 3). According to Federal Reserve Board staff, they designed program terms and conditions to discourage use that would have been inconsistent with program policy objectives.
Program terms—such as the interest charged and haircuts applied—generally were designed to be favorable only for institutions facing liquidity strains. Use of the programs generally peaked during the height of the financial crisis and fell as market conditions recovered (see fig. 4). Within and across the programs, certain participants used the programs more frequently and were slower to exit than others. Reserve Bank officials noted that market conditions and the speed with which the participant recovered affected use of the program by individual institutions. As a result of its monitoring of program usage, the Federal Reserve Board modified terms and conditions of several programs to reinforce policy objectives and program goals. During the financial crisis that began in the summer of 2007, the Federal Reserve System took unprecedented steps to stabilize financial markets and support the liquidity needs of failing institutions that it considered to be systemically significant. To varying degrees, these emergency actions involved the Reserve Banks in activities that went beyond their traditional responsibilities. Over time, FRBNY and the other Reserve Banks took steps to improve program management and oversight for these emergency actions, in many cases in response to recommendations made by their external auditor, Reserve Bank internal audit functions, or the Federal Reserve Board’s RBOPS. However, the Reserve Banks have not yet fully incorporated some lessons learned from the crisis into their policies for managing use of vendors, risk of losses from emergency lending, and conflicts of interest. Such enhanced policies could offer additional insights to guide future Federal Reserve System action, should it ever be warranted. We made seven recommendations to the Chairman of the Federal Reserve Board to further strengthen Federal Reserve System policies for selecting vendors, ensuring the transparency and consistency of decision making involving implementation of any future emergency programs, and managing risks related to these programs. In its comments on our report, the Federal Reserve Board agreed to give our recommendations serious attention and to strongly consider how to respond to them. Mr. Chairman, Ranking Member Clay, and Members of the Subcommittee, this completes my prepared statement. I am prepared to respond to any questions you or other Members of the Subcommittee may have at this time. If you or your staff have any questions about this testimony, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made major contributions to this statement include Karen Tremba (Assistant Director), Tania Calhoun, and John Fisher. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Dodd-Frank Wall Street Reform and Consumer Protection Act directed GAO to conduct a one-time audit of the emergency loan programs and other assistance authorized by the Board of Governors of the Federal Reserve System (Federal Reserve Board) during the recent financial crisis. This testimony summarizes the results of GAO's July 2011 report (GAO-11-696) examining the emergency actions taken by the Federal Reserve Board from December 1, 2007, through July 21, 2010. For these actions, where relevant, this statement addresses (1) accounting and financial reporting internal controls; (2) the use, selection, and payment of vendors; (3) management of conflicts of interest; (4) policies in place to secure loan repayment; and (5) the treatment of program participants. To meet these objectives, GAO reviewed program documentation, analyzed program data, and interviewed officials from the Federal Reserve Board and Reserve Banks (Federal Reserve System). On numerous occasions in 2008 and 2009, the Federal Reserve Board invoked emergency authority under the Federal Reserve Act of 1913 to authorize new broad-based programs and financial assistance to individual institutions to stabilize financial markets. Loans outstanding for the emergency programs peaked at more than $1 trillion in late 2008. The Federal Reserve Board directed the Federal Reserve Bank of New York (FRBNY) to implement most of these emergency actions. In a few cases, the Federal Reserve Board authorized a Reserve Bank to lend to a limited liability corporation (LLC) to finance the purchase of assets from a single institution. In 2009 and 2010, FRBNY also executed large-scale purchases of agency mortgage-backed securities to support the housing market. The Reserve Banks, primarily FRBNY, awarded 103 contracts worth $659.4 million from 2008 through 2010 to help carry out their emergency activities. A few contracts accounted for most of the spending on vendor services. For a significant portion of the fees, program recipients reimbursed the Reserve Banks or the fees were paid from program income. The Reserve Banks relied more extensively on vendors for programs that assisted a single institution than for broad-based programs. Most of the contracts, including 8 of the 10 highest-value contracts, were awarded noncompetitively, primarily due to exigent circumstances. These contract awards were consistent with FRBNY's acquisition policies, but the policies could be improved by providing additional guidance on the use of competition exceptions, such as seeking as much competition as practicable and limiting the duration of noncompetitive contracts to the exigency period. FRBNY took steps to manage conflicts of interest for its employees, directors, and program vendors, but opportunities exist to strengthen its conflict policies. In particular, FRBNY expanded its guidance and monitoring for employee conflicts, but new roles assumed by FRBNY and its employees during the crisis gave rise to potential conflicts that were not specifically addressed in the Code of Conduct or other FRBNY policies. As the Federal Reserve System considers revising its conflict policies given its new authority to regulate certain nonbank institutions, GAO recommended it consider how potential conflicts from emergency lending could inform any changes. FRBNY managed vendor conflict issues through contract protections and actions to help ensure compliance with relevant contract provisions, but these efforts had limitations. 
While the Federal Reserve System took steps to mitigate risk of losses on its emergency loans, opportunities exist to strengthen risk management practices for future crisis lending. The Federal Reserve Board approved program terms and conditions designed to mitigate risk of losses and one or more Reserve Banks were responsible for managing such risk for each program. Reserve Banks required borrowers under several programs to post collateral in excess of the loan amount. For programs that did not have this requirement, Reserve Banks required borrowers to pledge assets with high credit ratings as collateral. For loans to specific institutions, Reserve Banks negotiated loss protections with the private sector and hired vendors to help oversee the portfolios that collateralized loans. While the Federal Reserve System took steps to promote consistent treatment of eligible program participants, it did not always document processes and decisions related to restricting access for some institutions. GAO made seven recommendations to the Federal Reserve Board to strengthen policies for managing noncompetitive vendor selections, conflicts of interest, risks related to emergency lending, and documentation of emergency program decisions. The Federal Reserve Board agreed that GAO's recommendations would benefit its response to future crises and agreed to strongly consider how best to respond to them.
The Department of Homeland Security Appropriations Act for Fiscal Year 2007 states that "none of the funds appropriated…shall be obligated for full scale procurement of monitors until the Secretary of Homeland Security has certified…that a significant increase in operational effectiveness will be achieved." DNDO noted that certification would meet DHS guidelines for the review and approval of complex acquisitions. Specifically, DNDO stated that the Secretary's decision would be made in the context of DHS "Key Decision Point 3," which details the review and approval necessary for DHS acquisition programs to move from the "Capability Development and Demonstration" phase to the "Production and Deployment Phase." To meet the statutory requirement to certify that the ASPs will provide a "significant increase in operational effectiveness," and requirements outlined in DHS Management Directive 1400, DNDO, with input from subject matter experts, developed a series of tests intended to demonstrate, among other things, ASP performance and deployment readiness. The tests were conducted at several venues, including the Nevada Test Site, the New York Container Terminal, the Pacific Northwest National Laboratory, and five ports of entry. DNDO stated that its request for full-scale production approval would be based upon completed and documented results of these tests. To meet the Secretary's goal of deploying 225 ASPs by the end of calendar year 2008, Secretarial Certification was scheduled for June 26, 2007. To guide the test operations, DNDO defined a set of Critical Operational Issues that outlined the tests' technical objectives and provided the baseline to measure demonstrated effectiveness. The purpose of Critical Operational Issue 1 is to "verify operational effectiveness" of ASPs and determine whether "ASP systems significantly increase operational effectiveness relative to the current generation detection and identification system." DNDO conducted a series of tests at the Nevada Test Site, the single focus of which, according to DNDO, was to resolve Critical Operational Issue 1. According to DNDO, these tests began in February 2007 and concluded in March 2007. DNDO's Nevada Test Site test plan, dated January 12, 2007, identified three primary test objectives comparing the operational effectiveness of the ASP systems with existing detection and identification systems at current high-volume operational thresholds. Specifically, DNDO sought to determine the ASPs' probability of (1) detecting and identifying nuclear and radiological threats, (2) discriminating threat and non-threat radionuclides in primary inspections, and (3) detecting and identifying threat radionuclides in the presence of non-threat radionuclides. The Nevada Test Site test plan had two key components. First, DNDO developed guidelines for basic test operations and procedures, including test goals and expectations, test tasks and requirements, and roles and responsibilities of personnel involved in the testing, including the ASP contractors. The second component involved the National Institute of Standards and Technology developing test protocols that defined, among other things, how many times a container carrying test materials would need to be driven through portal monitors in order to obtain statistically relevant results.
DNDO’s tests at the Nevada Test Site were designed to compare the current system—using PVTs in primary inspections and a PVT and RIID combination in secondary inspections—to other configurations including PVTs in primary and ASPs in secondary, and ASPs in both primary and secondary inspection positions. DNDO tested three ASPs and four PVTs. The ASP vendors included Thermo, Raytheon, and Canberra. The PVT vendors included SAIC, TSA, and Ludlum. According to the test plan, to the greatest extent possible, PVT, ASP, and RIID handheld devices would be operated consistent with approved CBP standard operating procedures. Prior to “formal” collection of the data that would be used to support the resolution of Critical Operational Issue 1, DNDO conducted a series of tests it referred to as “dry runs” and “dress rehearsals.” The purpose of the dry runs was to, among other things, verify ASP systems’ software performance against representative test materials and allow test teams and system contractors to identify and implement software and hardware improvements to ASP systems. The purpose of the dress rehearsals was to observe the ASPs in operation against representative test scenarios and allow the test team to, among other things: develop confidence in the reliability of the ASP system so that operators and data analysts would know what to expect and what data to collect during the formal test, collect sample test data, and determine what errors were likely to occur in the data collection process and eliminate opportunities for error. In addition to improving ASP performance through dry runs and dress rehearsals conducted prior to formal data collection, ASP contractors were also significantly involved in the Nevada Test Site test processes. Specifically, the test plan stated that “ contractor involvement was an integral part of the NTS test events to ensure the systems performed as designed for the duration of the test.” Furthermore, ASP contractors were available on site to repair their system at the request of the test director and to provide quality control support of the test data through real time monitoring of available data. DNDO stated that Pacific Northwest National Laboratory representatives were also on site to provide the same services for the PVT systems. DNDO conducted its formal tests in two phases. The first, called Phase 1, was designed to support resolution of Critical Operational Issue 1 with high statistical confidence. DNDO told us on multiple occasions and in a written response that only data collected during Phase 1 would be included in the final report presented to the Secretary to request ASP certification. According to DNDO, the second, called Phase 3, provided data for algorithm development which targeted specific and known areas in need of work and data to aid in the development of secondary screening operations and procedures. According to DNDO documentation, Phase 3 testing was not in support of the full-scale production decision. Further, DNDO stated that Phase 3 testing consisted of relatively small sample sizes since the data would not support estimating the probability of detection with a high confidence level. On May 30, 2007, following the formal tests and the scoring of their results, DNDO told GAO that it had conducted additional tests that DNDO termed “Special Testing.” The details of these tests were not outlined in the Nevada Test Site test plan. 
On June 20, 2007, DNDO provided GAO with a test plan document entitled “ASP Special Testing” which described the test sources used to conduct the tests but did not say when the tests took place. According to DNDO, special testing was conducted throughout the formal Phase 1 testing process and included 12 combinations of threat, masking, and shielding materials that differed from “dry run,” “dress rehearsal,” and formal tests. DNDO also stated that the tests were “blind,” meaning that neither DNDO testing officials nor the ASP vendors knew what sources would be included in the tests. According to DNDO, these special tests were recommended by subject matter experts outside the ASP program to address the limitations of the original NTS test plan, including available time and funding resources, special nuclear material sources, and the number of test configurations that could be incorporated in the test plan, including source isotope and activity, shielding materials and thicknesses, masking materials, vehicle types, and measurement conditions. Unlike the formal tests, National Institute of Standards and Technology officials were not involved in determining the number of test runs necessary to obtain statistically relevant results for the special tests. Based on our analysis of DNDO’s test plan, the test results, and discussions with experts from four national laboratories, we are concerned that DNDO used biased test methods that enhanced the performance of the ASPs. In the dry runs and dress rehearsals, DNDO conducted many preliminary runs of radiological, nuclear, masking, and shielding materials so that ASP contractors could collect data on the radiation being emitted, and modify their software accordingly. Specifically, we are concerned because almost all of the materials, and most combinations of materials, DNDO used in the formal tests were identical to those that the ASP contractors had specifically set their ASPs to identify during the dry runs and dress rehearsals. It is highly unlikely that such favorable circumstances would present themselves under real world conditions. A key component of the NTS tests was to test the ASPs’ ability to detect and identify dangerous materials, specifically when that material was masked or “hidden” by benign radioactive materials. Based on our analysis, the masking materials DNDO used at NTS did not sufficiently test the performance limits of the ASPs. DOE national laboratory officials raised similar concerns to DNDO after reviewing a draft of the test plan in November 2006. These officials stated that the masking materials DNDO planned to use in its tests did not emit enough radiation to mask the presence of nuclear materials in a shipping container and noted that many of the materials that DOE program officials regularly observe passing through international ports emit significantly higher levels of radiation than the masking materials DNDO used for its tests. DNDO officials told us that the masking materials used at the Nevada Test Site represented the average emissions seen in the stream of commerce at the New York Container Terminal. However, according to data accumulated as part of DOE’s program to secure international ports (the Megaports program), a significant percentage of cargo passing through one European port potentially on its way to the United States has emission levels greater than the average radiation level for cargo that typically sets off radiation detection alarms. 
Importantly, DNDO officials told us that the masking materials used at the Nevada Test Site were not intended to provide insight into the limits of ASP detection capabilities. Yet, DNDO's own test plan for "ASP Special Testing" states, "The DNDO ASP NTS Test Plan was designed to… measure capabilities and limitations in current ASP systems." In addition, the NTS tests did not objectively test the ASPs against the currently deployed radiation detection system. DNDO's test plan stated that, to the greatest extent possible, PVT, ASP, and RIID handheld devices would be operated consistent with approved CBP standard operating procedures. However, after analyzing test results and procedures used at the Nevada Test Site, CBP officials determined that DNDO had, in fact, not followed a key CBP procedure. In particular, if a threat is identified during a secondary screening, or if the result of the RIID screening is not definitive, CBP procedures require officers to send the data to CBP's Laboratories and Scientific Services for further guidance. DNDO did not include this critical step in its formal tests. CBP officials also expressed concern with DNDO's preliminary test results when we met with them in May 2007. With regard to the special tests DNDO conducted, based on what DNDO has told us and our own evaluation of the special test plan, we note two concerns: because DNDO did not consult the National Institute of Standards and Technology on the design of the blind tests, we do not know the statistical significance of the results, and the tests were not entirely blind because some of the nuclear materials used in the blind tests were also used to calibrate the ASPs on a daily basis. During the course of our work, CBP, DOE, and national laboratory officials we spoke to voiced concern about their lack of involvement in the planning and execution of the Nevada Test Site tests. We raised our concerns about this issue and those of DOE and CBP to DNDO's attention on multiple occasions. In response to these concerns, specifically those posed by DOE, DNDO convened a conference of technical experts on June 27, 2007, to discuss the Nevada test results and the methods DNDO used to test the effects of masking materials on what the ASPs are able to detect. As a result of discussions held during that meeting, subject matter experts agreed that computer-simulated injection studies could help determine the ASPs' ability to detect threats in the presence of highly radioactive masking material. According to a Pacific Northwest National Laboratory report submitted to DNDO in December 2006, injection studies are particularly useful for measuring the relative performance of algorithms, but their results should not be construed as a measure of (system) vulnerability. To assess the limits of portal monitors' capabilities, the Pacific Northwest National Laboratory report states that actual testing should be conducted using threat objects immersed in containers with various masking agents, shielding, and cargo. DNDO officials stated at the meeting that further testing could be scheduled, if necessary, to fully satisfy DOE concerns. On July 20, 2007, DHS Secretary Chertoff notified certain members of the Congress that he planned to convene an independent expert panel to review DNDO's test procedures, test results, associated technology assessments, and cost-benefit analyses to support the final decision to deploy ASPs.
In making this announcement, Secretary Chertoff noted the national importance of developing highly effective radiation detection and identification capabilities as one of the main reasons for seeking an independent review of DNDO's actions. On August 30, 2007, the DHS Under Secretary for Management recommended that the Secretary of Homeland Security delay Secretarial Certification of ASPs for an additional two months. According to DHS, the delay is intended to provide CBP more time to field ASP systems, a concern CBP had raised early in our review. Effectively detecting and identifying radiological or nuclear threats at U.S. borders and ports of entry is a vital matter of national security, and developing new and advanced technology is critical to U.S. efforts to prevent a potential attack. However, it is also critical to fully understand the strengths and weaknesses of any next generation radiation detection technology before it is deployed in the field and to know, to the greatest extent possible, when or how that equipment may fail. In our view, the tests conducted by DNDO at the Nevada Test Site between February and March 2007 used biased test methods and were not an objective assessment of the ASPs' performance capabilities. We believe that DNDO's test methods—specifically, conducting dry runs and dress rehearsals with contractors prior to formal testing—enhanced the performance of the ASPs beyond what they are likely to achieve in actual use. Furthermore, the tests were not a rigorous evaluation of the ASPs' capabilities, but rather a developmental demonstration of ASP performance under controlled conditions which did not test the limitations of the ASP systems. As a result of DNDO's test methods and the limits of the tests—including a need to meet a secretarial certification deadline and the limited configurations of special nuclear material sources, masking, and shielding materials used—we believe that the results of the tests conducted at the Nevada Test Site do not demonstrate a "significant increase in operational effectiveness" relative to the current detection system, and cannot be relied upon to make a full-scale production decision. We recommend that the Secretary of Homeland Security take the following actions: Delay Secretarial Certification and full-scale production decisions of the ASPs until all relevant tests and studies have been completed and limitations to these tests and studies have been identified and addressed. Furthermore, results of these tests and studies should be validated and made fully transparent to DOE, CBP, and other relevant parties. Once the tests and studies have been completed, evaluated, and validated, DHS should determine, in cooperation with CBP, DOE, and other stakeholders, including independent reviewers, whether additional testing is needed. If additional testing is needed, the Secretary should appoint an independent group within DHS, not aligned with the ASP acquisition process, to conduct objective, comprehensive, and transparent testing that realistically demonstrates the capabilities and limitations of the ASP system. This independent group would be separate from the recently appointed independent review panel. Finally, the results of the tests and analyses should be reported to the appropriate congressional committees before large-scale purchases of ASPs are made. Mr. Chairman, this concludes our prepared statement. We would be happy to respond to any questions you or other members of the subcommittee may have.
For further information about this testimony, please contact me, Gene Aloise, at (202) 512-3841 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Erika D. Carter, Alison O'Neill, Jim Shafer, Daren Sweeney, and Eugene Wisnoski made key contributions to this statement. Combating Nuclear Smuggling: DHS's Decision to Procure and Deploy the Next Generation of Radiation Detection Equipment Is Not Supported by Its Cost-Benefit Analysis. GAO-07-581T. Washington, D.C.: March 14, 2007. Nuclear Nonproliferation: Focusing on the Highest Priority Radiological Sources Could Improve DOE's Efforts to Secure Sources in Foreign Countries. GAO-07-580T. Washington, D.C.: March 13, 2007. Combating Nuclear Smuggling: DNDO Has Not Yet Collected Most of the National Laboratories' Test Results on Radiation Portal Monitors in Support of DNDO's Testing and Development Program. GAO-07-347R. Washington, D.C.: March 9, 2007. Technology Assessment: Securing the Transport of Cargo Containers. GAO-06-68SU. Washington, D.C.: January 25, 2006. Combating Nuclear Smuggling: DHS's Cost-Benefit Analysis to Support the Purchase of New Radiation Detection Portal Monitors Was Not Based on Available Performance Data and Did Not Fully Evaluate All the Monitors' Costs and Benefits. GAO-07-133R. Washington, D.C.: October 17, 2006. Combating Nuclear Terrorism: Federal Efforts to Respond to Nuclear and Radiological Threats and to Protect Emergency Response Capabilities Could Be Strengthened. GAO-06-1015. Washington, D.C.: September 21, 2006. Border Security: Investigators Transported Radioactive Sources Across Our Nation's Borders at Two Locations. GAO-06-940T. Washington, D.C.: July 7, 2006. Combating Nuclear Smuggling: Challenges Facing U.S. Efforts to Deploy Radiation Detection Equipment in Other Countries and in the United States. GAO-06-558T. Washington, D.C.: March 28, 2006. Combating Nuclear Smuggling: DHS Has Made Progress Deploying Radiation Detection Equipment at U.S. Ports-of-Entry, but Concerns Remain. GAO-06-389. Washington, D.C.: March 22, 2006. Combating Nuclear Smuggling: Corruption, Maintenance, and Coordination Problems Challenge U.S. Efforts to Provide Radiation Detection Equipment to Other Countries. GAO-06-311. Washington, D.C.: March 14, 2006. Combating Nuclear Smuggling: Efforts to Deploy Radiation Detection Equipment in the United States and in Other Countries. GAO-05-840T. Washington, D.C.: June 21, 2005. Preventing Nuclear Smuggling: DOE Has Made Limited Progress in Installing Radiation Detection Equipment at Highest Priority Foreign Seaports. GAO-05-375. Washington, D.C.: March 31, 2005. Homeland Security: DHS Needs a Strategy to Use DOE's Laboratories for Research on Nuclear, Biological, and Chemical Detection and Response Technologies. GAO-04-653. Washington, D.C.: May 24, 2004. Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington, D.C.: March 31, 2004. Homeland Security: Preliminary Observations on Efforts to Target Security Inspections of Cargo Containers. GAO-04-325T. Washington, D.C.: December 16, 2003. Homeland Security: Radiation Detection Equipment at U.S. Ports of Entry. GAO-03-1153TNI. Washington, D.C.: September 30, 2003. Homeland Security: Limited Progress in Deploying Radiation Detection Equipment at U.S. Ports of Entry. GAO-03-963. Washington, D.C.: September 4, 2003.
Container Security: Current Efforts to Detect Nuclear Materials, New Initiatives, and Challenges. GAO-03-297T. Washington, D.C.: November 18, 2002. Customs Service: Acquisition and Deployment of Radiation Detection Equipment. GAO-03-235T. Washington, D.C.: October 17, 2002. Nuclear Nonproliferation: U.S. Efforts to Combat Nuclear Smuggling. GAO-02-989T. Washington, D.C.: July 30, 2002. Nuclear Nonproliferation: U.S. Efforts to Help Other Countries Combat Nuclear Smuggling Need Strengthened Coordination and Planning. GAO-02-426. Washington, D.C.: May 16, 2002. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security's (DHS) Domestic Nuclear Detection Office (DNDO) is responsible for addressing the threat of nuclear smuggling. Radiation detection portal monitors are key elements in our national defenses against such threats. DHS has sponsored testing to develop new monitors, known as advanced spectroscopic portal (ASP) monitors. In March 2006, GAO recommended that DNDO conduct a cost-benefit analysis to determine whether the new portal monitors were worth the additional cost. In June 2006, DNDO issued its analysis. In October 2006, GAO concluded that DNDO did not provide a sound analytical basis for its decision to purchase and deploy ASP technology and recommended further testing of ASPs. DNDO conducted this ASP testing at the Nevada Test Site (NTS) between February and March 2007. GAO's statement addresses the test methods DNDO used to demonstrate the performance capabilities of the ASPs and whether the NTS test results should be relied upon to make a full-scale production decision. Based on our analysis of DNDO's test plan, the test results, and discussions with experts from four national laboratories, we are concerned that DNDO's tests were not an objective and rigorous assessment of the ASPs' capabilities. Our concerns with DNDO's test methods include the following: (1) DNDO used biased test methods that enhanced the performance of the ASPs. Specifically, DNDO conducted numerous preliminary runs of almost all of the materials, and combinations of materials, that were used in the formal tests and then allowed ASP contractors to collect test data and adjust their systems to identify these materials. It is highly unlikely that such favorable circumstances would present themselves under real world conditions. (2) DNDO's NTS tests were not designed to test the limitations of the ASPs' detection capabilities--a critical oversight in DNDO's original test plan. DNDO did not use a sufficient amount of the type of materials that would mask or hide dangerous sources and that ASPs would likely encounter at ports of entry. DOE and national laboratory officials raised these concerns to DNDO in November 2006. However, DNDO officials rejected their suggestion of including additional and more challenging masking materials because, according to DNDO, there would not be sufficient time to obtain them based on the deadline imposed by obtaining Secretarial Certification by June 26, 2007. By not collaborating with DOE until late in the test planning process, DNDO missed an important opportunity to procure a broader, more representative set of well-vetted and characterized masking materials. (3) DNDO did not objectively test the performance of handheld detectors because it did not use a critical CBP standard operating procedure that is fundamental to this equipment's performance in the field. Because of concerns raised that DNDO did not sufficiently test the limitations of ASPs, DNDO is attempting to compensate for weaknesses in the original test plan by conducting additional studies--essentially computer simulations. While DNDO, CBP, and DOE have now reached an agreement to wait and see whether the results of these studies will provide useful data regarding the ASPs' capabilities, in our view and those of other experts, computer simulations are not as good as actual testing with nuclear and masking materials.
When a business hires an employee, the business generally becomes responsible for collecting and paying three federal taxes—the personal income tax (withholding), FICA, and FUTA. It also becomes liable for state and local employment taxes: in most states, these include a state income tax and a state unemployment tax. For businesses, each tax presents, in turn, its own set of rules and regulations with its own particular exceptions and unique regulatory requirements. For the small business owner just starting up, these employment tax rules make compliance with the taxes both complex and confusing. Many apparent inconsistencies among the various tax code provisions can be explained, to some degree, by reference to the purpose of the individual tax. Broadly speaking, employment taxes can be broken into two large groups—those whose primary purpose is to raise general revenues (e.g., the federal income tax) and those that provide social welfare insurance (e.g., FICA and FUTA). Accomplishing the different goals of the various taxes and the policy trade-offs made in their design requires different regulatory schemes. For example, in the interest of fairness and to reflect the ability of different individuals to pay, the federal income tax applies progressive rates to employee wages, taxing higher wages more than lower wages and exempting some lower wage earners from taxation. FUTA, on the other hand, ensures that employers contribute to state unemployment funds by taxing employers at a flat rate for all wages paid to employees (up to $7,000 per employee), but reducing the tax owed by amounts paid to state unemployment insurance funds (down to a federal tax rate of 0.8 percent). Similarly, many state income taxes piggyback on the federal income tax; if a state determines that a federal provision would have an adverse effect on the state budget, however, it can choose to reject the provision as part of its state income tax code. The number and type of state and local tax assessments also vary. In New York City, for instance, an area with large amounts of commercial activity, a business may face as many as eight federal, state, and local employment taxes. Today we have brought along a chart to help illustrate the complexity of current employment taxes. Appendix III of this report is a copy of this chart. The chart is divided into two main parts: the left half of the chart covers federal taxes, and the other half covers state and local taxes. Along the bottom of the chart we list the different types of employment taxes, and in the middle of the chart we present the four major decision points an employer must come to before making actual tax payments. For state taxes, we have used as our examples those applied in Nebraska and Ohio. Aside from the fact that these are the home states of the Chairmen of this Commission, these states make a useful comparison for our purposes. Both states piggyback on federal income taxes. However, Nebraska has a primarily rural economy based mainly on agriculture and livestock. Ohio, on the other hand, has a more urban economy that includes over five times the number of businesses as Nebraska. Ohio law provides for more extensive business regulation than Nebraska—for example, three additional local employment taxes: city and village income taxes, school district tax, and workers' compensation payroll tax. The three federal employment taxes are as follows: 1. Federal Income Tax Withholding: the employer must withhold federal income tax from the employee's wages and pay it to the federal government. 2. Social Security and Medicare Taxes (required by the Federal Insurance Contributions Act [FICA]): the employer must deduct the FICA tax from the employee's paycheck and pay it to the federal government, along with a matching amount imposed on the employer. 3. Federal Unemployment Tax (required by the Federal Unemployment Tax Act [FUTA]): FUTA imposes a tax on most employers.
This tax, in conjunction with state unemployment taxes, supplies the funds to provide benefits for unemployed persons under the state law. The tax is imposed solely on the employer and is not deducted from the employee's wages. In complying with federal, state, and local employment-related taxes, the business person must answer four questions: Is the worker an "employee" covered by the tax? Are the compensation payments to the employee "wages"? What is the employer's employment tax liability? What are the deposit and filing requirements? Our chart provides detail on these issues for federal taxes and provides general information on the application of these issues to state and local taxes. We will discuss each issue in turn, with examples of application on hypothetical small businesses. Once a business decides to hire a worker, the first issue to be considered is whether the worker is an employee for the purpose of each different employment tax. Major factors affecting this issue for federal taxes are outlined in our chart in the lower left corner. The pivotal question on this point is whether the worker is an employee or an "independent contractor." The standard "common law" test finds the worker to be an employee if the employer controls both what work is done and how it is performed. The Internal Revenue Service (IRS) augments this test with guidelines on the factors that can affect the final determination. The tax code also contains two sets of exceptions to the general rules: exceptions where businesses hiring workers who meet the common law test are nevertheless not responsible for some or all of the employment taxes, and exceptions where businesses hiring employees not meeting the common law test are responsible for either FICA and FUTA, or only FICA. In effect, the first set of exceptions shifts the burden for tax compliance from the employer to the employee, while the second set puts the burden on the employer. These exceptions to the general rules can affect various types of workers: for example, ministers, news vendors under age 18, certain family members, and homeworkers in a cottage industry. Depending on conditions (as stated specifically by statute), these workers may be exempt from income tax withholding, FICA, FUTA, or some combination of the three taxes. As an example, consider a jeweler, operating from her basement as a small manufacturing sole proprietor. Pressed by the coming holiday season, the jeweler would like to hire a neighbor to make small metal pieces, working in his own home with his own tools using material furnished by the jeweler. Even though this person—termed a "homeworker" in the federal tax code—will most likely not be considered a common law employee, the jeweler will still find herself liable for FICA taxes, both deducted from the homeworker's salary and matched by her business, if she pays the neighbor more than $100 in cash. Under the federal tax law, however, she will not be liable for FUTA taxes. Having determined that the worker is an employee covered by employment taxes, the next issue confronted by the employer is what compensation payments are taxable as wages. Compensation to an employee may take many forms—pension plans, health and life insurance plans, travel and business expenses, educational assistance, to list a few examples—as well as straight cash hourly wages. Arguably, the most difficult aspect of this issue is determining whether the compensation paid to the employee fits the category of nontaxable compensation.
Certain employee benefits, such as pension plan contributions, health and life insurance, commuting passes, and educational assistance, can all be taxable or nontaxable compensation, depending upon whether such benefits are paid out and administered in compliance with complex tax regulations. Compliance with such regulations requires the employer to pay meticulous attention to detailed legal provisions. Because of the exceptions and preferences in the code, how an employee is compensated can affect the tax liability of both the employer and employee. For example, suppose the owner of a beauty salon hired a part-time hairstylist, a person who is also a full-time undergraduate student at a local college. To keep bookkeeping simple, the new employer would most likely pay the hairstylist a cash hourly wage. However, she might also consider including "educational assistance" as compensation to her employee as an offset to a higher hourly rate. Because a recent law (P.L. 104-188) reinstated a tax break for employer-provided educational assistance, the employee may be eligible for annual tax-free educational assistance up to $5,250. As the tax-free educational assistance payments are not subject to FICA or FUTA, the payments would reduce the salon owner's overall payroll costs, as well as reduce the employee's federal income tax liability. Including the educational assistance would, however, also complicate the employer's recordkeeping. Concluding that the worker is an employee with compensation payments subject to employment taxes, the employer next must calculate his or her periodic tax liability. For the federal income tax, wages are withheld for each payroll period, and the amount withheld is based on the amount of wages and number of allowances claimed by the employee on his or her federal Form W-4. For FICA, the employer is to deduct 7.65 percent of the employee's wages (for wages up to $62,700; for wages over that amount, the employer is to deduct 1.45 percent) for the same payroll period and pay over the same amount as the business' matching share. FUTA is paid by the employer at a rate of 6.2 percent, but it can be reduced to as low as 0.8 percent with credit for payments to state unemployment tax. Similar calculations must be made for state tax liabilities. All these taxes are calculated independently of one another. For example, suppose two partners in a small gift shop in Lincoln, Nebraska, hire a part-time bookkeeper to work 10 hours a week at $10 an hour. The bookkeeper is paid $200 in cash twice each month, is single, and reports only 1 exemption on his Form W-4. When the partners consult the federal tax semimonthly withholding tables, they will find that they do not owe any withholding of federal income taxes for their employee. However, they will still owe payments for FICA, FUTA, Nebraska state income tax, and Nebraska state unemployment tax. For FUTA, although they pay the bookkeeper less than $1,500 per quarter, they still owe a flat percentage of 6.2 percent because the bookkeeper works once a week for over 20 weeks per year. However, as they will also be liable for 3.5 percent in Nebraska unemployment tax (as new employers), ultimately their federal FUTA liability will be reduced by the amount of state payments. As for state income tax, the partners look to Nebraska withholding tables—this shows a tax liability of $2.38 plus 3.65 percent of the excess wages over $179, for a total of $3.15 for each semimonthly pay period.
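To make this arithmetic concrete, the following is a minimal sketch, in Python, of the semimonthly calculation for the hypothetical gift shop bookkeeper. The function name and the hardcoded figures are illustrative assumptions drawn from the 1996-era rates, wage bases, and Nebraska table values cited above; an actual payroll calculation must use the published federal and state withholding tables and current law.

```python
# Sketch of the semimonthly employment tax arithmetic for the hypothetical
# Lincoln, Nebraska gift shop bookkeeper. Figures are the 1996-era values
# cited in this testimony and are for illustration only.

SEMIMONTHLY_WAGE = 200.00      # $10/hour x 10 hours/week, paid twice a month

FICA_RATE, FICA_WAGE_BASE = 0.0765, 62_700.00   # 1.45% applies above the base
FUTA_RATE, FUTA_WAGE_BASE = 0.062, 7_000.00     # reducible to 0.8% by state credit
NE_UNEMPLOYMENT_RATE = 0.035                    # Nebraska new-employer rate
# Nebraska semimonthly withholding for this employee (single, one exemption):
# $2.38 plus 3.65% of wages over $179, per the state table cited above.
NE_BASE_TAX, NE_RATE, NE_THRESHOLD = 2.38, 0.0365, 179.00


def semimonthly_taxes(wage, year_to_date_wages=0.0):
    """Return the employment tax amounts for one semimonthly pay period."""
    fica_wages = max(0.0, min(wage, FICA_WAGE_BASE - year_to_date_wages))
    futa_wages = max(0.0, min(wage, FUTA_WAGE_BASE - year_to_date_wages))
    return {
        "federal_income_tax_withheld": 0.00,  # per the federal semimonthly table
        "fica_employee_share": round(FICA_RATE * fica_wages, 2),
        "fica_employer_match": round(FICA_RATE * fica_wages, 2),
        "futa_before_state_credit": round(FUTA_RATE * futa_wages, 2),
        "state_unemployment_tax": round(NE_UNEMPLOYMENT_RATE * futa_wages, 2),
        "nebraska_income_tax_withheld": round(
            NE_BASE_TAX + NE_RATE * max(0.0, wage - NE_THRESHOLD), 2
        ),
    }


print(semimonthly_taxes(SEMIMONTHLY_WAGE))
# FICA is $15.30 each for employee and employer, FUTA before the state credit
# is $12.40, Nebraska unemployment tax is $7.00, and Nebraska income tax
# withholding is $3.15, matching the amounts discussed above.
```

Even in this simplified form, the sketch shows that five separate calculations, each with its own rate, wage base, and table, apply to a single $200 paycheck.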
Finally, to remit the employment taxes owed, the employer must figure out the deposit and filing requirements for each employment tax. Generally, employers must remit taxes at regular intervals, as the year progresses. They must also file statements on the amounts of taxes deposited either annually or quarterly, depending on the tax. When the deposit and filing requirements for federal taxes are combined with those for state tax assessments, these requirements can become quite complicated. Consider, for instance, the requirements applicable to a hypothetical construction company located in Cleveland, Ohio, doing most of its work in the Cleveland area, with several of its six employees residing in local counties where there are school district taxes. To fully comply with all federal and local requirements, the small business owner must make at least 56 tax deposits (if the company does business in other Ohio cities, the owner might have to make more deposits), using five different federal, state, and local forms. These tax deposits cover the collection and payment of seven different employment taxes. In addition to these tax deposits, the business must also file the federal Form 941 quarterly, the federal Form 940 annually, the Ohio Form IT-941 annually; send federal Form W-2 to each of his employees; and file federal Forms W-3 and W-2 with both the Social Security Administration (SSA) and the state of Ohio. We set out the schedule of deposit and filing requirements for this hypothetical Ohio company in appendix I. In summary, Messrs. Chairmen, hiring employees or even a single employee is a critical decision for businesses in terms of their tax liabilities and the complexities of the tax administration process they face. With laws and regulations so complicated, it is not surprising that working out feasible solutions to reduce complexity has been difficult, at best. Attempts to simplify provisions, or to make different tax code provisions consistent with each other, inevitably involve trade-offs and compromises in the administration of the tax programs. For instance, to consider eliminating a statutory exception in an unemployment tax to ensure consistency between that tax and, say, the federal income tax, one would need to weigh the trade-offs between the economic and political rationale for the particular exception and the need for simplification of the tax system. Moreover, legislative change by itself—even to simplify provisions—can add to the uncertainty of the regulations, leaving business owners unable to rely on long-term operating procedures. Since 1988, various federal and state groups have been trying to simplify aspects of the employment taxes. The current federal working group, STAWRS (Simplified Tax and Wage Reporting System), is operating under a memorandum of understanding among the Department of the Treasury, IRS, SSA, and the Department of Labor. STAWRS is addressing the employer burden through three broad categories of initiatives: (1) Streamlined Customer Service, (2) Single-Point Filing, and (3) Simplified Requirements. We discuss several of these initiatives today, and we include a list of all initiatives in appendix II. The first simplification project involved the processing of federal Wage and Tax Statements, Form W-2s. All states currently accept Form W-2 as a record of the wage payments paid to employees; however, the employer generally must send the Form W-2s to both the state and SSA. 
This project aims at reducing burden by showing the feasibility of requiring the employer to send Form W-2s only to SSA; until this project, SSA received both the federal and state W-2 data but did nothing with the state data. Under the current STAWRS demonstration project, SSA scans both federal and state data onto computer tapes, transmitting the state data to participating states through IRS. Thirty-four states are participating in this project. Three states—Oklahoma, Maine, and Oregon—have dropped the requirement for Form W-2 state filing altogether. A second initiative would combine federal and state quarterly reporting: under a pilot with the state of Oregon, the employer would complete a single combined quarterly report and then send only one quarterly form to the state, which, in turn, would forward the federal information to IRS. Montana has recently become a partner with STAWRS on a similar project. A third initiative is attempting to reconcile and simplify the numerous federal and state definitions of terms such as employee and wages into one harmonized wage code. The STAWRS group researched the federal and state laws to identify hundreds of differences in how the various tax codes defined their operative terms. For example, the Maryland tax code excepts yacht salesmen from its definition of employee, and Ohio excepts part-time orchestra members; only one exception—ministers—is found in every code. Recently, STAWRS developed a Harmonized Wage Code Blueprint, which was completed in September 1996, but it does not expect to finalize any parts of this work until 1998. Even though these initiatives are under way, the difficulty involved in making choices given the context of the political, economic, and administrative issues that must be considered continues to slow their progress. With the Form W-2 initiative, for example, one question is: Who pays the extra costs when SSA scans and delivers data to the states? With the combined quarterly form, privacy issues involving the receipt and transfer of tax data between the federal and state government must be resolved, as well as administrative issues such as how taxpayers alert the government to business address changes. Political problems abound with the concept of a harmonized wage code among all states and the federal government. For example, as we noted earlier, even among states that routinely piggyback on federal tax law, there are political and economic reasons why states will not accept federal changes to tax law. In summary, we believe that employment taxes present an instructive example not only as to the complexity of the current tax code but also as to the difficulties and potential pitfalls presented by simplification endeavors. Even the smallest change to the current very complicated regulatory scheme can involve political and economic trade-offs between types of taxes and between federal and state jurisdictions. Notwithstanding the enormity of the challenge, however, we believe that efforts to simplify the tax code are essential to reducing compliance burden, thereby making voluntary tax compliance easier for all types of businesses, large and small. Messrs. Chairmen, Members of the Commission, this ends our prepared statement. We would be pleased to answer any questions. Table I.1 shows the 1995 federal and state tax deposit and filing requirements for a hypothetical business located in Cleveland, Ohio. The business was started December 1, 1994, and has six employees, some of whom reside in Ohio school districts with an income tax assessment.
Table I.1 itself, which lists each required deposit and filing by tax and form, is not reproduced here; the deposits and filings include, for example, Form 8109 federal tax deposits, Form UCO-2QR state unemployment contribution and wage reports, Forms W-3 and W-2 filed with SSA, and Form IT-3 with W-2s filed with the state. The federal forms include: —Form 940, Employer's Annual Federal Unemployment (FUTA) Tax Return; —Form 941, Employer's Quarterly Tax Return; —Form 8109, Federal Tax Deposit Coupon; —Form W-2, Wage and Tax Statement; —Form W-3, Transmittal of Wage and Tax Statements. The state forms include: —IT-3, Transmittal of Wage and Tax Statements; —IT-501, Ohio's Employer's Payment of Income Tax Withheld; —IT-941, Ohio's Employer's Annual Reconciliation of Income Tax Withheld; —UCO-2QR, Employer's Contribution and Wage Report. The local forms include: —CCA-102, Municipal Depository Receipt; —CCA-W-3, Reconciliation of City Income Tax Withheld and Transmittal of Wage Statements; —SD-101, Employer's Payment of School District Tax Withheld. At the current time, the STAWRS Project Office is working on nine initiatives to ease the compliance burden on employers dealing with employment taxes. Table II.1 describes these initiatives and their present status, summarized as follows. One initiative has developed an "Employer Assistance Kit" for use on a new World Wide Web site, including procedures for employers to apply for an Employer Identification Number (EIN) on the Internet; it needs STAWRS Executive Steering Board approval to set up the Web site with the State of Illinois. Another has designed procedures whereby employers can check electronically with SSA on the validity of an employee's Social Security number; however, the procedures were originally designed for a small personal computer system, and owing to statutory language in the Welfare Reform Act, SSA may need to use a larger computer system. A third recently completed a limited pilot project in which employers electronically sent data for the quarterly Form 941 simultaneously to IRS and a state using a standardized format; three states were involved—California, Minnesota, and Texas. In another, Phase I demonstrated the ability of SSA to receive Form W-2s electronically with use of a "Value-Adding Network" (an intermediary computer "mailbox"), and in Phase II, SSA has identified 1,000 employers to use electronic personal identification numbers (PINs) to electronically transmit Forms W-3 and W-2. A current proposal being developed by the Federation of Tax Administrators would have SSA capture all state data on Form W-2 and place the data on magnetic media for distribution to participating states, which would eliminate dual W-2 filing for employers. STAWRS is working with the state of Oregon to add the federal Form 941 to the state's already combined report; in August 1996, a combined form was developed, and the state of Montana has recently become a partner, with the potential to add aspects of the Harmonized Wage Code. SSA is currently putting state W-2 data on computer tape for use by states; 34 states are now participating, and 3 states have eliminated their requirement for employers to file Form W-2s with the state in anticipation of adoption of the concept. STAWRS has also completed research of existing federal and state statutes and regulations and, in September 1996, completed a "Harmonized Wage Code Blueprint." Finally, it has completed research of existing federal and state statutes and regulations on filing and payment dates, has identified common filing and payment dates, and has developed a matrix of existing filing and payment dates.
GAO discussed the impact of various employment tax laws and regulations on small businesses hiring their first employees and all employees thereafter. GAO noted that: (1) employment tax compliance can be particularly burdensome to employers because of multiple federal, state, and local taxes; (2) each tax generally requires its own unique set of rules, regulations, and exceptions, which makes compliance difficult for employers; (3) the complexities discussed reflect the various trade-offs that have been made to address assorted tax policy issues; (4) these trade-offs include considerations as to the type of tax imposed, the types of compensation to be socially encouraged, and the fiscal requirements of individual governmental units and, consequently, they will not be easy to simplify; (5) respondents to an earlier GAO survey described characteristics of especially troublesome tax provisions, such as ambiguity, frequent changes, expiration clauses, and layers of federal and state regulation; and (6) efforts to simplify the tax code are essential to reducing compliance burden, thus making voluntary tax compliance easier for all types of businesses, large and small.
Under a demonstration project established by the Veterans Benefits Improvement Act of 2004 (VBIA), from February 8, 2005, through September 30, 2007, and subsequently extended through November 16, 2007, the Office of Special Counsel (OSC) and the Department of Labor (DOL) share responsibility for receiving and investigating claims under the Uniformed Services Employment and Reemployment Rights Act (USERRA) and seeking corrective action for federal employees. While the legislation did not establish specific goals for the demonstration project, the language mandating that GAO conduct a review suggested that duplication of effort and delays in processing cases were of concern to Congress. The demonstration project gave OSC, an independent investigative and prosecutorial agency, authority to receive and investigate claims for federal employees whose social security numbers end in odd numbers. DOL's Veterans' Employment and Training Service (VETS) investigated claims for individuals whose social security numbers end in even numbers. Under the demonstration project, OSC conducts an investigation of claims assigned to it to determine whether the evidence is sufficient to resolve the claimants' USERRA allegations and, if so, seeks voluntary corrective action from the involved agency or initiates legal action against the agency before the Merit Systems Protection Board (MSPB). For claims assigned to DOL, VETS conducts an investigation, and if it cannot resolve a claim, DOL is to inform claimants that they may request to have their claims referred to OSC. OSC's responsibility under USERRA for conducting independent reviews of referred claims after they are investigated but not resolved by VETS remained unchanged during the demonstration project. Before DOL sends a referred claim to OSC, two additional levels of review take place within the department. After OSC receives the referred claim from DOL, it reviews the case file, and if satisfied that the evidence is sufficient to resolve the claimant's allegations and that the claimant is entitled to corrective action, OSC begins negotiations with the claimant's federal executive branch employer. According to OSC, if an agreement for full relief via voluntary settlement by the employer cannot be reached, OSC may represent the servicemember before MSPB. If MSPB rules against the servicemember, OSC may appeal the decision to the U.S. Court of Appeals for the Federal Circuit. In instances where OSC finds that referred claims do not have merit, it informs servicemembers of its decision not to represent them and that they have the right to take their claims to MSPB without OSC representation. Figure 1 depicts USERRA claims' processing under the demonstration project. Under the demonstration project, VETS and OSC used two different models to investigate federal employee USERRA claims. Both DOL and OSC officials have said that cooperation and communication increased between the two agencies concerning USERRA claims, raising awareness of the issues related to servicemembers who are federal employees. In addition, technological enhancements have occurred, primarily on the part of VETS, since the demonstration project began. For example, at VETS, an enhancement to its database enables the electronic transfer of information between agencies and the electronic filing of USERRA claims. However, we found that DOL did not consistently notify claimants concerning the right to have their claims referred to OSC for further investigation or to bring their claims directly to MSPB if DOL did not resolve their claims. We also found data limitations at both agencies that made claim outcome data unreliable. DOL agreed with our findings and recommendations and has begun to take corrective action.
Since the start of the demonstration project on February 8, 2005, both DOL/VETS and OSC had policies and procedures for receiving, investigating, and resolving USERRA claims against federal executive branch employers. Table 1 describes the two models that we reported DOL and OSC used to process USERRA claims. Once a VETS investigator completes an investigation and arrives at a determination on a claim, the investigator is to contact the claimant, discuss the findings, and send a letter to the claimant notifying him or her of VETS's determination. When VETS is unsuccessful in resolving servicemembers' claims, DOL is to notify servicemembers who filed claims against federal executive branch agencies that they may request to have their claims referred to OSC or file directly with MSPB. Our review of a random sample of claims showed that for claims VETS was not successful in resolving (i.e., claims not granted or settled), VETS (1) failed to notify half the claimants in writing, (2) correctly notified some claimants, (3) notified others of only some of their options, and (4) incorrectly advised some claimants of a right applicable only to nonfederal claimants—to have their claims referred to the Department of Justice or to bring their claims directly to federal district court. In addition, we found that the VETS USERRA Operations Manual failed to provide clear guidance to VETS investigators on when to notify servicemembers of their rights and the content of the notifications. VETS had no internal process to routinely review investigators' determinations before claimants were notified of them. According to a VETS official, there was no requirement that a supervisor review investigators' determinations before notifying the claimant of the determination. In addition, legal reviews by a DOL regional Office of the Solicitor occurred only when a claimant requested to have his or her claim referred to OSC. A VETS official estimated that about 7 percent of claimants ask for their claims to be referred to OSC or, for nonfederal servicemembers, to the Department of Justice. During our review, citing our preliminary findings, DOL officials required each region to revise its guidance concerning the notification of rights. Since that time, DOL has taken the following additional actions: reviewed and updated policy changes to incorporate into the revised Operations Manual and prepared the first draft of the revised Manual; issued a memo in July 2007 from the Assistant Secretary for Veterans' Employment and Training to regional administrators, senior investigators, and directors requiring case closing procedure changes, including the use of standard language to help ensure that claimants (federal and nonfederal) are apprised of their rights; and began conducting mandatory training on the requirements contained in the memo in August 2007. In addition, according to DOL officials, beginning in January 2008, all claims are to be reviewed before the closure letter is sent to the claimant. These are positive steps. It is important for DOL to follow through with its plans to complete revisions to its USERRA Operations Manual, which DOL officials expect to complete in January 2008, to ensure that clear and uniform guidance is available to all involved in processing USERRA claims. Our review of data from VETS's database showed that from the start of the demonstration project on February 8, 2005, through September 30, 2006, VETS investigated a total of 166 unique claims.
We reviewed a random sample of case files to assess the reliability of VETS's data and found that the closed dates in VETS's database were not sufficiently reliable. Therefore, we could not use the dates for the time VETS spent on investigations in the database to accurately determine DOL's average processing time. Instead, we used the correct closed dates from the case files in our random sample and statistically estimated the average processing time for VETS's investigations from the start of the demonstration project through July 21, 2006—the period of our sample. Based on the random sample, there is at least a 95 percent chance that VETS's average processing time for investigations ranged from 53 to 86 days. During the same period, OSC received 269 claims and took an average of 115 days to process these claims. We found the closed dates in OSC's case tracking system to be sufficiently reliable. In his July 2007 memo discussed above, the Assistant Secretary for Veterans' Employment and Training also instructed regional administrators, senior investigators, and directors that investigators are to ensure that the closed date of each USERRA case entered in VETS's database matches the date on the closing letter sent to the claimant. We found data limitations at both agencies that affected our ability to determine outcomes of the demonstration project and could adversely affect Congress's ability to assess how well federal USERRA claims are processed and whether changes are needed. At VETS, we found an overstatement in the number of claims and unreliable data in VETS's database. From February 8, 2005, through September 30, 2006, VETS received a total of 166 unique claims, although 202 claims were recorded as opened in VETS's database. Duplicate, reopened, and transferred claims accounted for most of this difference. Also, in our review of a random sample of case files, we found that the dates recorded for case closure in VETS's database did not reflect the dates on the closure letters in 22 of 52 claims reviewed, so we used the correct dates from the sample to statistically estimate average processing time. In addition, the closed code, which VETS uses to describe the outcomes of USERRA claims (i.e., claim granted, claim settled, no merit, withdrawn), was not sufficiently reliable for reporting specific outcomes of claims. At OSC, we assessed the reliability of selected data elements in OSC's case tracking system in an earlier report and found that the corrective action data element, which would be used for identifying the outcomes of USERRA claims, was not sufficiently reliable. We separately reviewed those claims that VETS investigated but could not resolve and for which claimants requested referral of their claims to OSC. For these claims, two sequential DOL reviews take place: a VETS regional office prepares a report of the investigation, including a recommendation on the merits, and a regional Office of the Solicitor conducts a separate legal analysis and makes an independent recommendation on the merits. From February 8, 2005, through September 30, 2006, 11 claimants asked VETS to refer their claims to OSC. Of those 11 claims, 6 claims had been reviewed by both a VETS regional office and a regional Office of the Solicitor and sent to OSC. For those 6 claims, from initial VETS investigation through the VETS regional office and regional Office of the Solicitor reviews, it took an average of 247 days, or about 8 months, before the Office of the Solicitor sent the claims to OSC.
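The processing-time figure above was statistically estimated from a random sample of case files. As a rough illustration of that kind of estimate only, the following minimal Python sketch computes a sample mean and an approximate 95 percent confidence interval; the sample values are hypothetical placeholders, and the normal approximation (rather than a t critical value or a finite-population correction) is a simplifying assumption, not GAO's actual methodology.

```python
# Minimal illustration (not GAO's methodology): estimate a mean processing time
# and an approximate 95 percent confidence interval from a simple random sample.
# The processing times below are hypothetical placeholders, not real case data.
import math
import statistics

sample_days = [41, 55, 62, 38, 90, 73, 47, 66, 58, 84, 52, 69]  # hypothetical sample

n = len(sample_days)
mean_days = statistics.mean(sample_days)
std_error = statistics.stdev(sample_days) / math.sqrt(n)  # sample std dev / sqrt(n)

# For a small sample, a t critical value would be more appropriate; z is used here
# only to keep the sketch dependency-free.
z = statistics.NormalDist().inv_cdf(0.975)  # about 1.96 for a two-sided 95% interval
lower, upper = mean_days - z * std_error, mean_days + z * std_error
print(f"Estimated average: {mean_days:.0f} days (95% CI roughly {lower:.0f} to {upper:.0f})")
```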
Of the 6 referred claims that OSC received from DOL during the demonstration project, OSC had declined to represent the claimant in 5 and was still reviewing 1 as of September 30, 2006; OSC took an average of 61 days to independently review the claims and determine whether the claims had merit and whether to represent the claimants. You asked us about factors that could be considered in deciding whether to extend the demonstration project and to conduct a follow-up review. If the demonstration project were to be extended, it would be important to have clear objectives. Legislation creating the current demonstration project was not specific in terms of the objectives to be achieved. Having clear objectives would be important for the effective implementation of the extended demonstration project and would facilitate a follow-on evaluation. In this regard, our report provides baseline data that could inform this evaluation. Given adequate time and resources, an evaluation of the extended demonstration project could be designed and tailored to provide information to inform congressional decision making. Congress also may want to consider some potential benefits and limitations associated with options available if the demonstration is not extended. Table 2 presents two potential actions that could be taken and examples of potential benefits and limitations of each. The table does not include steps, such as enabling legislation, that might be associated with implementing a particular course of action. At a time when the nation's attention is focused on those who serve our country, it is important that employment and reemployment rights are protected for federal servicemembers who leave their employment to perform military or other uniformed service. Addressing the deficiencies that we identified during our review, including correcting inaccurate and unreliable data, is a key step to ensuring that servicemembers' rights under USERRA are protected. While DOL is taking positive actions in this regard, it is important that these efforts are carried through to completion. Chairman Akaka, Senator Burr, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. For further information regarding this statement, please contact George Stalcup, Director, Strategic Issues, at (202) 512-9490 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this statement included Belva Martin, Assistant Director; Karin Fangman; Tamara F. Stenzel; Kiki Theodoropoulos; and Greg Wilmoth. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Uniformed Services Employment and Reemployment Rights Act of 1994 (USERRA) protects the employment and reemployment rights of federal and nonfederal employees who leave their employment to perform military or other uniformed service. Under a demonstration project from February 8, 2005, through September 30, 2007, and subsequently extended through November 16, 2007, the Department of Labor (DOL) and the Office of Special Counsel (OSC) share responsibility for receiving and investigating USERRA claims and seeking corrective action for federal employees. In July 2007, GAO reported on its review of the operation of the demonstration project through September 2006. This testimony describes the findings of that work and actions taken to address GAO's recommendations. In response to the request from Congress, GAO also presents views on (1) factors to consider in deciding whether to extend the demonstration project and the merits of conducting a follow-up review and (2) options available if the demonstration is not extended. In preparing this statement, GAO interviewed officials from DOL and OSC to update actions taken on recommendations from its July 2007 report and developments since that review. Under the demonstration project, OSC receives and investigates claims for federal employees whose social security numbers end in odd numbers; DOL investigates claims for individuals whose social security numbers end in even numbers. Among GAO's findings were the following:
DOL and OSC use two different models to investigate federal USERRA claims, with DOL using a nationwide network and OSC using a centralized approach, mainly within its headquarters.
Since the demonstration project began, both DOL and OSC officials have said that cooperation and communication increased between the two agencies concerning USERRA claims, raising awareness of the issues related to servicemembers who are federal employees.
DOL did not consistently notify claimants concerning the right to have their claims referred to OSC for further investigation or to bring their claims directly to the Merit Systems Protection Board if DOL did not resolve their claims.
DOL had no internal process to routinely review investigators' determinations before claimants were notified of them.
Data limitations at both agencies made claim outcome data unreliable.
DOL officials agreed with GAO's findings and recommendations and are taking actions to address the recommendations. In July 2007, DOL issued guidance concerning case closing procedures, including standard language to ensure that claimants (federal and nonfederal) are apprised of their rights, and began conducting mandatory training on the guidance in August 2007. In addition, according to DOL officials, beginning in January 2008, all claims are to be reviewed before the closure letter is sent to the claimant. These are positive steps, and it will be important for DOL to follow through with these and other actions. If the demonstration project were to be extended, it would be important that clear objectives be set. Legislation creating the current demonstration project was not specific in terms of the objectives to be achieved. Clear project objectives would also facilitate a follow-on evaluation. In this regard, GAO's July 2007 report provides baseline data that could inform this evaluation. Given adequate time and resources, an evaluation of the extended demonstration project could be designed and tailored to provide information to inform congressional decision making.
GAO also presents potential benefits and limitations associated with options available if the demonstration project is not extended.
To assist workers who are laid off as a result of international trade, the Trade Expansion Act of 1962 created the Trade Adjustment Assistance program. Historically, the main benefits available through the program have been extended income support and training. Participants are generally entitled to income support, but the amount of funds available for training is limited by statute. For fiscal year 2004, about $1.1 billion was appropriated for income support and about $269 million for training and other benefits. Labor certifies groups of laid-off workers as potentially eligible for TAA benefits and services by investigating petitions that are filed on the workers' behalf. Workers are eligible for TAA if they were laid off as a result of international trade and were involved in the production of an article; workers served by the TAA program have generally been laid off from the manufacturing sector. Congress has amended the TAA program a number of times since its inception. For example, in 1974 Congress eased program eligibility requirements, and in 1988 Congress added a requirement that workers be in training to receive income support. In 1993 Congress created a separate North American Free Trade Agreement Transitional Adjustment Assistance program specifically for workers laid off because of trade with Canada or Mexico. The most recent amendments to the TAA program were included in the TAA Reform Act of 2002 (Pub. L. No. 107-210), which was signed into law in August 2002. The Reform Act consolidated the former TAA and NAFTA-TAA programs into a single TAA program and doubled the amount of funds available for training annually. The act also changed some administrative requirements in an effort to accelerate the process of enrolling workers in the program; increased the maximum number of weeks of income support available, to match the maximum number of weeks of training available; added two new benefits, a Health Coverage Tax Credit and a wage insurance benefit for certain older workers; and expanded program eligibility to include some secondary workers affected by trade with countries other than Canada and Mexico as well as more workers affected by a shift in production (see table 1). Most of the changes included in the act—including the petition-processing time limit, the training enrollment deadline, and the expanded group eligibility criteria—took effect for petitions filed on or after November 4, 2002. Congress allowed more time for the implementation of the new benefit programs created by the act, giving Labor until August 2003 to implement the wage insurance program and certain components of the Health Coverage Tax Credit. Under the current revised TAA program, eligible participants have access to a wider range of benefits and services than before, including the following:
Training. Participants may receive up to 130 weeks of training, including 104 weeks of vocational training and 26 weeks of remedial training (e.g., English as a second language or literacy).
Extended income support. Participants may receive up to 104 weeks of extended income support benefits beyond the 26 weeks of unemployment insurance (UI) benefits available in most states. This total includes 78 weeks while participants are completing vocational training and an additional 26 weeks, if necessary, while participants are completing remedial training. The amount of extended income support payments in a state is set by statute at the state's UI benefit level.
During their first 26 weeks of extended income support, participants must be enrolled in training, have completed training, or have a waiver from this requirement; to qualify for more than 26 weeks of extended income support, participants must be enrolled in training. The TAA statute lists six reasons why a TAA participant may receive a waiver from the training requirement, including that the worker possesses marketable skills or that the approved training program is not immediately available. States must review participants' waivers at least every 30 days, and if necessary may continue to renew participants' waivers each month throughout the initial 26 weeks of extended income support.
Job search and relocation benefits. Payments are available to help participants search for a job in a different geographical area and to relocate to a different area to take a job. Participants may receive up to $1,250 to conduct a job search. The maximum relocation benefit includes 90 percent of the participant's relocation expenses plus a lump sum payment of up to $1,250.
Health Coverage Tax Credit (HCTC). Eligible participants may receive a tax credit covering 65 percent of their health insurance premiums for certain health insurance plans. To be eligible for the credit, trade-affected workers must be receiving extended income support payments, be eligible for extended income support but still receiving UI payments, or be receiving benefits under the new wage insurance program. As a result, trade-affected workers who are still receiving UI rather than extended income support may register for the HCTC only if they are in training, have completed training, or have a waiver from the training requirement. The Internal Revenue Service (IRS), along with other federal agencies, administers the tax credit; states are required to regularly submit to the IRS lists of potentially eligible TAA participants.
Wage insurance. The wage insurance program—known as the Alternative TAA (ATAA) program—is a demonstration project designed for older workers who forgo training, obtain reemployment within 26 weeks, but take a pay cut. Provided the participant's annual earnings at his or her new job are $50,000 or less, the benefit provides 50 percent of the difference between the participant's pre- and postlayoff earnings, up to a maximum of $10,000 over 2 years (an illustrative calculation follows below). In order for the workers covered by a petition for TAA assistance to qualify for the benefit, the petition must include a specific request for ATAA eligibility. The petition must stipulate that a significant proportion of the workers covered by the petition are age 50 and older and that the workers lack easily transferable skills.
The process of enrolling trade-affected workers in the TAA program begins when a petition for TAA assistance is filed with Labor on behalf of a group of laid-off workers. Petitions may be filed by entities including the employer experiencing the layoff, a group of at least three affected workers, a union, or the state or local workforce agency. The law requires Labor to complete its investigation, and either certify or deny the petition, within 40 days after it has received the petition. Labor investigates whether a petition meets the requirements for TAA certification by taking steps such as contacting company officials, surveying a company's customers, and examining aggregate industry data.
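To make the benefit arithmetic above concrete, here is a minimal Python sketch of the ATAA wage insurance formula and the maximum-weeks totals. The function name and structure are illustrative only, not an official DOL calculation tool, and eligibility conditions such as the 26-week reemployment requirement and the petition-level criteria are deliberately omitted.

```python
# Illustrative sketch only; not an official DOL benefit calculator.

def ataa_wage_insurance(pre_layoff_annual: float, new_annual: float) -> float:
    """ATAA wage insurance: 50 percent of the earnings difference, capped at
    $10,000 over 2 years, payable only if new annual earnings are $50,000 or less."""
    if new_annual > 50_000 or new_annual >= pre_layoff_annual:
        return 0.0
    return min(0.5 * (pre_layoff_annual - new_annual), 10_000.0)

# Maximum weeks of support: 26 weeks of regular UI plus up to 104 weeks of extended
# income support (78 during vocational training + 26 during remedial training),
# which matches the 130-week training maximum (104 vocational + 26 remedial).
MAX_TRAINING_WEEKS = 104 + 26
MAX_INCOME_SUPPORT_WEEKS = 26 + 78 + 26
assert MAX_TRAINING_WEEKS == MAX_INCOME_SUPPORT_WEEKS == 130

print(ataa_wage_insurance(40_000, 28_000))  # 50% of a $12,000 gap -> 6000.0
print(ataa_wage_insurance(60_000, 35_000))  # 50% of $25,000 exceeds the cap -> 10000.0
print(ataa_wage_insurance(45_000, 52_000))  # new earnings above $50,000 -> 0.0
```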
When Labor has certified a petition, it notifies the relevant state, which has responsibility for contacting the workers covered by the petition, informing them of the benefits available to them, and telling them when and where to apply for benefits. The TAA statute lays out certain basic requirements that all certified petitions must meet, including that a significant proportion of workers employed by a company be laid off or threatened with layoff. In addition to meeting these basic requirements, a petition must demonstrate that the layoff is related to international trade in one of several ways. Table 2 summarizes these statutory eligibility requirements for the TAA program. If Labor denies a petition for TAA assistance, the workers who would have been certified under the petition have two options for challenging this denial. They may request an administrative reconsideration of the decision by Labor. To take this step, workers must cite reasons why the denial is erroneous according to the facts, the interpretation of the facts, or the law itself, and must mail their request to Labor within 30 days of the announcement of the denial. Workers may also appeal to the United States Court of International Trade for judicial review of Labor's denial. Workers must appeal a denial to the U.S. Court of International Trade within 60 days of either the initial denial or a denial following administrative reconsideration by Labor. (See app. II for a summary of final decisions made by the U.S. Court of International Trade since fiscal year 1999 on TAA appeals.) The Workforce Investment Act (WIA) of 1998 encouraged greater coordination between the TAA program and other federal employment and training programs. WIA required the use of a consolidated service delivery structure—called the one-stop center system—and mandated that services for about 17 categories of federal employment and training programs, including TAA, be accessible through this system. These programs must ensure that certain services, such as eligibility determination and assessment, are available through at least one one-stop center in each local area. The WIA dislocated worker program, also a mandated partner in the one-stop delivery system, is the federal government's primary employment and training program designed for laid-off workers. Funded at almost $1.5 billion in fiscal year 2004, the dislocated worker program includes two components: formula funds that Labor annually distributes to states (about $1.2 billion) and the national reserve (about $300 million). Labor uses part of the national reserve to award national emergency grants to states, based on their requests throughout the year, to help them respond to disasters and major layoffs. Labor also uses part of the national reserve to award national emergency grants specifically to serve trade-affected workers who are also eligible for the TAA program. States report that most trade-affected workers are enrolling in services sooner than in prior years because of some of the key provisions of the TAA Reform Act, but the new training enrollment deadline has had unintended consequences for some workers.
However, states reported that some workers have been negatively affected by the deadline. For example, some workers may not enroll in the most appropriate training or may miss the deadline and lose extended income support. These problems are heightened in the case of large layoffs, some states reported. Most workers are enrolling in TAA services sooner than in prior years because of two key provisions of the TAA Reform Act, the new petition-processing time limit and the new training enrollment deadline. The Reform Act reduces by one-third, from 60 days to 40 days, the time period in which Labor must review a petition. The purpose of the reduced time frame is to enable workers to receive benefits and services more quickly. In the past, Labor sometimes had difficulty meeting the 60-day time limit for petition processing. But it reduced the average processing time from 107 days in fiscal year 2002, before the new time limit took effect, to 38 days in fiscal year 2003 (see fig. 1). Also, Labor improved the percentage of petitions processed in 40 days or less from 17 percent in fiscal year 2002 to 62 percent in fiscal year 2003 after the act went into effect. According to a Labor official, management changes helped the agency reduce the average petition-processing time. For example, Labor developed a step-by-step timeline for staff, laying out when they must complete specific steps in the petition review process in order to meet the 40-day requirement. In addition, Labor increased the number of petition investigators by adding more contractors. Officials also have plans to reengineer the petition reviews in part to expedite the process. Workers are also enrolling in services sooner because of the new training enrollment deadline. The deadline requires workers to be enrolled in training or have a training waiver by the later of two dates: either 16 weeks after being laid off or 8 weeks after the petition is certified. Workers who fail to meet this deadline become ineligible to receive extended income support benefits. Forty-one of the 50 states surveyed reported that workers are now enrolling in training sooner as a result of this deadline. Most states also reported that the deadline accelerates the processes of determining eligibility and notifying and assessing workers. Prior to the TAA Reform Act, workers were required to be in training or have a training waiver in order to start collecting extended income support benefits after exhausting their UI eligibility—26 weeks in most states. Now, because of the new deadline, workers may be required to either be in training or possess a training waiver while still collecting regular UI benefits. Although the new training enrollment deadline gets most workers into training sooner, it has also had unintended consequences. For example, officials from the majority of states reported that as a result of the training enrollment deadline, some workers might not be enrolling in the most appropriate training because less time is available to assess workers' training needs. In order to meet the training enrollment deadline, officials may feel pressured to assess workers more quickly. State officials in some of the states we visited told us that some TAA program participants are not able to carefully select training programs because of rushed assessments. Another negative effect of the new time limit is that some workers miss the deadline and lose their eligibility for extended income support.
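The deadline rule described above is simply the later of two dates. As a minimal sketch, with illustrative dates and a function name of our own choosing rather than any DOL tool, the calculation looks like this:

```python
# Illustrative sketch of the training enrollment deadline: the later of
# 16 weeks after the layoff or 8 weeks after the petition is certified.
from datetime import date, timedelta

def training_enrollment_deadline(layoff_date: date, certification_date: date) -> date:
    return max(layoff_date + timedelta(weeks=16),
               certification_date + timedelta(weeks=8))

# Hypothetical example: a worker laid off in early December whose petition
# is certified the following March.
print(training_enrollment_deadline(date(2002, 12, 6), date(2003, 3, 14)))
# 2003-05-09 -- here the certification date, not the layoff date, drives the deadline
```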
Thirty-six states report that workers at least occasionally miss the deadline and consequently lose their eligibility for extended income support beyond what is available through UI benefits. A local official from North Carolina said that some certified workers in the local area who would like to enter the TAA program miss the deadline, either because they do not come in for TAA enrollment until after the deadline has passed or because they come into the one-stop before the deadline but do not leave themselves enough time to enroll in training or obtain a training waiver. For example, this official told us that in the case of a recent layoff of 120 workers, 20 workers did not come into the one-stop until after their deadline had passed. Other officials in North Carolina said that workers who lose their eligibility for extended income support generally are not allowed to enter training, because state and local officials are concerned that with no other source of income, workers will drop out of training. The ability of workers to meet the new training enrollment deadline may be negatively affected by delays in program operations. These delays, as well as delays by workers themselves in registering for TAA services, may contribute to some workers having insufficient time for an assessment of their training needs or missing the training enrollment deadline. One of the program operation delays occurs as a result of the time it takes Labor to notify states about certification decisions. After Labor has certified a petition, it waits several days before informing the state, to give relevant members of Congress advance notification. Twenty-one states reported that the time it takes Labor to notify states about certifications at least occasionally causes workers to miss the deadline. Another delay may occur as a result of the time it takes for states to receive lists of affected workers from companies. After a state receives notification from Labor of a certification, it obtains from the company a list of the workers affected by the certified layoff and sends a letter to these workers informing them of their potential eligibility for TAA. Sometimes companies are unable or unwilling to provide these lists in a timely manner. In these cases, some workers miss the deadline because they do not receive the notification soon enough or may have insufficient time for an assessment of their training needs. Twenty-seven states reported that the time it takes states to receive the list of affected workers at least occasionally causes workers to miss the deadline. In addition to these program delays, laid-off workers may have insufficient time for assessment or miss their enrollment deadline because of their own delays in seeking assistance. Some state and local officials in the sites we visited told us it often takes time for dislocated workers to process the emotional shock of being laid off and accept the need for assistance, which may cause them to miss the training enrollment deadline. Thirty-seven states reported on our survey that workers' delays in reporting to one-stop centers for counseling at least occasionally cause them to miss the deadline and lose their eligibility for extended income support. Figure 2 illustrates the program delays, using the timeline of an actual layoff that began in December 2002 in one of the states we visited. In this example, Labor notified the state 6 days after certifying the petition (step 5).
Almost another month elapsed before the state received a complete list of affected workers from the company (step 6). As a result, by the time the state mailed notification letters, affected workers had, at most, 3 weeks to register for services and enroll in training or receive a training waiver. The delays described above are heightened in the case of large layoffs, because the volume of workers who need services within a very short time period overwhelms the program’s capacity to provide workers with appropriate assessment. Processing a large number of affected workers quickly may be especially challenging for program administrators in rural areas, which do not have many staff to perform case management. Ten states reported that processing large layoffs often or very often causes workers to miss the training enrollment deadline, and an additional 9 states said processing large layoffs occasionally causes workers to miss the deadline. For example, Texas officials told us that when dealing with very large layoffs, states may need more time to assess and process workers than is allowed by the new training enrollment deadline. Officials in a rural area in Maine that experienced a large trade-related layoff said that it was challenging to get all affected workers to register for training within the deadline. This area hired additional workers to perform outreach to affected workers and encourage them to register for services. In an effort to prevent workers from missing the new deadline and losing eligibility for extended income support, some officials are issuing training waivers to workers who reach their deadlines without having enrolled in a training program. For example, officials in Maine reported that during a large layoff in a rural area, local staff granted mass waivers to workers so they would meet the deadline and preserve their extended income support benefits. According to a Maine official, staff in this rural area could not provide appropriate assessment within the training enrollment deadline to all affected workers, so waivers were necessary to prevent workers from losing eligibility for extended income support. Officials in some states and local areas reported an increased administrative workload associated with issuing more training waivers, primarily to accelerate Health Coverage Tax Credit enrollment, and noted that some other new provisions in the TAA Reform Act were difficult to fully implement. State officials are issuing more training waivers than in the past, in order to ensure that workers are able to access the HCTC after being laid off, and some officials told us that this increase in waivers has caused a significant administrative workload. States also reported that the provision that extends TAA eligibility to secondary workers and the one that provides a wage insurance benefit have been challenging to fully implement. Almost all states reported issuing an increased number of training waivers since the TAA Reform Act took effect. Three states reported in our survey that before the Reform Act took effect they issued training waivers to over 50 percent of TAA-eligible workers. Since the Reform Act took effect, 29 states have issued waivers to over 50 percent of eligible workers, and 15 of these issued waivers to over 75 percent of eligible workers. Labor’s national data indicate that overall states issued over 40 percent more training waivers in fiscal year 2003 than in 2002 (see fig. 3). 
Most states reported to us that the reason they have issued more training waivers is to ensure that workers are eligible for the HCTC. Thirty-eight states reported on our survey that to a great or very great extent, they have issued more training waivers since the TAA Reform Act took effect in order to allow workers to qualify for the HCTC. To activate eligibility for the HCTC, even while they are still receiving UI benefits, workers must meet the eligibility criteria for extended income support, including the requirement that they must be in training, have completed training, or have a training waiver. Officials in all the states we visited told us that many state and local officials are issuing waivers so that workers can quickly become eligible for the HCTC. Officials in two of these states noted that workers need waivers to enroll in the HCTC even before they reach their training enrollment deadline. Furthermore, officials in two other states told us that workers are receiving waivers to allow them to enroll in the HCTC even before these workers exhaust their UI benefits. According to officials in four of the five states we visited, issuing waivers to enable workers to qualify for the HCTC causes a significant administrative workload. The administrative workload associated with issuing training waivers is considerable, in part because training waivers have to be issued individually and must be reviewed monthly. Officials in one state noted that the workload associated with issuing waivers is especially burdensome during a very large layoff, when a large volume of workers must be processed. Furthermore, the increased administrative workload associated with issuing and reviewing training waivers may be compounded for states that choose to issue extensions to workers whose waivers expire before they exhaust their UI benefits. Despite officials' efforts to ensure that workers are eligible for the HCTC, the actual rate of HCTC participation is difficult to determine because reliable data on the total number of individuals actually eligible for HCTC are not available. For example, according to an October 2003 survey for the IRS, some of those identified as potentially eligible for, but not enrolled in, the HCTC were in fact ineligible for the tax credit because they had other coverage, such as Medicare or coverage through a spouse's employer. Although there are no reliable national data on the HCTC participation rate, officials in states we visited told us that workers might not be taking advantage of the HCTC because eligible individuals lack affordable health care insurance options from which to choose. Furthermore, officials in one state noted that some workers may not take advantage of the HCTC because they cannot afford to pay their entire health care insurance premium while they wait to enroll in the HCTC. States reported having difficulties with the implementation of two other reform provisions—the provision that extends TAA eligibility to an additional category of secondary workers and the new wage insurance provision. The TAA Reform Act extended eligibility to a new category of secondary workers—workers who supply parts to any company directly affected by trade, not just those affected by trade with Canada or Mexico, as was true under the previous NAFTA-TAA program—and the number of secondary workers covered by certified TAA petitions increased somewhat in fiscal year 2003.
However, it is unclear whether the number of secondary workers certified after the TAA Reform Act represents a small or large proportion of all secondary workers who are now potentially eligible for the TAA program, particularly because most states reported difficulties in identifying secondary workers and only some have increased their efforts to do so. According to Labor's data, the estimated number of secondarily affected workers covered by approved TAA petitions increased from about 3,600 workers in fiscal year 2002, before the Reform Act took effect, to about 4,700 workers in fiscal year 2003 (see fig. 4). Secondary workers have also increased as a proportion of all TAA-certified workers, from about 1 percent in fiscal year 2002 to about 2 percent in fiscal year 2003 (see fig. 5). However, the total number of secondary workers who are potentially eligible for the TAA program under the new eligibility guidelines is not known. As a result, it is unclear what proportion of secondary workers potentially eligible for services have been certified under the Reform Act. States reported facing challenges in identifying secondary workers. More than half of all states reported having at least some difficulty identifying secondarily affected workers. States reported using a range of methods to identify secondary workers eligible for the TAA program. For example, according to our survey, states are most likely to identify secondary workers by asking trade-affected employers for lists of their suppliers or finishers or by asking employers if their layoff was a result of losing business from other firms that may have been trade-affected. However, officials in most of the states we visited told us that some trade-affected employers are reluctant or find it difficult to provide the names of suppliers that may also be affected by their shutdown or reduced production. For example, officials in North Carolina told us that employers are sometimes hesitant to share this information because they do not want their suppliers to know that they are having financial difficulties. Also, officials in Maine told us that smaller employers may find it difficult to provide information on their suppliers or finishers because they do not have this information readily available. In addition, some trade-affected employers may no longer be in operation or may be difficult to contact. None of the state officials we talked with had developed procedures to identify workers in other states who are secondarily affected by layoffs in their own states—so workers in one state who are secondarily affected by a trade-related layoff in another state might never learn they may qualify for TAA services. Labor has also not developed a strategy to assist states in identifying workers who are secondarily affected by a layoff in a different state. More states are making significant efforts to identify secondary workers now than in the past, but this number remains relatively small. While only 5 states reported on our survey that they sought to identify eligible secondary workers to a great extent prior to the TAA Reform Act, 13 states reported that since the Reform Act took effect, they have sought to identify secondary workers to a great extent.
Officials in all of the states we visited told us that workers have expressed an interest in, or that they expect workers to be interested in, the new Alternative TAA program—a 5-year demonstration project providing a wage insurance subsidy to older workers who find reemployment quickly but at a lower wage. Most states also reported having difficulty implementing this new program. Thirty-eight states reported that they had at least some difficulty implementing the new wage insurance provision. One of the most commonly reported problems was the difficulty of developing new payment systems for issuing workers' monthly checks. For example, an official in one state we visited told us that the state's existing UI payment system could not be readily modified to issue payments to wage insurance beneficiaries. Furthermore, an official from another state told us the state's current UI payment system prohibits it from issuing checks to individuals identified in the system as employed. As a result, the state uses an off-line payment system to issue wage insurance checks. States also reported that a lack of guidance from Labor on this new provision hampered their efforts to implement it. Labor did not provide states with formal guidance on how to implement the provision until August 6, 2003, the same day that workers were first able to apply for the wage insurance program. In addition, some officials and employers found the wage insurance eligibility criteria problematic. The TAA statute clearly indicates that for a group of workers to be certified as eligible for the wage insurance program, the workers must lack easily transferable skills and a significant number of the workers must be age 50 or over. Petitioners must apply for wage insurance coverage when the petition is submitted to Labor, and as part of the investigation process, employers must confirm that their workers lack easily transferable job skills. The TAA statute also clearly states that to be individually eligible for wage insurance payments, workers must obtain reemployment within 26 weeks of layoff and may not receive TAA-funded training. According to Labor, it has been difficult to implement the wage insurance provision because of eligibility criteria that include the requirement that workers must lack easily transferable job skills. As a result of these eligibility requirements, according to Labor, the only workers who are likely to qualify for payments are those who take low-skill jobs at significant pay cuts, and for whom the $10,000 maximum subsidy falls far short of compensating them for their wage loss. On the other hand, some workers who have some transferable skills, can find jobs paying closer to their prelayoff wage, and need only temporary financial assistance may be denied access to the program. According to Labor, most denied wage insurance requests result from failure to meet this eligibility requirement. Officials in one state and employers in two other states also found the wage insurance eligibility criteria problematic. For example, officials in one state we visited told us that the eligibility criteria requiring workers to lack transferable job skills yet still find employment exclude workers who can find reemployment quickly but at lower wages, and who therefore could be well served by a wage insurance benefit.
In another case, an employer told us that several administrative workers were laid off because of a plant closure and were able to find new jobs that required the same job skills, but at a much lower pay level because they no longer had job seniority. These workers could have benefited from the program, according to their employer, but were denied the subsidy because they had transferable skills. In addition, a state official we visited reported that an employer found that it was difficult to assess the skill levels of an entire group of affected workers who often possess a diverse set of skills and skill levels. At this stage of implementation, it is unclear how many workers will take advantage of the wage insurance benefit. Most states did not fully implement their wage insurance programs in calendar year 2003, and some do not expect to have their systems implemented until September 2004. Only 19 states implemented their wage insurance programs during 2003; most of the remaining states have implemented or expect to implement their programs during 2004 (see fig. 6). In addition, it is unknown how many workers are currently utilizing wage insurance benefits. Of 1,962 TAA petitions approved during fiscal year 2003, 60 included approved requests for the wage insurance program—but at the time we conducted our data collection, Labor’s Division of Trade Adjustment Assistance had no data on the number of older workers enrolled in the wage insurance program. Labor is now collecting data on the number of workers enrolled in the wage insurance program and will assess the implementation issues associated with the wage insurance provision. Demand for TAA services has increased in recent years, and states have responded by using other federal resources to supplement available TAA funds. States have struggled to meet the higher demand with the TAA resources available to them, and some states have temporarily discontinued enrolling TAA-eligible workers in training, partly because of funding shortfalls. A perception that all TAA-eligible workers are entitled to training has contributed to problems with managing TAA training funds. However, Labor has encouraged states to take various steps to manage their limited TAA resources more effectively and to avoid treating training as the best option for all participants, and many states have taken steps to control their TAA training expenditures through efforts such as a more careful screening of workers’ training needs. Most states’ primary response to the increased demand for training has been to supplement their TAA funds with other federal resources, although some barriers remain to the integration of TAA with other federal programs. Demand for TAA assistance increased substantially between fiscal years 2001 and 2002, as measured by the estimated number of workers certified and the number of workers entering training. After increasing in fiscal year 2002, the number of workers certified and the number of workers entering training did not experience a further substantial increase in fiscal year 2003. According to Labor’s data, an estimated 270,000 workers were certified as eligible for TAA services in fiscal year 2002, a roughly 65 percent increase from 2001 and the largest number in any year since at least fiscal year 1995. The estimated number of certified workers then fell to about 200,000 in fiscal year 2003 (see fig. 7). 
Similarly, the number of eligible workers entering training annually increased in fiscal year 2002 to about 45,000, a 51 percent increase over fiscal year 2001 (see fig. 8). The increase in program demand in fiscal year 2002 coincided with a sharp decline in manufacturing employment that preceded the implementation of the TAA Reform Act of 2002. After having been relatively steady since 1995, manufacturing employment began to decline in 1999, and the steepest decline occurred between fiscal years 2001 and 2002—from about 16.8 million to about 15.5 million employees, almost an 8 percent drop (see fig. 9). According to the Congressional Budget Office, increased competition from imports is at least partially responsible for this decline in manufacturing employment, coupled with the recession in 2001 and other factors such as productivity improvements and reduced demand for manufactured goods. The increase in demand for TAA services may be more directly linked to the decline in manufacturing employment, insofar as it was related to international trade, than to the TAA Reform Act of 2002. While demand for TAA services increased substantially during fiscal year 2002, most provisions of the TAA Reform Act of 2002 did not take effect until early in fiscal year 2003. Many states report that available TAA training funds are not sufficient to meet the increased demand for services. Most states anticipate that in fiscal year 2004 they will have difficulties meeting the demand for TAA training with TAA training funds alone—even though the amount of funds available nationally for TAA training was doubled from $110 million to $220 million between fiscal years 2002 and 2003. According to our survey, 35 states expect that available TAA training funds for fiscal year 2004 will not cover the amount they will obligate and spend for TAA-eligible workers during the fiscal year. Eighteen states estimate this gap at over $1 million. A factor that has contributed to the difficulty states face in meeting increased demand is the perception that training is an entitlement for TAA-eligible workers. According to the TAA statute, a TAA-eligible worker is entitled to training if six training approval criteria are met, including the requirements that there is no suitable employment available for the worker and that the training is available at a reasonable cost. These criteria give states some discretion in determining which TAA-eligible workers should receive training. However, officials in four of the five states we visited said training has historically been viewed as an entitlement for the majority of TAA-eligible workers and that this perception persists among some case managers and unions. For example, an official in one state said some case managers responsible for the TAA program tend to approve training whenever a certified worker requests it, because they think these workers are entitled to training. This view may complicate efforts to manage limited TAA training funds. Two officials we talked with said training is seen as an entitlement because suitable employment has been defined through regulation as employment paying at least 80 percent of a worker's prelayoff wages. Most TAA-eligible workers, according to one of these officials, have high prelayoff wages but job skills that do not readily transfer to a new job, so they would need training to obtain employment paying 80 percent of their prelayoff wages.
Partly in response to the limited TAA training funds available to meet the demand for training, some states have temporarily discontinued enrolling TAA-eligible workers in training for periods of time. Nineteen states reported that, at some point between fiscal years 2001 and 2003, they temporarily discontinued enrolling TAA-eligible workers in training because they lacked adequate TAA training funds. Six states reported that they have taken this step during fiscal year 2004. These periods of enrollment deferral may make it more difficult for workers to complete their training programs. Pennsylvania, for example, stopped enrolling newly eligible workers in training for a 3-month period during fiscal year 2003 following more than a year of unusually high demand for TAA services. Workers seeking training during this period were given training waivers so they could continue to collect extended income support. When the state received additional TAA training funds from Labor, it encouraged these workers to register for training and many did so. However, those workers who enrolled in training had used up 3 months of extended income support payments while waiting for training funds to become available. As a result, they had fewer months of income support remaining to complete their training programs, and officials are concerned that they could be forced to drop out of their programs when they run out of extended income support payments. Since 2002, Labor has taken several steps intended to help states better manage their TAA training resources at a time of increased demand. Labor has encouraged states to put more emphasis on up-front assessment of workers’ employment and training needs, so they can provide workers with job search assistance rather than long-term training when appropriate. Also, Labor has changed its approach to distributing TAA training funds among the states. In the past, states requested TAA training funds from Labor throughout the fiscal year as their needs arose. In fiscal year 2004, for the first time, Labor allocated a portion of TAA training funds among the states according to a formula. It allocated 75 percent of available TAA training funds among the states at the beginning of the fiscal year, based on states’ historical training allocations and historical number of participants, and held the remaining 25 percent in reserve to help states that experience large and unanticipated trade-related layoffs. Labor’s goals in developing this new allocation approach were to give states a better idea of the training resources available to them, so they could more effectively plan for and budget their training expenditures, and to ensure that funds are distributed among states according to their needs. (App. IV contains information on the training funds received by each state in fiscal years 2001 through 2003, and each state’s fiscal year 2004 formula allocation.) Finally, Labor has encouraged states to obligate the TAA training funds they receive in a fiscal year only for training costs that will actually be incurred during that fiscal year, rather than for the full costs of training programs that span multiple fiscal years. One of the main goals of this effort, according to Labor officials, is to discourage states from tying up current year funds for future training costs that may not be incurred if workers drop out of training. Many states are now making efforts to more carefully manage their TAA training expenditures. 
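As a rough illustration of the kind of formula allocation described above, the sketch below distributes 75 percent of a hypothetical appropriation in proportion to a blend of each state's historical allocations and participant counts and holds 25 percent in reserve. The equal weighting of the two factors and the state figures are assumptions made for illustration; Labor's actual formula is not reproduced here.

```python
# Illustrative sketch only; Labor's actual allocation formula is not reproduced here.

def allocate_training_funds(total, hist_allocations, hist_participants, reserve_share=0.25):
    """Distribute (1 - reserve_share) of total up front, weighting each state's
    historical allocation share and historical participant share equally."""
    upfront = total * (1 - reserve_share)
    alloc_total = sum(hist_allocations.values())
    part_total = sum(hist_participants.values())
    upfront_by_state = {
        state: upfront * (0.5 * hist_allocations[state] / alloc_total
                          + 0.5 * hist_participants[state] / part_total)
        for state in hist_allocations
    }
    return upfront_by_state, total * reserve_share

# Hypothetical figures for three states.
by_state, reserve = allocate_training_funds(
    220_000_000,
    hist_allocations={"NC": 30_000_000, "PA": 20_000_000, "TX": 10_000_000},
    hist_participants={"NC": 9_000, "PA": 6_000, "TX": 5_000},
)
print({s: round(v) for s, v in by_state.items()}, reserve)
```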
More than half the states have developed new guidelines for enrolling participants since fiscal year 2001, including 21 that have taken this step during fiscal year 2004. Four of the five states we visited told us that they are making an effort to have case managers more carefully assess whether training is the most appropriate strategy for each TAA-eligible worker. Also, many states report that since 2001 they have tried to control the amount of training funds expended per TAA-eligible worker. Almost half the states have tried to control training costs by enrolling TAA-eligible workers in shorter-term training. States are also reducing the maximum amount that may be spent on training for each TAA-eligible worker. According to our survey, 37 states have established a cost limit on the amount that may be spent on training for each TAA participant, ranging from $3,500 to $25,000 (see fig. 10). Nine of these states reduced their cost limits between fiscal years 2001 and 2003 as a way to manage their TAA training funds, and 6 states have taken this step during fiscal year 2004. For example, Pennsylvania reduced its cost limit per TAA participant from $20,000 to $16,000 during fiscal year 2003, as part of its efforts to control costs. About half the states reported that since 2001 they have changed their approach to obligating TAA training funds and are now obligating current year funds only for current year training costs. Twenty-three states reported that they have taken this step in fiscal year 2004 alone. (See fig. 11 for the number of states that have taken the steps discussed above. See app. V for a detailed listing of steps taken by each state.) In addition to making changes in how they manage their TAA funds, states have also been turning to other federal resources to help provide case management and training to TAA-eligible workers. Labor has encouraged states to combine TAA with other federal programs to serve TAA-eligible workers, through written guidance and a series of regional forums for state officials. In response to limited TAA funds, almost all states—46—reported on our survey that they have been co-enrolling TAA participants in the WIA program for job search or training since 2001. States are also increasingly using WIA national emergency grant funds to provide services, including training and case management, to trade-affected workers. The amount of national emergency grant funds awarded annually to states specifically to serve TAA-eligible workers more than doubled from about $50 million per year in fiscal years 2001 and 2002 to about $120 million in fiscal year 2003. States use several federal funding sources to support case management for TAA-eligible workers, and increasingly are relying on WIA resources for this purpose. States may use their TAA administrative funds—15 percent of their TAA training formula allocations—for case management, but most states we visited said TAA administrative funds were not their main funding source for TAA case management. Only 12 states reported that they distribute TAA administrative funds to local areas to support case managers working directly with TAA participants. In most of the states we visited, officials told us that state Employment Service (ES) staff members have historically been the primary providers of direct case management services to trade-affected workers, and most states also told us that Wagner-Peyser grant funds have been the main funding source for these services.
Several states told us that in recent years, they have increased their reliance on WIA to provide case management to TAA-eligible workers, and in the majority of states nationwide WIA and ES staff are now the primary providers of case management services, including assessment of workers’ interests and skills, recommendation of training programs, and follow-up with workers during training. Officials in two states said they are relying on WIA to support case management for TAA-eligible workers partly in order to serve the increased number of workers eligible for the program. Officials in two other states said they are using WIA case managers to help meet their goal of more carefully assessing TAA-eligible workers’ training needs, because these case managers have experience with this type of assessment. Most states are combining Wagner-Peyser funds, TAA administrative funds, and different categories of WIA funds to support TAA case management (see fig. 12). Most states—38—reported using three or more different funding sources for TAA case management. Just four states reported that they relied exclusively on a single funding source; two said they used only Wagner-Peyser funds, and two said they used only TAA administrative funds. Officials from several local areas we visited said that within their local areas, they are increasingly taking the same approach to serving all dislocated workers, regardless of the programs in which they are participating. In a local area in Maine, for example, all case managers at the one-stop center—whether state ES or local WIA staff—have been cross-trained on the TAA and WIA programs. Any case manager can serve any dislocated worker, and dislocated workers receive the same case management services regardless of whether they are enrolled in the TAA program or the WIA dislocated worker program. A one-stop center in North Carolina that we visited supports its TAA specialist, an ES staff member, through several funding sources, including Wagner-Peyser grant funds, local WIA funds, national emergency grant funds, and TAA administrative funds. This staff member serves the TAA-eligible workers who come to the one-stop center, as well as some dislocated workers who are enrolled in WIA, and provides each one with similar case management services. In another local area in Pennsylvania, trade-affected workers initially meet with an ES staff member who determines their TAA eligibility and provides an orientation to the benefits available through the TAA program. They complete a set of case management activities, including assessment and development of a training plan, which is provided by a combination of ES and local WIA staff members and is required of all dislocated workers. Two local areas we visited that had recently experienced large trade-related layoffs relied on WIA’s national emergency grant funds to support case management services for TAA-eligible workers. A local area in North Carolina, for example, established a temporary one-stop center in a plant that was shut down as a result of trade, and used a portion of its national emergency grant funds to hire temporary ES staff members to help operate this center. A local area in Maine used some of its national emergency grant funds to temporarily hire peer support workers from among the workers affected by the trade-related layoff. These peer support workers provided a range of services, including outreach to affected workers, counseling, and skill assessment.
An official told us that affected workers are more likely to trust peer support workers than other case managers because they feel comfortable talking with a colleague who has been through the same layoff experience. In addition to providing case management for TAA-eligible workers, some states also use WIA funds to supplement TAA training funds, and often use the same lists of training providers for TAA as for WIA participants. For example, North Carolina has encouraged its local areas to use their WIA funds whenever possible to support the costs of TAA-eligible workers’ training. State officials feel their TAA training allocation is inadequate to serve the large number of trade-affected workers in the state. A local area in Texas reported that it sometimes combines TAA and WIA funds to pay for a TAA-eligible worker’s training, for example, when the worker’s training program costs more than the state’s cost limit for TAA training. Three states we visited also use national emergency grant funds to support training for TAA participants. According to our survey, 41 states have applied for national emergency grant funds to supplement their TAA training funds since 2001. In most states, workers are generally choosing from the same list of training providers whether they are TAA or WIA participants. Fourteen states reported that training programs approved for TAA participants must be on the state’s WIA Eligible Training Provider List, and an additional 23 states reported that most training programs approved for TAA participants are on the state’s list. While some states report making use of these other funding sources, some officials also told us that WIA’s performance measures create an obstacle to improved coordination between the programs. States and local areas are held accountable for the employment outcomes of workers who receive services through their WIA dislocated worker funds, including the proportion of participants who obtain employment and the difference between participants’ wages in their old and new jobs. States and local areas receive financial incentives and sanctions based on their ability to meet their goals on these performance measures. Officials in three states we visited reported that WIA performance measures create a disincentive to co-enroll TAA-eligible workers in WIA services. For example, an official in one state said local WIA administrators often perceive trade-affected workers as having high prelayoff wages but skills that are not readily transferable, and therefore as having little chance of replacing their prelayoff wages in a new job—one of several WIA performance measures. Local officials are reluctant to enroll TAA-eligible workers in WIA, out of concern that these workers will negatively affect their ability to meet their WIA performance goals. Information on TAA program results has historically been limited, but Labor is making efforts to gather more complete outcome data and to more accurately assess the program’s effectiveness. In 1999, Labor introduced a new participant outcomes reporting system that was designed to collect national information on TAA program outcomes and uses these outcomes to track program performance against national goals. However, in an earlier study we found that information captured by this reporting system was often incomplete and many states did not validate information reported to Labor. Labor has taken steps to improve the accuracy of this information by requiring states to use UI wage records to track outcomes. 
Some categories of workers, however, are not included in these wage records and most states do little to supplement wage record data with other data sources. As a result, program outcomes may be understated. To evaluate the effects of the TAA program, Labor completed a study of the program in 1993. However, because of methodological issues and recent reforms to the program, the study’s conclusions are of limited usefulness in assessing the current program. Labor recently initiated a new 5-year study and expects the first of several interim reports by mid-2005. Labor has taken steps to improve the accuracy of TAA program information captured by its participant outcomes reporting system, but weaknesses persist. In an effort to improve information on the TAA program, in fiscal year 1999 Labor introduced a new participant outcomes reporting system, the Trade Act Participant Report (TAPR), that was designed to collect national information on TAA program participants, services, and outcomes, such as employment, employment retention, and wages. States are required to submit quarterly summary reports on participants who are no longer receiving any TAA program services. In an earlier study, however, we found that some states reported incomplete data on program outcomes and failed to validate participant information reported to Labor. As a result, program information may have been inaccurate. States reported that they relied heavily on participant surveys to collect information on program outcomes such as employment and earnings and that participants often did not return these surveys. Furthermore, some states reported that they were unable to report more complete information because they lacked the resources to expand their data collection efforts to better capture program outcomes. Similarly, Labor’s Inspector General also found that information on participants and program outcomes collected in TAPR was inadequate for evaluating the program’s performance against national goals. In response to concerns about the reliability of data reported on TAA participants, Labor has taken steps to improve the information captured in its participant outcomes reporting system by incorporating wage records data, but some states may not be accessing all available wage data. In fiscal year 2001, Labor began requiring states to use UI wage records to report outcomes for TAA program participants. While wage records generally provide objective and accurate information to track workers’ employment and earnings, the data have limitations that may contribute to understating of program outcomes. For example, state wage records only capture information on workers who get jobs in that state and states cannot easily access wage record information from other states. As a result, states may not be able to provide outcome information for TAA program participants who gained employment in another state. To help track employment of TAA participants across state lines, some states are using the Wage Record Interchange System (WRIS), a data clearinghouse used under WIA that allows states to share their wage record data. Since June 2002, states could use WRIS for reporting TAA outcomes, but it is unknown how many states are using or plan to use this system. 
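To illustrate why outcomes drawn only from a state's own UI wage records can understate employment, the sketch below matches former participants against in-state wage records and, optionally, records shared by other states, as a WRIS-style exchange would allow. All record layouts, field names, and figures are hypothetical assumptions for illustration; this is not the TAPR or WRIS data format.

```python
# Hypothetical sketch of matching former TAA participants to UI wage records.
# Field names and data are illustrative only and do not reflect actual TAPR or
# WRIS layouts.

def match_outcomes(participants, in_state_wages, interstate_wages=None):
    """Return employment outcomes, flagging participants with no wage record match."""
    interstate_wages = interstate_wages or {}
    outcomes = []
    for p in participants:
        worker_id = p["id"]
        if worker_id in in_state_wages:
            outcomes.append({**p, "employed": True, "earnings": in_state_wages[worker_id]})
        elif worker_id in interstate_wages:
            # Found only through an interstate data exchange (WRIS-style sharing).
            outcomes.append({**p, "employed": True, "earnings": interstate_wages[worker_id]})
        else:
            # No match: the worker may be unemployed, self-employed, a federal or
            # postal employee, or working out of state; wage records cannot tell.
            outcomes.append({**p, "employed": False, "earnings": None})
    return outcomes


participants = [{"id": "A1", "name": "Worker 1"}, {"id": "B2", "name": "Worker 2"}]
in_state = {"A1": 9_500}        # quarterly earnings found in the state's own records
other_states = {"B2": 11_200}   # earnings found only through interstate sharing

with_sharing = match_outcomes(participants, in_state, other_states)
without_sharing = match_outcomes(participants, in_state)
print(sum(o["employed"] for o in with_sharing), "employed when interstate data are used")
print(sum(o["employed"] for o in without_sharing), "employed using in-state records alone")
```

In this made-up example, counting only the in-state match would report one of the two workers as employed rather than both, which is the kind of understatement described above for participants who find work in another state or in jobs not covered by wage records.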
While Labor officials told us that states are encouraged to use WRIS to obtain more complete employment and earnings information on TAA program participants, Labor could not provide information on how many states are actually using this data clearinghouse to track former TAA program participants because it does not have a mechanism in place to identify these states. Officials in four of the five states we visited reported that they are using WRIS to track program participants’ employment and earnings outcomes. Some individuals may not be captured by wage record data. Wage records, which cover about 94 percent of workers, do not include some categories of workers such as the self-employed, most independent contractors, military personnel, federal government employees, and postal service employees. Most states do little to supplement wage record data with other data sources despite the fact that such information can be reported to TAPR, and, as a result, program outcomes may be understated. Only 12 states reported that they collect data on outcomes such as employment, earnings, or employment retention beyond what is required for TAPR. Nine of these states reported collecting information on whether participants find jobs after they leave the program (see fig. 13). This information is generally collected through telephone interviews or mail surveys of workers. Officials from two of these states reported that this information is generally used as a local program management tool to gauge the effectiveness of training programs or providers rather than to collect more complete and accurate data for TAPR. In contrast, in a recent study of WIA outcomes, we found that 39 states collect additional data to more completely track the outcomes of WIA participants and to help them manage their programs locally. Labor tracks TAA program outcomes against national goals, but the TAA program has not met all of its goals in any given year. Since fiscal year 2000, Labor has used outcomes that states report to TAPR to track program performance against national goals related to employment, wages, and job retention. For example, performance goals set for fiscal year 2003 included having 78 percent of all participants find employment. While Labor has exceeded some of its goals in previous years, it has never met all of its goals in any given year. Furthermore, according to Labor’s outcome data, none of the TAA performance goals set for fiscal year 2003 were met (see table 3). In fiscal year 2004, Labor announced its new initiative to implement a reporting system that would collect and report program performance for all workforce programs administered by Labor, including TAA. This single system is intended to reduce barriers to greater service integration across federal workforce programs, and Labor also expects it will increase the reliability of its performance data by standardizing measurements such as employment, job retention, and earnings across all programs. The majority of outcomes data will still be collected from wage records. However, Labor officials also reported that states would be able to submit supplemental information on program participants whose employment status and wages are not captured in wage records. These supplemental data, however, will not be included in annual performance outcomes calculations. No information is currently available to accurately measure program effectiveness. However, Labor has recently taken steps to better evaluate the effect of TAA services on participants. 
While outcomes measures are an important component of program management in that they assess whether a participant is achieving an intended outcome—such as obtaining employment—they cannot, by themselves, measure whether the outcome is a direct result of program participation. Other influences, such as the condition of the local economy, may affect an individual’s ability to find a job as much or more than participation in an employment and training program. In order to determine whether participant outcomes are the effects of a program, rather than of other factors, it is necessary to conduct an impact evaluation. Labor last completed an evaluation of the TAA program in 1993 when it analyzed the impact of TAA services, particularly training, on participants’ employment, job retention and earnings outcomes. The study compared TAA participants with a sample of dislocated worker non-participants with similar prelayoff characteristics. According to the study’s findings, TAA program participants tended to have longer periods of joblessness than other dislocated workers. Furthermore the study found that among TAA program participants, certain participants—including women or those with limited education—experienced especially long periods of unemployment (see app. VI for an overview of demographic characteristics of recent TAA participants). However, methodological issues resulted in inconclusive findings regarding the impact of training on TAA program participants’ employment and earnings. In addition, Labor officials told us that because program benefits and services were significantly changed in 2002, the study’s conclusions are of limited use in assessing the current program. Labor initiated a new 5-year study of the TAA program in 2004, and while details of this study are still being determined, the study is expected to consist of three phases. The first phase will be a study of the initial implementation of the TAA Reform Act. The longer-term phases of the study include a quasi-experimental impact study and an in-depth study of program administration that will identify promising practices and data collection issues. The second phase of the study will measure the effects of program services such as training on participants’ employment, earnings, and employment retention. The current plans include collecting data from interviews and administrative records for both TAA program participants and a comparison group of UI claimants, which will be matched to participants using a technique that allows researchers to more readily identify appropriate comparison groups. According to Labor officials, the methodology expected to be used in this study to identify comparison groups is an improvement over the methodology used in the previous study and should provide them with more conclusive findings about the impact of TAA services on participants. Although this is a long-term study, several interim reports are expected. The first of several interim reports is anticipated in mid-2005, and Labor expects to issue the final report in 2009. International trade is at least partially responsible for the decline in manufacturing over the last several years in the United States. Workers affected by trade may face greater barriers to reemployment than workers laid off for other reasons, for example because trade-affected workers are often older than other dislocated workers. 
By providing training and extended income support, the TAA program is intended to help workers laid off because of international trade obtain reemployment. The TAA Reform Act of 2002 changed the program in several ways that were intended to improve and expand services for trade-affected workers. At this early stage in implementation, several changes appear to be helping trade-affected workers. The clearest positive effect so far is that trade-affected workers are enrolling in services sooner, because of the new time limit on Labor’s processing of TAA petitions and the new deadline for workers to enroll in training. It is too early to tell what the results of some changes will be, for example, how many workers will take advantage of the new wage insurance benefit. Meanwhile, states report that certain provisions of the Reform Act have presented implementation challenges. The new training enrollment deadline may be causing some workers to lose their eligibility for extended income support, making it more difficult for them to complete the training they may need to obtain reemployment at wages comparable to their prelayoff wages. The new enrollment deadline may also be preventing some workers from receiving thorough assessments of their training needs and enrolling in the most appropriate training. Furthermore, these difficulties may be heightened in the cases of very large layoffs. Some officials report that eligibility requirements for the new HCTC have increased their administrative workload by causing them to spend more of their resources issuing training waivers just to facilitate workers’ eligibility for the tax credit. Resources spent on issuing training waivers may be detracting from time invested in providing workers with needed job placement and training assistance. Furthermore, some find the eligibility criteria for the wage insurance program problematic, for example because the criteria require workers to lack easily transferable skills yet find reemployment without TAA-funded training. These eligibility criteria could be resulting in the denial of wage insurance payments to some workers who could benefit from the program. We recommend that Labor monitor issues related to the implementation of certain provisions of the TAA Reform Act that may have had unintended consequences for some workers, and propose legislative changes as deemed necessary. In particular, Labor should track over time the following: the ability of workers to meet the new training enrollment deadline and of states and local areas to provide appropriate assessments to all trade-affected workers within the deadline, especially when responding to very large layoffs, and whether the eligibility criteria for the new wage insurance program are resulting in the denial of services to some older workers who could benefit from the program. We provided a draft of this report to officials at Labor for their review and comment. In its comments, Labor did not raise any issues with our findings, conclusions, or recommendations. Labor provided technical comments, which we include as appropriate. Labor’s comments are reproduced in appendix VII. We are sending copies of this report to the Secretary of Labor, relevant congressional committees, and others who are interested. Copies will also be made available to others upon request. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report.
Other major contributors to this report are listed in app. VIII. We were asked to provide information on (1) how key provisions of the Trade Adjustment Assistance (TAA) Reform Act have affected program services, (2) what have been the challenges in implementing the TAA Reform Act’s new provisions, (3) whether demand for TAA training has changed, and how states are meeting this demand, and (4) what is known about what the TAA program is achieving. To address these questions, we conducted a Web-based survey of all 50 state workforce agencies that administer the TAA program and Puerto Rico. We conducted site visits to 5 states—Maine, North Carolina, Pennsylvania, Texas, and Washington— and interviewed state and local officials in each state. We reviewed data and documents from the U.S. Department of Labor (Labor) and other sources. We also interviewed officials from Labor, the AFL-CIO, the National Association of State Workforce Agencies, and the Congressional Research Service. To collect broad information on TAA Reform Act implementation and states’ management of their training funds, we surveyed state officials from the 50 states and Puerto Rico in March, 2004. Washington, D.C. was not surveyed because it did not have a TAA program. This structured survey was administered via e-mail and the Internet and had a 98 percent response rate, including responses from all 50 states. The survey was designed to obtain information on the following: Labor and state efforts to reach out to new categories of eligible workers such as secondary workers, the effect of new training enrollment deadlines on services to participants, and obstacles that states faced in implementing new provisions in the TAA Reform Act, including the Health Coverage Tax Credit and the wage insurance provision. The survey also included questions on other sources of funds used to support services for TAA participants and the extent to which states collect outcome data that is more up to date and accurate than the data required by Labor. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce other errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or were analyzed can introduce unwanted variability into the survey results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. For example, GAO survey specialists designed the questionnaire in collaboration with GAO staff with subject matter expertise. Then, the draft questionnaire was pretested with three state officials to ensure that the questions were relevant, clearly stated, and easy to comprehend. When the data were analyzed, a second, independent analyst checked all computer programs. Since this was a Web-based survey, respondents entered their answers directly into the electronic questionnaire. This eliminates the need to have the data keyed into a database, thus removing an additional source of error. 
We selected 5 states for site visits according to several criteria, including experience with large numbers of TAA participants in recent years, representation of a range of adversely affected industries, states recommended by Labor either as models in implementing TAA or as states facing implementation challenges, and geographic diversity (see table 4). In each state we interviewed state officials on topics including TAA Reform Act implementation, management of TAA training funds, and coordination between TAA and other federal programs. Combined, the 5 states constituted about 36 percent of the national total of TAA participants from fiscal years 2000 through 2002 (see fig. 14). We judgmentally selected two local areas in each state and visited a mix of urban and rural areas (see table 5). We met with local officials, program participants, employers, and workforce investment board members. We collected information on how local areas are implementing provisions of the TAA Reform Act and how they are coordinating Workforce Investment Act and TAA funds. We reviewed data from Labor on petitions, participants, services, performance, and expenditures from fiscal year 1999 to fiscal year 2003. For fiscal year 2003, we broke out data on petition-processing times between workers served prior to the TAA Reform Act and those served after implementation of the Reform Act in an attempt to isolate the effects of program changes. We assessed the reliability of key data by interviewing Labor officials, reviewing Labor documentation, and performing edit checks of computer-based data. We found some limitations in these data but judged the data to be sufficiently reliable for the purposes of our reporting objectives. In particular, some data on certified workers and on the number of workers entering training annually may have inaccuracies, but we believe these data to be sufficiently reliable for the purpose of demonstrating trends over time, the main focus of our reporting objective. Data that were used for background purposes and provided in app. VI were not independently verified. Workers whose petitions for certification of TAA eligibility are denied by the U.S. Department of Labor may seek judicial review of Labor’s decision by filing an appeal with the U.S. Court of International Trade. Workers may file such an appeal either after Labor’s negative determination on the initial petition or after Labor’s negative determination on a reconsideration of its determination. The U.S. Court of International Trade may affirm the action of the Department of Labor, set it aside in whole or in part, or return—termed remand—the case to Labor to take further evidence. Appendix III: Certified Workers, Benefit Recipients, and Expenditures The data used for this table are estimates of the number of workers certified as eligible for TAA, based on estimates of the number of affected workers submitted by companies at the time TAA petitions are filed with the Department of Labor. At the time petitions are submitted, companies may not know exactly how many workers will be affected. We use these estimates because the Department of Labor does not collect data on the number of workers ultimately certified. This figure is an underestimate of the total number of workers entering training, because some states do not capture all workers entering training in the data they submit to Labor. Includes costs of tuition, transportation, subsistence, and related expenses for all workers who received training during the year. 
States may pay some of these costs through funding sources other than TAA, such as WIA funds. Prior to fiscal year 2004, Labor awarded TAA training funds to states based on their requests throughout the fiscal year. In fiscal year 2004, Labor allocated 75 percent of available training funds among the states at the beginning of the fiscal year according to a formula. The amounts allocated to states at the beginning of fiscal year 2004 are their base allocations. Labor held the remaining 25 percent of available training funds in reserve to help states respond to large and unanticipated layoffs throughout the year. States are eligible to submit requests for 25 percent reserve funds only after they have expended 50 percent of their base allocations. In the table in appendix V, a parenthetical entry indicates that the state anticipates taking the step, such as obligating current year funds only for current year training costs, during fiscal year 2004. The survey was fielded in March 2004, therefore these results reflect steps states have taken during the first six months of fiscal year 2004 and steps states anticipate taking during the last six months of fiscal year 2004. Through the Trade Act Participant Report (TAPR), states regularly submit data to Labor on the demographic characteristics of TAA participants. The data provided below are for participants who completed program services or stopped receiving services between July 1, 2001, and June 30, 2002. These data include workers who received services under either or both the TAA program and the NAFTA-TAA program. Irene J. Barnett and Eric Clemons made significant contributions to this report in all aspects of the work throughout the assignment. In addition, Stuart Kaufman assisted in the design of the state survey, George Quinn Jr. assisted in the analysis of survey data, Ray Wessmiller assisted in the analysis of data collected from the Department of Labor, and Shana Wallace contributed to the development of the report’s overall methodology. Jessica Botsford and Richard Burkard provided legal support, and Corinna Nicolaou assisted in the message and report development. Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004. National Emergency Grants: Labor Is Instituting Changes to Improve Award Process, but Further Actions Are Required to Expedite Grant Awards and Improve Data. GAO-04-496. Washington, D.C.: April 16, 2004. Workforce Investment Act: One-Stop Centers Implemented Strategies to Strengthen Services and Partnerships, but More Research and Information Sharing is Needed. GAO-03-725. Washington, D.C.: June 18, 2003. Older Workers: Employment Assistance Focuses on Subsidized Jobs and Job Search, but Revised Performance Measures Could Improve Access to Other Services. GAO-03-350. Washington, D.C.: January 24, 2003. Workforce Investment Act: Better Guidance and Revised Funding Formula Would Enhance Dislocated Worker Program. GAO-02-274. Washington, D.C.: February 11, 2002. Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA’s Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002. Trade Adjustment Assistance: Experiences of Six Trade-Impacted Communities. GAO-01-838. Washington, D.C.: August 24, 2001. Trade Adjustment Assistance: Trends, Outcomes, and Management Issues in Dislocated Worker Programs. GAO-01-59. Washington, D.C.: October 13, 2000.
The Trade Adjustment Assistance (TAA) Reform Act of 2002 consolidated two programs serving trade-affected workers and made changes to expand benefits and decrease the time it takes for workers to get services. GAO was asked to provide information on (1) how key reform provisions have affected program services, (2) what have been the challenges in implementing new provisions, (3) whether demand for TAA training has changed and how states are meeting this demand, and (4) what is known about what the TAA program is achieving. Most workers are enrolling in services more quickly than in prior years, partly because of a new 40-day time limit Labor must meet when processing a request, or petition, for TAA coverage. Labor reduced its average petition-processing time from 107 days in fiscal year 2002 to 38 days in fiscal year 2003 after the Reform Act took effect. Also, most states reported that workers are enrolling in training sooner because of a new deadline requiring workers to be enrolled in training by the later of 8 weeks after petition certification or 16 weeks after a worker's layoff. However, this deadline may have negatively affected some workers--especially during large layoffs--as it does not always leave enough time to assess workers' training needs. States reported challenges implementing some new provisions of the TAA Reform Act. Officials in most of the states we visited reported an increased administrative workload from issuing training waivers to allow workers to qualify for the Health Coverage Tax Credit (HCTC)--over 40 percent more waivers were issued in fiscal year 2003 than in 2002. While officials in all the states we visited said workers are or are likely to be interested in the wage insurance provision (Alternative TAA, or ATAA) that supplements the wages of certain workers aged 50 and over, it is still unclear how many workers will take advantage of this benefit. However, some found the provision's eligibility criteria problematic, partly because they require workers to lack easily transferable skills yet find reemployment within 26 weeks of layoff. Demand for TAA training increased substantially in fiscal year 2002, prior to the implementation of the reforms. States have struggled to meet this higher demand with available TAA training funds, even though TAA training funds available nationally doubled between fiscal years 2002 and 2003. Most states have responded by using other federal employment and training resources. Information on TAA program results has been limited, but Labor is making improvements by requiring states to use wage records to track TAA outcomes. Labor also initiated a new, 5-year evaluation study.
From fiscal years 2005 through 2011, the physical condition of the Coast Guard’s legacy vessels was generally poor. A primary Coast Guard measure of a vessel’s condition—the operational percent of time free of major casualties—shows that the high endurance cutters, medium endurance cutters, and patrol boats generally remained well below target levels from fiscal years 2005 through 2011. For example, over this 7-year period, the operational percent of time free of major casualties averaged about 44 percent for the high endurance cutters and about 65 percent for the medium endurance cutters versus a target of 72 percent; and the patrol boats averaged approximately 74 percent versus a target of 86 percent. Other evidence, such as our review of vessel condition assessments and inspections the Coast Guard conducts of the legacy vessels, also shows that the condition of the legacy vessel fleet is generally declining. For example, a variety of Coast Guard assessments show that legacy vessels’ critical operating systems—such as main diesel engines—have been increasingly prone to mission-degrading casualties. In addition, Coast Guard senior maintenance officials and vessel crew members we interviewed noted increased maintenance challenges because of the advanced age of the legacy vessels. In particular, the maintenance managers for both the high endurance and medium endurance cutters reported that the performance of critical systems on these legacy vessel classes has become increasingly unpredictable and refurbishments of these vessel classes’ least reliable systems have brought limited returns on the investments. Maintenance officials and vessel crew members also reported devoting increasing amounts of time and resources to troubleshoot and resolve maintenance issues because some systems and parts on these legacy vessel classes are obsolete. The Coast Guard has taken two key actions to improve the condition of its legacy vessels. First, in 2009, the Coast Guard reorganized its maintenance command structure to focus on standardization of practices. Under this reorganization, the Coast Guard eliminated its two Maintenance and Logistics Commands and replaced them with a centralized command structure—the Surface Forces Logistics Center—whereby a single product line manager oversees the maintenance of similar classes of vessels. Coast Guard officials reported that this change was made to enable better oversight of the condition of entire classes of the vessel fleet, reduce the workload on vessel crews by providing centralized support for procurement of replacement parts, and implement centralized maintenance plans to address commonly occurring casualties. Second, Coast Guard officials also reported that the Coast Guard was on schedule to complete a 10-year, almost half-billion dollar set of sustainment projects to refurbish selected patrol boats and upgrade medium endurance cutters, known as Mission Effectiveness Projects, which are intended to improve legacy vessel operating and cost performance. Our July 2012 report provides additional information regarding these actions but, as noted in the report, the condition of these legacy vessels continues to decline despite these efforts. Expenditures for the two key types of legacy vessel annual depot-level maintenance—scheduled and unscheduled maintenance—declined from fiscal year 2005 to fiscal year 2007, and then rose from fiscal year 2007 to fiscal year 2011.
For example, scheduled maintenance expenditures rose from about $43 million in fiscal year 2007 to about $70 million in fiscal year 2011. Coast Guard officials attributed the increase in scheduled maintenance expenditures to better identifying maintenance needs, increasing the prioritization of completing all scheduled maintenance, and the receipt of supplemental funding. In contrast, unscheduled maintenance expenditures varied by vessel class from fiscal years 2005 through 2011, but the high endurance cutter fleet consistently incurred the greatest share of unscheduled maintenance expenditures. For example, high endurance cutters accounted for 86 percent of all unscheduled maintenance expenditures in fiscal year 2011. Coast Guard officials attributed the comparatively high unscheduled maintenance expenditures to the high endurance cutters’ advanced age and size. According to Coast Guard officials, Standard Support Levels are established when a vessel class enters service or undergoes a service life extension program. For example, the Coast Guard reset the Standard Support Level for the high endurance cutters after conducting a service life extension program between 1987 and 1992—the Fleet Renovation and Modernization Program—but has not reset the Standard Support Levels for the medium endurance cutters or patrol boats. Coast Guard officials indicated that the Coast Guard increases Standard Support Levels using non-pay inflation, but it has not done so every year. Coast Guard officials also noted that supplemental funding had been critical to enable the Coast Guard to fund necessary maintenance for the legacy vessel fleet. Our July 2012 report provides further information regarding the Coast Guard’s annual depot-level maintenance expenditures.
However, we assessed the cost estimate as being not fully accurate because Coast Guard officials could not provide us with documentation that would allow us to assess the reliability of the historical data used, the accuracy of the calculations, the relationship of the data to the historical contractor bids, or the final estimates for all maintenance costs. To address these issues, in our July 2012 report, we recommended that the Secretary of Homeland Security direct the Commandant of the Coast Guard to ensure that the Coast Guard’s annual depot-level maintenance cost estimates conform to cost estimating best practices. DHS concurred with this recommendation and described actions the Coast Guard has taken or plans to take, but these actions may not fully address the intent of this recommendation. For example, DHS noted that given current fiscal constraints, the Coast Guard will focus on improvements that do not require additional resources. While we agree that federal resources are limited, aligning the cost estimating process for legacy vessel maintenance with best practices would not necessarily require an increased investment of resources. Rather, having a well-documented cost estimating process and using accurate historical data should enable the Coast Guard to operate more efficiently. The operational capacity of the Coast Guard’s legacy vessel fleet declined from fiscal years 2006 through 2011. In particular, while performance varied across the legacy vessel classes, two key Coast Guard metrics—operational hours and lost cutter days—show that the legacy vessels did not meet their operational capacity targets and lost considerable planned operational time. For example, the high endurance cutters and 210-foot medium endurance cutters did not meet any of their operational hour targets from fiscal years 2006 through 2011, and the 270-foot medium endurance cutters met their targets only in fiscal year 2008. Specifically, operational hours for the high endurance cutters declined by about 32 percent from fiscal year 2008 to 2011, and the combined operational hours of the 210-foot and 270-foot medium endurance cutters declined nearly 21 percent from fiscal year 2007 to fiscal year 2011. In addition, Coast Guard data show the high and medium endurance cutters, collectively, averaged about 618 lost cutter days per year from fiscal years 2006 through 2011. Further, the number of lost cutter days for the high endurance cutters has been nearly equivalent to three high endurance cutters being out of service for an entire year in each of the last 3 fiscal years. Moreover, lost cutter days for both the 210-foot and 270-foot medium endurance cutters combined more than doubled, from 122 lost cutter days in fiscal year 2006 to 276 lost cutter days in fiscal year 2010. Coast Guard headquarters officials reported that the declining operational capacity of its legacy vessel fleet—particularly the high and medium endurance cutters—has been a prime contributor to the Coast Guard’s declining ability to meet mission requirements and intercept threats beyond U.S. territorial waters. The Naval Engineering Manual defines remaining service life as the time period during which no major expenditures will be required for hull and structural repairs or modernizations, or for machinery or system modernizations based solely on the vessel’s capability to meet existing mission requirements.
The expected continued decline in the condition of the legacy vessels over their remaining service lives will also increase the vessel fleet’s operational capacity gap because the Coast Guard will not receive sufficient numbers of replacement vessels during this time period to make up for the lost capacity. The ongoing delivery of replacement vessels is expected to help mitigate the existing operational capacity gap for the legacy high endurance cutter and patrol boat fleets. However, Coast Guard officials reported, and our analysis of Coast Guard documents confirms, that the medium endurance cutter fleet will be most affected by delays in delivery of replacement vessels. The Coast Guard is refurbishing its medium endurance cutters through the Mission Effectiveness Project to increase these cutters’ reliability and reduce longer-term maintenance costs, and third-party assessments show that the performance of those medium endurance cutters that have completed the project has improved. Even if the most optimistic projections were realized, though, and the Mission Effectiveness Project was to extend the medium endurance cutters’ service lives by 15 years, the medium endurance cutters would remain in service increasingly beyond the end of their originally-expected service lives before full deployment of their replacement vessels—the offshore patrol cutters. In particular, according to current plans, some of the 270-foot medium endurance cutters are to remain in service as late as 2033—up to 21 years beyond the end of their originally-expected service lives—before they are replaced. Coast Guard officials reported that a further refurbishment of the medium endurance cutters will be necessary to meet operational requirements and that the Coast Guard is in the early stages of developing plans for addressing the expected gap between remaining medium endurance cutter fleet service lives and the delivery of the replacement offshore patrol cutters. Coast Guard efforts to sustain its legacy vessel fleet and meet mission requirements until the replacement vessels are delivered are also challenged by uncertainties regarding the future mix of vessels, as well as the implementation of a rotational crew concept for the replacement vessel for the high endurance cutters, known as the national security cutter. The Coast Guard’s fiscal year 2013 to 2017 5-year Capital Investment Plan does not allocate funds for the acquisition of the last two replacement national security cutters, as called for by the program of record, and it is unclear how this could affect the decommissioning schedule of the high endurance cutters, the last of which the Coast Guard currently plans to decommission in fiscal year 2023. The Coast Guard has established operational hour targets for the number of hours its vessels are expected to conduct operations or missions each fiscal year and uses these targets to inform planning decisions, such as setting performance targets and corresponding resource allocations. Although senior Coast Guard headquarters officials reported considering various factors when setting overall mission performance targets annually, these officials reported doing so based on the assumption that vessel class assets will achieve 100 percent of their operational hour targets. Our analysis of Coast Guard data, though, makes it clear that the Coast Guard’s legacy vessel fleet has increasingly fallen below operational hour targets in recent years, and this trend is expected to continue.
In addition, Coast Guard officials reported that the decline in legacy vessel operational capacity has challenged the Coast Guard’s ability to meet its mission performance targets. Further, Coast Guard operational commanders reported taking actions to mitigate the effect of declining legacy vessel capacity, such as diverting vessels tasked to other missions to help complete operations. Nevertheless, the Coast Guard has not revised legacy vessel operational hour targets because, according to Coast Guard officials, doing so would lower its mission performance targets. However, these targets have gone unmet because of the declining operational capacity of the legacy vessel fleet. Because it sets mission performance targets and allocates resources on the assumption that legacy vessels will achieve 100 percent of operational hour targets, the Coast Guard’s allocation of resources is not realistic. Further, because the Coast Guard uses vessels’ operational hour targets to set agency-wide performance targets and to allocate resources, consistent achievement of its performance targets is at increased risk. In our July 2012 report, we recommended that the Secretary of Homeland Security direct the Commandant of the Coast Guard to adjust legacy vessel fleet operational hour targets to reflect actual capacity, as appropriate by class. DHS did not concur with this recommendation and noted, among other things, that reducing the operational hour targets would fail to fully utilize those assets not impacted by maintenance issues. We disagree with DHS’s position because, as noted in the July 2012 report, while senior Coast Guard officials reported that the Coast Guard adjusts its mission performance targets annually, it does not also adjust legacy vessel operational hour targets annually. These officials also stated that the Coast Guard’s mission performance targets are based on each vessel class’s capacity, with the assumption that each vessel will operate at 100 percent of its planned operating time. Thus, we do not believe that reducing the operational hour targets would result in a failure by the Coast Guard to fully utilize assets not impacted by maintenance challenges and continue to believe that this recommendation has merit. Chairman LoBiondo, Ranking Member Larsen, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For questions about this statement, please contact Stephen L. Caldwell at (202) 512-9610 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Christopher Conrad (Assistant Director) and Michael C. Lenington. Additional contributors include Jason Berman, Chloe Brown, and Lara Miklozek. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the condition of the Coast Guard's legacy vessel fleet, and challenges the Coast Guard faces in sustaining these vessels and meeting mission requirements. The Coast Guard, within the Department of Homeland Security, is the principal federal agency responsible for maritime safety, security, and environmental stewardship. The legacy vessel fleet is critical for executing Coast Guard missions, which include defense operations; search and rescue; and securing ports, waterways, and coastal areas. The comments will focus on the legacy 378-foot high endurance cutters, 270-foot and 210-foot medium endurance cutters, and 110-foot patrol boats, and are based on findings from the report we released in July 2012. This testimony summarizes the findings of our July 2012 report and addresses (1) how the physical condition of the Coast Guard's legacy vessel fleet changed from fiscal years 2005 through 2011, and key actions the Coast Guard has taken related to the physical condition of its legacy fleet; (2) key annual maintenance expenditure trends for the legacy vessel fleet, and the extent to which the Coast Guard's cost estimating process has followed established best practices; and (3) the operational capacity of the legacy vessel fleet and the extent to which the Coast Guard faces challenges in sustaining the legacy vessel fleet and meeting mission requirements. For information, contact Stephen L. Caldwell at (202) 512-9610 or [email protected]. From fiscal years 2005 through 2011, the physical condition of the Coast Guard's legacy vessels was generally poor. A primary Coast Guard measure of a vessel's condition--the operational percent of time free of major casualties--shows that the high endurance cutters, medium endurance cutters, and patrol boats generally remained well below target levels from fiscal years 2005 through 2011. The Coast Guard has taken two key actions to improve the condition of its legacy vessels. First, in 2009, the Coast Guard reorganized its maintenance command structure to focus on standardization of practices. Under this reorganization, the Coast Guard eliminated its two Maintenance and Logistics Commands and replaced them with a centralized command structure--the Surface Forces Logistics Center--whereby a single product line manager oversees the maintenance of similar classes of vessels. Coast Guard officials reported that this change was made to enable better oversight of the condition of entire classes of the vessel fleet, reduce the workload on vessel crews by providing centralized support for procurement of replacement parts, and implement centralized maintenance plans to address commonly occurring casualties. Second, Coast Guard officials also reported that the Coast Guard was on schedule to complete a 10-year, almost half-billion dollar set of sustainment projects to refurbish selected patrol boats and upgrade medium endurance cutters, known as Mission Effectiveness Projects, which are intended to improve legacy vessel operating and cost performance. Expenditures for the two key types of legacy vessel annual depot-level maintenance--scheduled and unscheduled maintenance--declined from fiscal year 2005 to fiscal year 2007, and then rose from fiscal year 2007 to fiscal year 2011. Further, annual depot-level maintenance expenditures often exceeded the Coast Guard's budgeted funds for depot-level maintenance for the legacy vessels--known as Standard Support Levels--from fiscal years 2005 through 2011.
Our review found that the Coast Guard's process for estimating legacy vessel annual depot-level maintenance costs does not fully reflect relevant best practices. GAO's Cost Estimating and Assessment Guide states that a high-quality and reliable cost estimate includes certain best practice characteristics. We determined that the three characteristics relevant to the Coast Guard's cost estimation process are that the process should be (1) well-documented, (2) comprehensive, and (3) accurate. The operational capacity of the Coast Guard's legacy vessel fleet declined from fiscal years 2006 through 2011. In particular, while performance varied across the legacy vessel classes, two key Coast Guard metrics--operational hours and lost cutter days--show that the legacy vessels did not meet their operational capacity targets and lost considerable planned operational time. Coast Guard efforts to sustain its legacy vessel fleet and meet mission requirements until the replacement vessels are delivered are also challenged by uncertainties regarding the future mix of vessels, as well as the implementation of a rotational crew concept for the replacement vessel for the high endurance cutters, known as the national security cutter. The Coast Guard's fiscal year 2013 to 2017 5-year Capital Investment Plan does not allocate funds for the acquisition of the last two replacement national security cutters, as called for by the program of record, and it is unclear how this could affect the decommissioning schedule of the high endurance cutters, the last of which the Coast Guard currently plans to decommission in fiscal year 2023.
The 1986 amendments to RCRA established the LUST Trust Fund to, among other things, finance the cleanup of petroleum releases from underground storage tanks. Until recently, states could use these funds only for cleanup and related administrative and enforcement activities. Within this restriction, trust fund money could be used for the following general categories of activities: testing tanks for leaks when one is suspected; investigating a site to evaluate the source and extent of petroleum contamination; assessing the number of individuals that may have been exposed to petroleum contaminants and the seriousness of exposure, and estimating resulting health risks; cleaning up contaminated soil and water; providing safe drinking water to residents at the site of a tank leak; providing for temporary or permanent relocation of residents; and providing reasonable and necessary administrative and planning expenses directly related to these activities. The Energy Policy Act of 2005 (the 2005 Act), enacted in August 2005, expanded the permitted uses of the LUST Trust Fund. It authorizes states to use a portion of their LUST Trust Fund money for inspections and other leak prevention purposes. Furthermore, the 2005 Act authorizes appropriations from the LUST Trust Fund through fiscal year 2011 of $555 million per year for a variety of activities—including release prevention and inspections—in addition to previously authorized purposes. This annual amount includes $200 million for cleanups of releases from leaking underground storage tanks; $200 million for the cleanup of releases of oxygenated fuel additives from such tanks; $100 million for activities including onsite inspections, groundwater protection, and enforcement; and $55 million for delivery prohibition, operator training, and release prevention and compliance. An additional $50 million per year is authorized from the general fund to cover administrative expenses and other activities. Net revenue to the LUST Trust Fund from taxes on petroleum products totaled approximately $190 million in fiscal year 2005. The 2005 Act also included several other provisions regarding inspections, operator training, and financial responsibility, among other things. Some of these provisions impose ongoing requirements on states. For example, the inspection provision requires each state receiving federal funding to inspect all of its regulated underground storage tanks at least once every 3 years, beginning after the state has inspected tanks that have not been inspected since December 1998. Effective February 2007, the 2005 Act directs EPA to require that each state receiving federal funds either (1) require additional, or secondary, structures that would help contain a release (secondary containment) for new and replaced underground storage tanks located near sources of drinking water or (2) require evidence of financial responsibility for tank manufacturers’ and installers’ certification. This coverage would provide for the costs of cleanup directly related to releases caused by improper tank manufacture or installation. The 2005 Act also extended until 2011 the tax on petroleum products that capitalizes the federal LUST Trust Fund. Under EPA policy, except in rare circumstances and in Indian Country, states will address underground storage tank releases that are financed by the LUST Trust Fund under an appropriate cooperative agreement with EPA.
EPA will undertake a cleanup only when (1) there is a major public health or environmental emergency, (2) the state is unable to respond, and (3) no responsible party is able or willing to provide an adequate and timely response. In these circumstances, EPA’s involvement is to be limited to stabilizing the immediate situation, with the expectation that further cleanup will be conducted by the state under its cooperative agreement with the agency. States are responsible for overseeing cleanup work performed by the party responsible for the contamination and for performing the cleanup at sites where no responsible party can be found. In addition to the LUST Trust Fund, federal money from EPA’s Brownfields program can be used to clean up sites contaminated by petroleum under certain circumstances. In general, Brownfields grants are limited to sites whose “expansion, redevelopment, or reuse of which may be complicated by the presence or potential presence of a hazardous substance, pollutant, or contaminant.” Only certain governmental organizations, nonprofit organizations, and nonprofit educational institutions are eligible for Brownfields cleanup grants. In fiscal year 2005, EPA provided eligible entities with about $22.3 million in Brownfields grants for cleaning up sites contaminated with petroleum. About $4.0 million of these grants were awarded for direct cleanup work, $13.3 million for site assessments, and $5.0 million for revolving loan fund programs. According to data collected from the states and reported by EPA, EPA and states have made progress in cleaning up releases from underground storage tanks. These data show that of the almost 450,000 releases confirmed as of fiscal year-end 2005, cleanups had been initiated for about 93 percent and completed for about 74 percent. Table 1 shows key tank- related data elements reported by EPA as of September 30, 2005, and provides definitions for those data elements. As cleanups have progressed, methyl tertiary-butyl ether (MTBE)—a gasoline additive designed to reduce emissions and raise octane—has continued to be detected in groundwater used for drinking water supplies. In some cases, MTBE was added to gasoline to fulfill requirements set in the 1990 Clean Air Act Amendments to reduce certain types of emissions. However, because MTBE dissolves easily in water and does not cling to soil very well, it migrates faster and farther through the ground than other gasoline components, thus making it more likely to contaminate public water systems and private drinking water wells. MTBE’s health effects have not been conclusively established, but the federal government has determined it to be a potential human carcinogen. The effects of exposure to MTBE include headaches; eye, nose, and throat irritation; coughs; nausea; dizziness; and disorientation. Low levels of MTBE can make drinking water supplies undrinkable due to its offensive taste and odor. Because of uncertainties about MTBE’s health effects, EPA has not set a national standard for MTBE in drinking water. Some states have set their own limits on allowable levels of MTBE in drinking water, and some have banned its use in gasoline sold in the state. The Congress also took action through the 2005 Act to reduce the use of MTBE in gasoline by eliminating the requirement from the 1990 Clean Air Act Amendments that led to the use of MTBE to reduce emissions. 
States reported that completing the cleanup of approximately 54,000 known releases from leaking underground storage tanks would likely require substantial amounts of public funds from state and federal resources. The public cost of cleaning up releases from tanks without a viable owner, as well as the number of releases in states’ cleanup backlogs that lack a viable owner, is not fully known. In addition to the costs associated with known releases, states expect that they will use public funds to clean up a substantial number of releases that they identify within the next 5 years. States reported that cleaning up known releases from leaking underground storage tanks would cost an estimated $12 billion in public funds from state and federal sources. This estimate reflects the amount of public funds that states expected it would cost to clean up approximately 54,000 known releases. States were unable to estimate the cost of cleaning up more than another 8,000 releases whose cleanup will require at least some public funds. We asked states to exclude from their estimates any money spent prior to September 30, 2005, to clean up these releases. As figure 1 illustrates, states reported that a substantial amount of the public costs to clean up these releases had not yet been incurred. States reported that nearly half of these releases will require $100,000 or more to fully clean up, with about 5 percent requiring $500,000 or more. Just over half of the approximately 117,000 releases that states reported in our survey had not yet been fully cleaned up will be cleaned up using at least some public funds. Tank owners or operators will pay the entire costs to clean up another 34 percent of these 117,000 releases, according to state officials. States reported that they did not know whether any public funds would be used to clean up most of the remaining 13 percent of these releases or whether tank owners or operators alone would pay for their cleanup. The percentage of releases that states reported would be cleaned up using at least some public funds varied widely by state, as illustrated in figure 2. Some states expected all releases in their backlogs to be cleaned up using at least some public funds, while other states did not expect public funds to be used to clean up any releases in their current backlog. The approach that different states use regarding who pays for the cleanup of leaking underground storage tanks can affect the percentage of releases in a state that are cleaned up using at least some public funding. One such approach is whether a state has chosen to set up a financial assurance fund that provides financial responsibility coverage for tank owners. For example, in North Dakota, where nearly all tanks are covered by the state’s fund, the state expects that more than 95 percent of releases in its current backlog will be cleaned up using at least some public funding. Similarly, state laws addressing when a specific owner or operator is considered to be responsible for a release can affect who pays for cleanups. For example, Michigan program officials previously told us that the state’s causation standard exacerbates the funding problem for tanks without a viable owner because it requires that the state prove that the present owner/operator is responsible for a site’s contamination before it can be held responsible for cleanup. Proving responsibility becomes difficult in cases where releases have occurred in the past and ownership of the property has changed. 
If responsibility cannot be established, the state must then fund any cleanup of the site. Although EPA estimates that a release costs about $125,000 on average to clean up, the cost can vary based on several factors, including the extent of contamination, the cleanup method selected, and the presence of MTBE or groundwater contamination. In our survey, we asked states about the average cost in public funds to clean up both releases with MTBE contamination and releases that have contaminated groundwater. We also asked for the average cost in public funds to clean up all releases. With regard to releases involving MTBE, EPA has reported that the additional cost for cleaning up these releases varies widely, from no additional cost to a substantial increase, depending on the history of the release. States’ survey responses generally corresponded with this reported variation. Twenty-nine states reported estimates of average public costs for cleaning up releases with MTBE contamination and for all releases. Most of these states reported that cleaning up releases with MTBE contamination costs the same or more than cleaning up an average release in the state. However, estimates of the cost difference varied widely among states. EPA has stated that releases that have contaminated groundwater are generally more complicated and more expensive to clean up than releases that have not. In our survey, 34 states provided us with estimates for the average public cost to clean up all releases as well as the average public cost to clean up releases that have contaminated groundwater. States’ estimates varied widely: about 60 percent of these states reported that it was more expensive to clean up releases involving groundwater contamination, while about 40 percent reported that it cost the same. The full extent of releases from tanks without a viable owner is unknown. While states reported that about 11 percent of the approximately 117,000 releases that have not yet been fully cleaned up came from such tanks, the actual number could be much higher for two reasons. First, 11 states reported that they did not know how many of the releases in their backlogs were from tanks without a viable owner. Second, 17 states reported that there were approximately 4,000 releases from tanks for which they had not yet determined whether a viable owner exists. The public cost of cleaning up releases from tanks without a viable owner is also not fully known. While 26 states and the District of Columbia estimated that it would cost a total of $2.7 billion to complete the cleanups of known releases from tanks without a viable owner, 21 states responded that they did not know the cost, and 2 states did not respond to the question. Because most states reported that they clean up such releases using public funding, it is likely that many of the known releases from tanks without a viable owner will be cleaned up using at least some public money. Nearly all states reported to us in our survey that they use public funding to clean up releases from tanks without a viable owner. Six states reported that they had a state fund dedicated to tanks without a viable owner, and other states without such dedicated funds primarily reported that they use resources from other types of state funds, such as financial assurance funds, or from the federal LUST Trust Fund, as illustrated in figure 3. 
However, four states reported that they may wait until the property on which the leaking tank is located is purchased and rely on the new owner to clean the site up. States may have releases from tanks without a viable owner in their backlog because the owner or operator responsible for the tank failed to maintain adequate financial responsibility coverage. Maintaining adequate financial responsibility coverage ensures that money will be available to clean up releases from underground storage tanks. This money, in turn, contributes to timely completion of cleanup and thus reduces the risk to human health and the environment posed by releases that are not cleaned up in a timely manner. We asked states about the number of cases they had encountered in the past 5 years in which tank owners did not have adequate financial responsibility coverage. In responding, states used somewhat different definitions of what constituted inadequate financial responsibility coverage. In general, states that we talked with more in-depth about financial responsibility said they counted cases as having inadequate coverage when an owner or operator either (1) had not maintained financial responsibility coverage or (2) had maintained coverage but did not have proof of coverage at the time the state requested it. Twenty-three states reported cases of inadequate coverage in the past 5 years, while only 7 states and the District of Columbia reported no cases; 19 other states reported that they did not know the number of cases involving inadequate coverage. The number of cases involving inadequate financial responsibility coverage may indicate that at least some public funds will be used to clean up a release that otherwise would have been paid for by a responsible party. For example, according to Florida officials, the state has cleaned up about 350 sites annually in past years using public funding. Of these sites, approximately three to four sites per year involved responsible parties that did not maintain adequate financial responsibility coverage. In our survey, Florida estimated an average cost of $380,000 in public funds to fully address each release requiring public funds. Consequently, Florida may have spent more than $1 million per year on such sites in the past. Florida officials noted that the state attempts to recover these funds from the tank owners but indicated that such efforts have not always been successful in the past. Officials in three additional states—New Jersey, Texas, and Utah—also told us that public funds could be used in cases involving inadequate financial responsibility coverage, although they did not know the number of times public funds had been used in these types of situations in the past. Checking financial responsibility coverage—for example, verifying during a site inspection that an owner or operator has the required paperwork to demonstrate coverage—helps to ensure that owners or operators maintain adequate coverage as required by federal law. Some options that owners or operators can choose for coverage either require annual updates or are often renewed annually. For instance, owners or operators that self-insure, or choose to demonstrate that they have sufficient assets to cover costs resulting from a release, must prepare an annual letter with financial information supporting their ability to pay. Similarly, owners or operators that choose private insurance for financial responsibility coverage must pay annual premiums to maintain coverage. 
However, EPA does not provide states specific guidance on whether or how frequently states should engage in routine verification of financial responsibility coverage. Most states reported to us that they attempted to check financial responsibility coverage on a regular basis, but only about one-third of the states reported that they required annual proof that tank owners or operators were maintaining coverage. The remaining states generally reported that they checked this coverage less often or not at all (see fig. 4). Do not check (7) Other (9) States that do not check financial responsibility coverage on an annual basis may not know if owners or operators are maintaining required coverage. For example, nearly half of the states that do not annually check financial responsibility coverage did not know the number of cases of inadequate coverage that had occurred in their state in the past 5 years. Forty-seven states and the District of Columbia reported that they anticipate identifying about 37,000 releases over the next 5 years. Of these 48 respondents, 43 reported that they expect to spend public funds to clean up a total of about 16,700 of these releases, 2 reported no expected use of public funds, and 3 were uncertain. Thirty-nine of the 43 respondents that expected to spend public funds to clean up future releases also provided estimates of the average public cost to clean up releases in their state. Using these estimates, we determined that the total cost to clean up releases projected to be identified in the next five years in these states could be around $2.5 billion. States also reported that, overall, the proportion of releases cleaned up using public funds is likely to decline in the future. That is, together they anticipate using public funds to clean up a higher percentage of releases in their current backlog than of releases they expect to identify in the next 5 years (see fig. 5). Some states that do not use their financial assurance funds to provide financial responsibility coverage for newly identified releases, such as Florida and Arizona, expect particularly sizable declines. These states expect that owners or operators will use private sources of financial responsibility coverage to pay for the cleanup of most releases identified in the next 5 years. States’ responses also indicate that, together, they expect to identify somewhat fewer releases per year in the next 5 years, on average, than in 2005. Forty-seven states and the District of Columbia provided responses in our survey to questions about releases they identified in 2005 and about new releases they project they will identify in the next 5 years. In 2005, these states confirmed a combined 8,000 releases, compared with their projections of an average of about 7,400 releases per year for the next 5 years. In general, state officials told us that they based their projections on recent trends, although officials in a few states specifically noted that they expected to identify fewer releases in the future in part because of tank and equipment upgrade requirements or other prevention measures. Most states use financial assurance funds to pay for cleaning up releases from underground storage tanks, with most of the revenues coming from state gasoline taxes. In several of these states, financial assurance funds limit the number of cleanups they perform based on funding availability. 
Under EPA guidance, EPA officials are responsible for determining whether a financial assurance fund is financially sound, that is, if it provides reasonable assurance that funds are available to pay for cleanup costs. The agency recently began collecting information from states to determine the soundness of their financial assurance funds, but this effort has had limited usefulness. Lack of timely cleanup is a concern because the longer pollution from releases is left in place, the greater the potential for it to spread, further placing human health and the environment at risk. States reported that they primarily use financial assurance funds to pay the costs of cleaning up leaks from underground storage tanks. These funds accounted for $1.032 billion, or 96 percent, of the estimated $1.076 billion from all state sources to clean up tank releases in 2005, according to our survey results. State financial assurance funds generally pay for cleaning up releases from tanks whose owners participate in the assurance funds to satisfy federal financial responsibility requirements. Figure 6 shows the state sources of expenditures for cleanup costs in 2005. As shown in figure 6, financial assurance funds can be divided into two types: those that currently provide financial responsibility coverage, and those that used to but no longer do so. Most states have, or have had, financial assurance funds. As of September 30, 2005, 37 states had funds that met federal requirements for financial responsibility, according to EPA; an additional 6 states had such funds in the past but these funds no longer provided coverage for new releases; and 7 states and the District of Columbia have never had financial assurance funds that were approved by EPA to provide financial responsibility coverage (see fig.7). The funds in each of the six states that stopped providing financial responsibility coverage for new releases did so after a certain deadline. Tank owners and operators in these states now demonstrate financial responsibility coverage primarily through private insurance, according to state officials. However, as recently as fiscal year 2005, some of these state funds were still paying out large amounts to clean up releases. In fact, over one-fifth of states’ public spending to clean up releases from underground storage tanks in 2005 came from these financial assurance funds. For example, although Florida’s fund last provided financial responsibility coverage for new releases on December 31, 1998, it is still responsible for cleaning up approximately 12,000 sites, and it spent almost $150 million on cleanups in 2005. Michigan’s fund, however, no longer provided financial responsibility coverage after June 1995 because it had insufficient funds to pay existing and future claims. Michigan state officials reported that several other funds provided cleanup money for underground storage tanks in 2005, including the Clean Michigan Initiative Bond Fund. This fund can be used to pay for many activities, such as waterfront improvements and cleanup of contaminated lake and river sediments. Seven states and the District of Columbia have never had funds that provided financial responsibility coverage. In these states, tank owners and operators use other ways of demonstrating financial responsibility coverage, primarily private insurance and self-insurance, according to our survey. 
While they never had financial assurance funds, some of these states have provided cleanup funds to address releases from underground storage tanks. In fact, four of these states reported paying for such cleanups from state sources in 2005. Delaware, for example, reported spending $1 million in 2005 from a reimbursement fund for 240 sites. Other states assist or have assisted owners and operators with cleanup by operating insurance-type mechanisms. In the state of Washington, for example, the state’s reinsurance program helps owners and operators of underground storage tanks obtain affordable pollution liability insurance by assuming part of the risk for each loss and insulating the primary insurer from losses greater than a certain amount. In the case of a $1,000,000 policy, for example, Washington’s reinsurance program is responsible for settlements over $75,000. Table 2 summarizes some of the key distinctions among state approaches to ensuring that tanks are cleaned up. At federal fiscal year-end 2005, state financial assurance funds in 39 states held unexpended balances of approximately $1.3 billion, according to estimates reported by state officials. Four states did not have or did not report a fund balance. As shown in figure 8, individual states’ fund balances ranged as high as about $207 million. The top five states—Pennsylvania, Florida, Texas, New Jersey, and California—accounted for more than half of the total balance in these funds. Because many state assurance funds also pay to clean up releases from other types of tanks—such as aboveground storage tanks—the entire balance of any one state’s financial assurance fund may not be available for cleaning up underground storage tanks. Overall, states reported in our survey that financial assurance funds accrued revenues of about $1.4 billion in 2005 from a variety of sources. State taxes on gasoline and other fuels accounted for about $1.3 billion (92 percent) of this income to state funds. Such taxes are generally considered to be paid by the consumer. States reported that financial assurance funds also received revenues totaling about $42 million (3 percent) from fees paid by tank owners and operators. Only four states’ financial assurance funds collected tank fees but not gasoline taxes in 2005, and tank fees were the primary source of revenues for only three of these funds, according to our survey. In addition to gasoline taxes and fees on tanks, states also reported small amounts of revenues from sources such as interest and cost recovery. Whether a state financial assurance fund collects revenue from various sources in a given year can depend on its balance. Many states have maximum limits on the overall balance of their funds. Generally, if a state’s maximum limit is reached, its fund ceases collecting revenues from one or more of its revenue sources until the fund balance drops below a minimum threshold. For example, Idaho has not collected certain fees for its fund since 1998 when the balance of its fund exceeded $30 million, according to a state official. The fund will again begin collecting revenue from these fees once its balance drops to $15 million. While state financial assurance funds can provide substantial amounts of funding for cleaning up releases, funds in some states may not have sufficient resources to ensure that these cleanups are performed in a timely manner. 
Specifically, officials in nine states reported in our survey that their funds limit the amount of cleanup work they finance based on funding availability. The situation of three such funds, as described by state officials, follows: North Carolina. The revenues to the state’s financial assurance fund have not been sufficient in recent years to address all of the fund’s high- risk sites. As of February 2006, the state was only authorizing cleanup work that the fund could pay for within 90 days. South Carolina. Officials generally preapprove cleanup work only at the sites where contamination is most severe. After an initial assessment of each site’s contamination, the state categorizes releases into one of four categories. The most urgent category includes releases deemed emergencies, all of which were being actively cleaned up as of August 2006. The remaining categories are ranked based on how soon the release is likely to affect human health and the environment, as well as its impact on groundwater. As of August 2006, less than 40 percent of the releases in these categories were being actively cleaned up. Florida. Although the state’s financial assurance fund stopped providing financial responsibility coverage for new releases in 1998, it is having difficulty paying for all of its cleanups. The state fund is only able to actively conduct cleanup work at about one-third of the 12,000 remaining sites. The approximately 8,000 other sites in the fund’s backlog await cleanup. These cleanups will not occur until money becomes available for them, or, potentially, the risk posed by the sites increases so much that they require more urgent cleanup. The timeliness of cleanups of releases from underground storage tanks is especially important because the longer contamination from these releases is left in place, the greater the potential becomes for the contamination to spread. The farther these contaminants are allowed to spread, the greater the chance becomes that they will contaminate drinking water and other sensitive resources, potentially putting human health and the environment at risk. In addition, when state financial assurance funds do not pay cleanup claims on a timely basis, tank owners and operators may delay cleanups. For example, tank owners may be less likely to voluntarily report releases if they know that reporting a release could lead to a mandatory cleanup and believe that they will not be reimbursed by the state financial assurance fund for performing that cleanup for an extended period, according to an EPA regional official. To ensure that funds are available to clean up releases in a timely and appropriate manner, state financial assurance funds must be financially sound. According to EPA guidance on the subject issued in 1993, a state assurance fund is financially sound if it provides reasonable assurance that money is available to pay for cleanup costs and other liabilities. “Reasonable assurance,” according to EPA, would be evident, for instance, if the fund assets are greater than liabilities or there are sufficient resources to meet current demands, that is, the normal timing of payment of claims is not significantly delaying cleanups. If funding levels or claim processing time has a negative impact on the cleanup of releases from underground storage tanks (i.e., causing undue delays in cleaning up releases that therefore harm human health and the environment), then EPA would be concerned about the financial soundness of the fund. 
State financial assurance funds may face additional challenges to remaining financially sound in coming years for the following reasons: Financial assurance funds may take on additional liability from tank installers and manufacturers. Effective February 2007, the 2005 Act directs EPA to require that each state receiving federal funds either implement “secondary containment” for new and replaced underground storage tanks located near sources of drinking water or to require evidence of financial responsibility for tank manufacturers and installers. In selecting the second option, states must require that any manufacturer or installer of an underground storage tank maintain evidence of financial responsibility coverage. In some cases, tank installers and manufacturers may turn to state financial assurance funds for financial responsibility coverage. While a few state funds already provide such coverage to installers, this additional liability could strain the resources of some states’ funds, according to a senior official in EPA’s Office of Underground Storage Tanks. Some states may discover more releases in the coming years than in past years. The 2005 Act requires each state receiving federal funding to inspect all of their underground storage tanks on a 3-year cycle, beginning after the state inspects tanks that have not been inspected since December 1998. For those states that currently inspect sites less frequently, the additional inspections, while intended to prevent leaks in the long term, could lead to a spike in the number of releases discovered. For example, officials in Texas told us that underground storage tank facilities in their state are inspected about every 10 years, on average. Releases in this state that may not have otherwise been found for as long as 10 years may be discovered much sooner, leading to an increase in confirmed releases over the next few years. Finding these releases sooner may mean that the contamination would be less extensive, however, and therefore any cleanup required would be less costly. State financial assurance funds may also be affected by future natural disasters, such as hurricanes Katrina and Rita in 2005. As late as June 2006, the impact of the flooding caused by these hurricanes on Louisiana’s financial assurance fund was not yet fully known, according to a state official. If all underground storage tanks that could have been affected by the flooding turn out to have had releases, the workload of the state assurance fund would increase by 25 percent. Payouts from the state assurance fund would increase by approximately $4 million per year, according to this official. Diversions from state financial assurance funds may also limit some states’ ability to pay for cleanups under certain circumstances. States may sometimes decide to withdraw or withhold money from state financial assurance funds. Of the 43 states that have had financial assurance funds, 16 reported in our survey that they had diverted a total of nearly $435 million from their funds between 2001 and 2005. Officials in most of these states reported that the diverted amounts went to the state’s general fund or to offset state budget shortfalls. A few states reported using diverted funds for specific programs, such as Brownfields grants and loans, a lead- based paint removal program, and cleanup of groundwater contamination caused by sources other than leaking storage tanks. 
Officials we interviewed in two states where diversions occurred—Florida and South Carolina—reported a negative impact on the state program’s ability to clean up sites. In Florida, for example, $20 million was diverted in 2002. As a result, financial assurance fund managers had to adjust the threshold for cleanup, meaning that the cleanup of less urgent releases, which otherwise would have been addressed, was delayed, according to state officials. Officials we interviewed in two other states—Pennsylvania and New Jersey—did not believe that diversions had caused a significant negative impact, if any. In the largest case of a diversion reported to us, for example, the Pennsylvania financial assurance fund loaned the legislature $100 million in 2002 to balance the state’s budget, according to state officials. State officials reported that this diversion did not impact the fund’s operations, however, because the fund still had more than enough money to meet its current expenses. Figure 9 shows the number of states with financial assurance funds that reported to us that they had experienced a diversion between 2001 and 2005. The 2005 Act included language providing that, if a state diverts resources from its financial assurance fund, EPA may not distribute a certain portion of LUST appropriations to that state for enforcement purposes. This provision affects only the 37 states whose financial assurance funds still provide financial responsibility coverage for new releases. Officials we interviewed in Pennsylvania and South Carolina regarding this issue were uncertain about the 2005 Act’s impact on future diversions in those states. A South Carolina official, for example, believed that the provisions could discourage the state from making relatively small diversions from the financial assurance fund because the loss of federal funding would more than offset the gain from the diversion. If the state needed to divert a large amount of money, even relative to the $1.3 million overall distribution it received from EPA in 2005, the disincentive would not be as significant. Two states even commented in their responses to our survey that they anticipated diversions in 2006. EPA had not developed guidance to implement these provisions of the 2005 Act as of December 2006. Concerns have been raised about whether tank owners have incentives to prevent releases from their tanks when they can rely on state financial assurance funds to pay the bulk of the cleanup costs. Although EPA estimates that releases cost about $125,000 to clean up, on average, most state financial assurance funds charge a deductible of $25,000 or less, according to our survey. Twelve states described the circumstances under which penalties could be imposed on tank owners for multiple, or repeated, releases, in their survey responses. Officials in several states indicated that penalties were not usually imposed simply if multiple releases occurred. Rather, most states imposed penalties on the basis of evidence that the tank owner did not comply with applicable regulations or failed to report a release. For example, New Hampshire officials indicated that, while the state has authority for administrative fines and civil penalties in cases involving multiple releases, such actions are not automatically imposed. Instead, fines and penalties may be assessed if a second release results from a tank owner’s recalcitrance in achieving and maintaining compliance with operational regulations. 
Of the 20 states that provided comments in response to our survey question regarding multiple releases, none indicated that increased penalties were imposed simply based on the occurrence of a second release. EPA approves state financial assurance funds to provide financial responsibility coverage. According to EPA guidance, the agency can withdraw this approval if a fund no longer provides coverage that ensures timely and adequate cleanup of releases. As discussed earlier, EPA would be concerned about the soundness of a state financial assurance fund if the funding levels or claim processing time caused undue delays in cleaning up releases, thereby potentially harming human health and the environment. In Texas, for example, claims substantially exceeded revenues during the early years of the state’s financial assurance fund. By 1992, the fund had a backlog of unpaid bills totaling about $170 million. This amount exceeded the fund’s annual income by approximately 300 percent, and new claims arriving daily added to the backlog. In order to catch up, the fund temporarily slowed down some cleanup work. Even though the fund stopped accepting new releases after December 22, 1998, state officials expect it to remain in place at least until 2008. EPA monitors the soundness of state financial assurance funds; this task is carried out by EPA regions, according to the agency’s 1993 guidance on the subject. This guidance suggested several steps that regions could use to adequately monitor fund soundness, including (1) collecting baseline data on relevant fund soundness measures from each state, (2) evaluating the baseline soundness of each state fund, and (3) monitoring these fund soundness measures over time to check for developing problems. The guidance also specified that monitoring state funds should be accomplished as part of regions’ routine oversight of state programs. Officials in four of the six EPA regions we interviewed conducted fund soundness oversight primarily by discussing the financial position of the state assurance fund with relevant state officials. In 2005, EPA’s Office of Underground Storage Tanks began collecting information from states on various aspects of their financial assurance funds. The goals of this effort included providing a better tool to monitor state financial assurance funds’ soundness and helping EPA work with states to resolve any soundness issues. According to some EPA officials, however, the data collected were of limited usefulness. One region’s program manager did not expect that the agency’s effort would provide any new information, except data to document what he already knew. An EPA headquarters official, who is closely involved with the effort, agreed that the agency’s information collection, at most, helped confirm what regions already saw as problem states. Moreover, states provided data with gaps or further clarifications needed in key areas, such as the number of release sites awaiting funding and the estimated total liabilities for underground storage tanks. EPA regional officials described this first effort as a collection of baseline information, and the agency decided to collect data again in 2006 without changing its method. Results for 2006 were not available as of December 2006. The 2005 Act also included language providing that EPA may withdraw approval of a state fund for financial responsibility coverage without withdrawing approval of the overall state underground storage tank program. 
In response, EPA has formed a workgroup to examine the issue of how to assess the soundness of state financial assurance funds and to develop criteria for guidance on the conditions under which it might withdraw fund approval, including what would constitute a lack of financial soundness. The guidance had not been made final as of December 2006. Annual appropriations from the LUST Trust Fund have averaged about $71 million in recent years. Typically, about 80 percent of the money is distributed to the states to support their cleanup programs. LUST Trust Fund money provided to states generally represents a small portion of the individual states’ cleanup program budgets. In fiscal year 2005, the states used about two-thirds of their distributions to fund program administration and enforcement activities and one-third to fund the cleanup of sites. Appropriations from the LUST Trust Fund have been relatively stable since fiscal year 1998. Between fiscal years 1998 and 2005, annual appropriations from the trust fund have ranged from about $65 million to $76 million per year, averaging about $71 million per year. Over this period, EPA distributed an average of about 80 percent of the annual appropriations to states to support their cleanup programs. EPA uses the balance of the annual appropriations to support cleanup activities on Indian lands and its own cleanup-related activities. Forty-eight states reported spending about $15 million in LUST Trust Fund money on site cleanup activities in 2005, by far the largest single source of federal money for this purpose reported in our survey. Figure 10 shows the level of appropriations from the LUST Trust Fund since it began operations. As the figure shows, annual appropriations from the trust fund varied considerably in the first 10 years of the program. Financed by a $0.001/gallon excise tax on gasoline and other motor fuels and the interest that accrues to the fund balance annually, the balance of the LUST Trust Fund had grown to about $2.5 billion by fiscal year-end 2005. The tax has been in effect continuously since 1987, except for a short period in 1990 and the period between December 31, 1995, and October 1, 1997, when the tax had expired. Since 1987, the fund balance has been growing at an average rate of about $129 million per year. By fiscal year-end 2005, the LUST Trust Fund had collected about $3.7 billion in revenue while appropriations totaled about $1.2 billion, leaving a fund balance of approximately $2.5 billion. Figure 11 shows the changes in the trust fund balance from 1987 through 2005. From the inception of the fund through fiscal year 2005, net tax revenue to the LUST Trust Fund has averaged about $144 million per year, with interest from investments adding an average of $49 million. Net revenues in fiscal years 2001 and 2005 also included relatively small amounts expended from the fund by EPA and subsequently recovered from the parties responsible for the contamination and redeposited to the fund (see table 3). States’ revenues from the LUST Trust Fund’s annual appropriations represent a relatively small part of many states’ cleanup program revenue in any given year. In fiscal year 2005, EPA distributed about $58 million of LUST Trust Fund money to the states, or about $1.2 million per state. State programs spent much more than this on the cleanup portion of their programs alone. In fact, 45 states each reported spending an average of $24 million in 2005 to clean up contamination from leaking tanks. 
LUST Trust Fund money used for cleanup work is generally intended to pay for cleaning up releases from tanks without a viable owner. Even when examining only this aspect of the cleanup effort, nine states reported spending amounts that far exceeded their LUST Trust Fund distribution— more than $2 million each in 2005 alone to clean up contamination from leaking tanks without a viable owner. As discussed earlier in this report, the cleanup work from such tanks that remains to be done is significant. Distributing the annual LUST Trust Fund appropriation among the states is a two-step process. First, EPA headquarters uses a formula to determine the amount each state should receive and then divides the money among the regions based on the total for the states within each region. Second, EPA regional officials consider the components of the state formulas, along with additional factors, to determine the actual amount to be distributed to each of the states in their region. Additional factors that may be considered, according to EPA regional officials, include states’ actual need for money in light of such things as funding carryovers from prior years, states’ work plans, or any special projects. For the most part, the EPA regional officials whom we interviewed stated that deviations from the formula distributions, when they occur, are usually relatively minor. The formula EPA headquarters uses to distribute LUST Trust Fund money to the regions incorporates three components: (1) a minimum distribution of $300,000 per state; (2) a need-based amount that considers the numbers of underground storage tanks and releases in the state, as well as the percentage of the population relying on groundwater for potable water; and (3) a performance-based bonus to states that meet or exceed the national averages for the numbers of cleanups initiated and completed. The distribution formula does not consider the number of releases from tanks without a viable owner in various states, nor does it consider the risk that specific releases may pose to human health and the environment. EPA develops information on states’ needs and performance from states’ semiannual activity reports on their tank numbers and cleanup activities. However, our survey disclosed several concerns regarding the accuracy of these reports, including the following: According to officials in two states, the information they report in the semiannual activity reports is based on estimates rather than actual performance. For example, a Maine official told us that the data the state reports is generated by canvassing their regional staff, and the state has found errors in the data reported in the past. A Wyoming official told us that the state tracks contaminated sites rather than releases. Because EPA reports call for data on releases rather than sites, Wyoming provides its best estimate of release data. Some of the reporting problems disclosed in our survey are related to the definitions of the reporting elements. An Arizona program official told us, for example, that the program was uncertain of EPA’s definition of the cleanups initiated performance measure. The state official expressed concern that the state’s definition of cleanups initiated may not agree with the EPA definition. The state reports a site as “cleanup initiated” when the release has been confirmed and a case number assigned. 
According to the EPA definition, however, cleanup initiated requires that the state or responsible party has evaluated the site and initiated physical activity (e.g., removal or treatment of the contamination, removal of the contaminated soil, or monitoring of the groundwater or soil being remediated). Cleanup initiated should also be reported in situations where the state has evaluated the site and determined that no physical activity is necessary to protect human health and the environment. A Louisiana Underground Storage Tank Program official told us that a review of its files disclosed that the program had been reporting duplicate entries and releases that did not meet EPA’s definition. An Oregon Underground Storage Tank Program Coordinator told us that, in the process of cleaning up its database information, program officials found many sites being reported that were duplicates or involved releases that did not come from regulated tanks. A Maryland Department of the Environment official acknowledged that it has been reporting semiannual performance data incorrectly, and as a result, some of the state’s performance activities have been double counted. An Oklahoma Petroleum Storage Tank Division official told us that the division had been reporting performance data on all tanks regulated by the state, including the aboveground storage tanks, and also undercounting the number of “active tanks” by excluding tanks that were only temporarily out of service. To help ensure the accuracy of the states’ semiannual activities reports, EPA recommends that the regions review each state’s data submission for reasonableness based on the state’s prior reports and the regional program manager’s knowledge of the state’s program. When any of the states’ data appear questionable, the regions are asked to follow up with the states to obtain an explanation or corrected data. Our interviews with EPA regional officials indicated that they were generally following this headquarters guidance. Nevertheless, in some cases regional officials were not aware of reporting problems with the states in their regions that our survey disclosed. To ensure that states properly understand EPA’s definition of the data reporting elements, at least one EPA region reminds its states of the EPA definitions each time a semiannual activity report is due. Other regions we interviewed were less proactive, essentially relying on informal discussions, the experience of the state officials, or the posting of the definitions on the EPA Web site. EPA also aggregates elements of the states’ semiannual activity reports to measure program performance against the national goals it establishes in accordance with the Government Performance and Results Act. For fiscal year 2005, EPA’s goals for the underground storage tank program included (1) completing 14,500 cleanups, (2) completing 30 cleanups in Indian Country, and (3) decreasing newly reported confirmed releases to fewer than 10,000. On the basis of the states’ reports for fiscal year 2005, EPA reported that all the goals were met. Money from the LUST Trust Fund is meant, in part, to address releases from tanks without a viable owner. In a November 2005 report, we recommended that EPA collect available information from states, in their reports to the agency, regarding the number and cleanup status of all known abandoned underground storage tanks within their boundaries. 
This information would improve EPA’s ability to determine how to most efficiently and effectively distribute LUST Trust Fund dollars to the states. Although 37 states and the District of Columbia reported numbers of releases that came from tanks without a viable owner in our survey, as of December 2006, EPA Office of Underground Storage Tanks officials stated that EPA had not yet required states to report this information because of concerns regarding the burden this might place on some states. Under cooperative agreements with EPA, states receive distributions from the LUST Trust Fund to help cover the cost of administering their LUST cleanup programs. According to EPA regional officials, the states’ programs are all set up differently and, under EPA guidelines, the states can decide how they will best use the LUST Trust Fund money to fit their particular program. According to EPA, over the past 10 years, on average, states have used roughly one-third of their LUST Trust Fund money for each of the following categories: (1) administrative activities, including LUST Trust Fund program management, general management and administrative support, program guidance and implementation, and training; (2) enforcement activities, including all actions necessary to identify a leaking underground storage tank site’s potentially responsible party; issuance of letters, notices, and orders to the responsible parties; oversight of the cleanups; and activities associated with cost recovery actions; and (3) cleanup activities consisting largely of emergency responses, site investigations, exposure assessments, and corrective actions. In fiscal year 2005, most of the states reported spending at least some of their LUST Trust Fund money in all three categories. However, some states focused their spending on just one or two categories. For example, 10 states reported they did not spend any of their LUST Trust Fund money on cleanup activities in fiscal year 2005. Figure 12 shows the states’ use of LUST Trust Fund money by spending category. Regional officials told us that many states prefer to use their LUST Trust Fund money to fund staff positions rather than cleanups. For example, according to an EPA Region 5 official, although states in their region initially used their LUST Trust Fund money to perform cleanups, they soon decided that funding staff positions was more cost effective than performing the cleanup and pursuing cost recovery, which can be an expensive and time-consuming process. By funding additional staff positions rather than cleanup activities, states were often able to identify the responsible parties and force them to do the cleanups, thereby avoiding the time and expense of pursuing cost-recovery actions. A Region 6 official told us that some states view the cost recovery process as a deterrent to using the federal money for cleanup activities. Because they have state money available for cleanup efforts, states can use the federal money for staff salaries. Region 8 officials noted that some states actually require the use of state money for cleanup, and thus the federal money is used for administrative or enforcement activities, particularly salaries. An EPA official in Region 4, however, took issue with states that do not use LUST Trust Fund money for cleanups. The EPA official stated that, in some cases, cleanups that could have been performed with the LUST Trust Fund money are not being undertaken because the money is being used for salaries. 
The official told us that Region 4 encourages states to fund salaries with state money so that LUST Trust Fund money can be used for cleanups; however, the official acknowledged that ultimately it is up to the states to decide how to use these federal funds, within the permitted parameters. EPA generally relies on states to ensure that tank owners and operators comply with federal financial responsibility regulations, but it does not provide specific guidance to the states as to whether or how frequently they should verify financial responsibility coverage. As a result, states verify coverage according to differing schedules or not at all. Therefore, EPA lacks assurance that states are adequately overseeing and enforcing financial responsibility provisions. We found that only about one-third of states check coverage on an annual basis, while the remaining states generally reported they check less frequently or not at all. Additionally, many states could not provide information on the extent of inadequate financial responsibility in their states for the past 5 years. If states do not verify coverage on a routine basis, it may be difficult for them to know whether owners or operators will have the required coverage in the event of a release. If the required coverage is not in place when a release occurs, funds may not be available to pay for cleanup in a timely manner, thus increasing the potential for contamination to spread and damage the environment and human health. Additionally, a lack of available funds may result in taxpayers paying more of the cleanup costs than they would have otherwise paid. In addition, EPA’s method of monitoring whether state financial assurance funds provide adequate financial responsibility coverage has limitations. Unless EPA improves how it monitors the soundness of state financial assurance funds, it will not be aware of deficiencies in coverage before they occur or in sufficient time to take action to avoid funding shortages, which could delay the cleanup of releases and potentially threaten human health and the environment. Under the principle of “the polluter pays,” tank owners and operators are primarily responsible for the costs of cleaning up contamination from their leaking tanks. RCRA requires these owners and operators to obtain some form of financial responsibility coverage to demonstrate that they have access to resources to cover cleanup costs. In response, many states developed financial assurance funds, at least in part to ensure that releases are cleaned up in a timely manner. In the event of a release, tank owners covered by these funds usually pay a relatively small deductible, while the funds provide sometimes large sums of public funding to complete the required cleanup. Because these deductibles are often small, they may not provide an incentive for tank owners to prevent releases from occurring. In addition, in many states, tank owners are using financial responsibility mechanisms other than state assurance funds. While some state funds are currently encountering difficulties paying for cleanups in a timely manner, tank owners in many states will increasingly rely on other means of financial responsibility coverage, making it important to know whether state funds or private forms of coverage are more effective in ensuring timely cleanups. EPA is ideally situated, through its existing relationship with state program officials throughout the country, to shed light on this issue. 
EPA’s distribution of LUST Trust Fund money to states depends on data that may not be accurate. In addition, states are not required to report data to EPA on the number of releases from tanks without a viable owner. Although one of the purposes of the fund is to help states clean up releases from tanks without a viable owner, EPA currently allocates resources to the states without taking into account the number of such releases in each state. In our November 2005 report, we recommended that EPA collect available data from the states regarding the number of tanks in each state that had no viable owners. In commenting on this recommendation, EPA expressed concern about placing an undue burden on states. In our response, we explained that we were not suggesting that states should try to identify new sites that they were not currently aware of, but merely report on sites without viable owners separately from the aggregated data that they already provided to EPA. We continue to believe that such reporting would be worthwhile and would not present an undue burden to most states. In our survey, 37 states and the District of Columbia reported data on the number of tanks without viable owners that had known releases. Taking this information into account in distributing LUST Trust Fund money could encourage the remaining states to gather such information as well. In addition, developing national data on the extent to which releases remaining to be cleaned up are attributed to tanks without viable owners would be useful to both EPA and the Congress in assessing the future public funding needs for EPA’s UST program. We recommend that the Administrator, EPA, take the following four actions: Ensure that states verify, on a regular basis, that tank owners and operators are maintaining adequate financial responsibility coverage, as required by RCRA; Improve the agency’s oversight of the solvency of state assurance funds to ensure that they continue to provide reliable financial responsibility coverage for tank owners; Assess, in coordination with the states, the relative effectiveness of public and private options for financial responsibility coverage to ensure that they provide timely funding for the cleanup of releases; and Better focus how EPA distributes program resources to states, including LUST Trust Fund money, by ensuring that states are reporting information in their semiannual activity reports that is consistent with EPA’s definitions; encouraging states to review their databases to ensure that only data on the appropriate universe of underground storage tanks are being reported in their semiannual activity reports; and gathering available information from states on releases attributed to tanks without a viable owner and taking this information into account in distributing LUST Trust Fund money to states. We provided EPA with a draft of this report for its review and comment. EPA agreed with our recommendations and provided information on the agency’s plans and activities to address each of them. Regarding our recommendation that EPA ensure that states verify that tank owners and operators maintain adequate financial responsibility coverage, the agency indicated that it has issued draft guidelines that would require inspections of underground storage tanks to assess compliance with financial responsibility requirements. 
Regarding our recommendation that EPA improve its oversight of the solvency of state assurance funds, the agency indicated that it would strengthen its oversight by improving a recently developed monitoring tool and by developing guidance for its oversight process. Regarding our recommendation that EPA assess the relative effectiveness of public and private options for financial responsibility coverage, the agency indicated that it would consider conducting such a study in conjunction with the states. Finally, regarding our recommendation to better focus how EPA distributes program resources to states, the agency stated that it would work toward ensuring that state- reported data are consistent with existing EPA definitions and are limited to federally regulated underground storage tanks. Also, EPA stated that it would consider changes to improve the distribution of future LUST Trust Fund money. EPA’s letter commenting on our report is included as appendix IV. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the EPA Administrator and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. The objectives of this review were to provide information on (1) states’ estimates of the cost in public funding from state and federal sources to clean up known releases from underground storage tanks, (2) states’ primary sources of funding for addressing these releases and their future viability, and (3) the funding available from federal sources to address these releases. For the purposes of this review, we defined public funding as including any money controlled and/or provided by state and federal agencies—for example, money from the federal Leaking Underground Storage Tank (LUST) Trust Fund, state financial assurance funds, other state funds that have not been approved by the Environmental Protection Agency (EPA) to provide financial responsibility coverage, or money appropriated by the state to pay for cleanup that would not otherwise occur. Our definition excluded money spent by federal, state, and local government agencies to clean up releases from tanks they either own or operate—this money would be considered to be provided by the responsible party. To address our objectives, we developed and administered a survey to state officials responsible for underground storage tank programs or, where applicable, for state cleanup funds, in the 50 states and the District of Columbia. Specifically, we prepared and e-mailed a Word Electronic Questionnaire to obtain data, whether estimated or actual, from states on underground storage tanks, including the number of tanks, releases, and cleanups, and financial responsibility and funding sources for cleanups. The practical difficulties of conducting any survey may introduce nonsampling error. 
For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages to minimize such nonsampling errors. For example, in the research design and data collection stages, we took the following steps: We obtained from EPA its list of state contacts. We then attempted to contact all listed state officials via e-mail and asked them to tell us whether they or someone else in their state would be the most appropriate contact. Upon receiving this e-mail, a few officials identified more appropriate survey respondents. In states where we were not able to contact the officials on EPA’s list via e-mail, we sent the e-mail regarding the most appropriate contact to officials named on lists of state contacts from two other professional organizations that also conduct surveys about underground storage tanks—the New England Interstate Water Pollution Control Commission and the Association of State and Territorial Solid Waste Management Officials—and, in some cases, to officials listed on states’ underground storage tank-related Web sites. We pretested the survey with officials from four states between October 28, 2005, and December 21, 2005, and used their feedback to refine the survey. States were selected for the pretests to ensure variation in size of workload and status of the state financial assurance fund. For these pretests, we sent agency officials a draft of the survey. We then interviewed the officials to ensure that the (1) questions were clear and unambiguous; (2) terms used were precise, including our definition of public funding; and (3) data needed to respond to the questions was available to the state officials. As a result of our pretests, we made changes to some of the survey questions. We sent an announcement on November 18, 2005, of the upcoming survey to state contacts (including the District of Columbia) and then e- mailed the survey as an attachment on January 19, 2006. We asked respondents to return the survey by e-mail, fax, or mail by February 3, 2006. We accepted responses to the survey through mid-October 2006. We sent e-mail reminders and conducted follow-up telephone calls with nonrespondents. To minimize nonsampling error in the data analysis stage, we took the following steps: For selected survey questions, where we were able to, we independently corroborated survey data by comparing these data with EPA data. We then followed up with states and EPA as needed about discrepancies. We included a series of data reliability questions in the survey to assess the accuracy of the information provided to us by the respondents. Specifically, we collected information about (1) the databases states used to provide survey data; (2) the internal controls on those databases (e.g., whether it had been reviewed for quality, the procedures to ensure accurate data entry, and known limitations); (3) whether the data provided were actual or estimates; and (4) the assumptions, data, and calculations used to provide the actual or estimated data for selected questions. We also requested supporting documentation if states noted their database(s) had been reviewed for quality. 
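To illustrate the corroboration step described above, the following is a minimal sketch of how state-reported survey figures might be compared with the corresponding EPA figures and flagged for follow-up. The state names, measures, values, and the 5 percent tolerance are all illustrative assumptions; they do not reflect GAO's actual comparison procedure or data.

```python
# Hypothetical sketch of corroborating state survey responses against EPA figures
# and flagging discrepancies for follow-up. All names, values, and the tolerance
# are illustrative assumptions.

# State-reported survey responses (e.g., cumulative counts as of 9/30/2005)
survey_data = {
    "State A": {"confirmed_releases": 4150, "cleanups_completed": 3900},
    "State B": {"confirmed_releases": 1200, "cleanups_completed": 1185},
}

# Figures reported to EPA for the same measures (illustrative)
epa_data = {
    "State A": {"confirmed_releases": 4150, "cleanups_completed": 3900},
    "State B": {"confirmed_releases": 1350, "cleanups_completed": 1185},
}

TOLERANCE = 0.05  # flag differences greater than 5 percent for follow-up


def flag_discrepancies(survey, epa, tolerance=TOLERANCE):
    """Return (state, measure, survey value, EPA value) tuples needing follow-up."""
    flags = []
    for state, measures in survey.items():
        for measure, survey_value in measures.items():
            epa_value = epa.get(state, {}).get(measure)
            if epa_value is None:
                continue  # no EPA figure available to compare against
            baseline = max(abs(epa_value), 1)
            if abs(survey_value - epa_value) / baseline > tolerance:
                flags.append((state, measure, survey_value, epa_value))
    return flags


for state, measure, s_val, e_val in flag_discrepancies(survey_data, epa_data):
    print(f"Follow up with {state}: {measure} (survey {s_val} vs. EPA {e_val})")
```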
We used a data collection instrument (DCI) to systematically and consistently record all available data reliability information (from survey responses, published reports, or interviews) in order to make assessments of the reliability of the survey data provided by each state. The DCI was then reviewed by an independent person who assessed the accuracy of DCI data entries and the reasonableness of the judgments on the reliability of states’ survey data. As expected, there was wide variability in the level of oversight of the databases states use to track underground storage tank information. There was similar variability in the ways state officials described how they arrived at responses to certain questions—whether it was based on states’ “actual” data, derived estimates, or some other source. Specifically, some states provided explanations for their responses that were precise and grounded in reasonable mathematical or trend based assumptions, while others noted that their responses were educated guesses. Given the limitations in the information reported from some states, we determined that the survey data are not comparable by state, nor should they be reported using such terms as “actual” sums, budgets, or outlays. Consequently, the data are presented in the body of the report as aggregate information on what states estimate their underground storage tank and leaking underground storage tank numbers and funding to be. We report all states’ responses to selected questions only in appendix II, because of the known limitations in the reliability of state level comparisons. Such data are included in appendix II because of congressional request and in order to illustrate the range of responses states provided to selected questions and the wide variance in the reliability of those responses. With these provisos, the survey data are sufficiently reliable as they are used in the body of the report (i.e., to be presented in aggregate, as testimonial evidence). The survey data presented in appendix II are not reliable for state level comparisons. We conducted interviews regarding data reliability with a nonprobability sample of seven states (see next page for a further discussion of this sample of states). These interviews included in-depth questions focusing on topics such as the states’ database reviews and database limitations identified in their survey responses. We contacted state officials to clarify survey responses when necessary and used a centralized tracking document to record all changes. Changes made in the tracking document were verified against the keypunched data to ensure all changes and updates were recorded. When changes took place after a survey was keypunched, the updates were made in the computer program used to generate survey results. We edit-checked all surveys before they were keypunched, verified all keypunched survey data against hard copies of the surveys, and verified the computer programs used to generate survey results. From the population of 51 state contacts who were asked to participate in our survey, we received 50 questionnaires for an overall response rate of 98 percent. We did not receive a questionnaire from South Dakota. We do not know if responses for South Dakota would have differed materially from those of the states that completed the survey. 
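The reliability assessments recorded through the DCI can be thought of as rolling a handful of recorded attributes up into a relative rating for each state. The sketch below is a hypothetical illustration of that kind of roll-up only; the attribute names and the tiering rules are assumptions and are not the criteria GAO actually applied.

```python
# Hypothetical roll-up of DCI-recorded attributes into relative reliability tiers.
# Attribute names and tiering rules are illustrative assumptions.

def reliability_tier(record):
    """Assign a relative tier (1 = strongest) from a state's DCI record."""
    strengths = sum([
        record.get("recent_quality_review", False),
        record.get("internal_controls", False),
        record.get("precise_explanations", False),
    ])
    weaknesses = sum([
        record.get("significant_data_problems", False),
        record.get("responses_were_guesses", False),
    ])
    if strengths == 3 and weaknesses == 0:
        return 1  # strong controls and precise, grounded explanations
    if strengths == 0 or weaknesses == 2:
        return 3  # few controls and/or responses largely guesses
    return 2      # a mix of controls, problems, and explanation quality


example = {"recent_quality_review": True, "internal_controls": True,
           "precise_explanations": False, "significant_data_problems": False,
           "responses_were_guesses": False}
print(reliability_tier(example))  # prints 2
```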
From the responses we received, we gathered information about (1) state databases used to track underground storage tank information; (2) states’ data for underground storage tank management, including data regarding active and closed tanks, confirmed releases, and cleanups initiated and completed; (3) states’ sources of money for cleanup, including state financial assurance funds; (4) states’ use of federal money to clean up leaking underground storage tanks; and (5) financial responsibility. We provided states with the definition of public funding described previously in this section, and we asked them to respond to all survey questions about such funding according to this definition. The survey was focused specifically on federally regulated underground storage tanks, as defined by EPA. A few states reported they were not able to provide us with data specific to federally regulated underground storage tanks for selected questions, and instead, they generally provided us with either data including a different universe of tanks or data prorated based on the number of federally regulated underground storage tanks in the state for these questions. Additionally, most survey questions that asked for data for a specific year referred to the federal fiscal year. If states were unable to provide data for the federal fiscal year, we asked them to provide us with the starting date of their alternative reporting year. As a result, we present such data in the report as 2005 data. In addition to conducting a survey to address our three objectives, we also interviewed agency officials in a nonprobability sample of eight states— Florida, Iowa, New Jersey, Ohio, Pennsylvania, South Carolina, Texas, and Utah—to gather additional information regarding selected survey topics. Specifically, we talked with this group of states about topics such as their use of LUST Trust Fund money, restrictions within their state financial assurance fund on accepting claims, diversions from their state financial assurance funds, the process of phasing out their state financial assurance funds, and cases of inadequate financial responsibility coverage in their state. We selected this sample of states in order to discuss as many of our topics of interest as possible within a limited number of interviews. To select the states, we first reviewed all states’ responses to survey questions related to the relevant topics to determine which states would be able to discuss each topic of interest. We then calculated a score for each state based on the number of relevant topics they could discuss, as indicated by their survey responses. We interviewed all states that had scores at or above a threshold score that we determined, based on how many states we would need to discuss the relevant topics with to obtain sufficient information for the purposes of this report. We also conducted interviews with regional program officials from EPA’s Underground Storage Tank Program in six EPA regions to gather additional information about (1) states’ primary sources of money for addressing releases from leaking underground storage tanks, (2) these sources’ future viability, and (3) the federal funding available to address these releases. We selected these regions primarily because survey responses from one or more states in these regions raised questions about similar data they had reported to EPA. 
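The score-and-threshold approach used to select states for follow-up interviews, described earlier in this section, can be sketched as follows. The topics, states, and threshold value are hypothetical; the sketch only illustrates the general approach of counting the relevant topics each state could discuss and interviewing every state at or above a chosen cutoff.

```python
# Illustrative sketch of the interview selection approach: count how many topics of
# interest each state could speak to (based on its survey responses) and select every
# state at or above a threshold. States, topics, and the threshold are hypothetical.

topics_of_interest = {
    "lust_trust_fund_use", "claim_restrictions", "fund_diversions",
    "fund_phase_out", "inadequate_financial_responsibility",
}

# Topics each state could discuss, inferred from its survey responses (illustrative)
state_topics = {
    "State A": {"lust_trust_fund_use", "fund_diversions", "fund_phase_out"},
    "State B": {"claim_restrictions"},
    "State C": {"lust_trust_fund_use", "claim_restrictions",
                "inadequate_financial_responsibility", "fund_diversions"},
}

THRESHOLD = 3  # interview states that can cover at least this many topics

scores = {state: len(topics & topics_of_interest)
          for state, topics in state_topics.items()}
selected = sorted(state for state, score in scores.items() if score >= THRESHOLD)

print(scores)    # {'State A': 3, 'State B': 1, 'State C': 4}
print(selected)  # ['State A', 'State C']
```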
We spoke with regional officials about these apparent discrepancies, as well as about the regions’ processes for distributing money from the LUST Trust Fund, states’ use of LUST Trust Fund money, and the solvency of states’ financial assurance funds. To obtain further information about the federal funding to address these releases, we interviewed Department of the Treasury officials responsible for managing the LUST Trust Fund, interviewed EPA headquarters and regional officials to determine the process by which EPA distributes LUST Trust Fund money, and gathered documentation regarding appropriations of money from the fund to EPA, states’ expenditure of fund money, and the balance of the fund and its annual revenues. The documentation we gathered included (1) annual apportionment letters, which we used to track appropriations of LUST Trust Fund money to EPA; (2) EPA Spending Reports, which we used to track state expenditures of LUST Trust Fund money; and (3) Treasury’s LUST Trust Fund Financial Statements, which we used to track the fund balance and revenues collected into the fund. We selected these sources based on EPA officials’ indications that they were the most appropriate sources for the purposes of this report. For Treasury’s LUST Trust Fund Financial Statements, we obtained and reviewed relevant documentation on their reliability, including copies of audits of Treasury’s financial statements and internal controls. These audits were conducted in accordance with generally accepted government auditing standards. We also discussed the reliability of Treasury’s LUST Trust Fund data with knowledgeable EPA and Treasury officials. We found the data elements that we used in this report from Treasury’s financial statements sufficiently reliable for the purposes of this review. We conducted our work from June 2005 to December 2006 in accordance with generally accepted government auditing standards. As described in appendix I, our assessment of the reliability of the data provided by states in their surveys found wide variability in the level of oversight of the databases that states use to track underground storage tank information, and similar variability in the ways state officials described how they arrived at responses to certain questions. For the purposes of this report, we have divided states into three relative categories, according to our assessment of the reliability of their survey responses: (1) 17 states generally reported having fairly recent data quality reviews, several internal controls on the data, no significant data quality problems, and provided fairly precise and mathematically grounded explanations for their calculations; (2) 26 states and the District of Columbia generally reported having some internal controls on the data, and/or some data quality problems, and/or provided a mix of “guesses” and fairly precise explanations of their calculations; and (3) 6 states generally reported having few, if any, internal controls on the data, and/or significant data quality problems, and/or did not provide explanations for their calculations or reported that they were guesses. In tables 5, 6, 8, 9, and 10 in this appendix, we have identified the states that fall into each category. Overall, given the limitations in the information reported from some states, data reported by states and presented in this appendix should not be used to compare state programs.
GAO SURVEY OF THE 50 STATES: FINANCING CLEANUPS OF LEAKING UNDERGROUND STORAGE TANK SITES
The U.S.
Government Accountability Office (GAO) is an agency of the legislative branch that reviews federal programs on behalf of the U.S. Congress. To aid in our continuing reviews of the Environmental Protection Agency’s (EPA) Underground Storage Tank program, we are currently surveying the 50 states as part of a study of how states finance the cleanup of leaking underground storage tank (LUST) sites. We will use the information gathered in this survey to provide the Congress with information about the magnitude of LUST cleanup costs across all 50 states and the resources available to state programs to address these cleanups. We are aware of similar survey efforts conducted in the past year by the Vermont Department of Environmental Conservation and by EPA’s Office of Underground Storage Tanks. We have discussed our survey with these parties and eliminated overlap where feasible. Your prompt response to this survey is very important. Without your state’s response, we will not be able to accurately report to the Congress on the magnitude of LUST cleanup costs across all 50 states, how states are financing these cleanups, and the resources the states have to fund these cleanups. Your prompt participation will help us avoid costly follow-ups. To answer some of our questions, you may need to coordinate your responses with other state agencies responsible for certain aspects of the program. This questionnaire can be filled out using MS-Word and returned via e-mail to [email protected]. If you prefer, you may print copies of the questionnaire and complete them by hand. If you choose to print the questionnaire, please mail or fax it to: Nico Sloss, Senior Analyst 10 Causeway Street, Suite 575 Fax: (617) 788-0505 Please use your mouse to navigate by clicking on the field or check box you wish to answer. To select a check box or button, simply click on the center of the box. To change or deselect a check box response, simply click on the check box and the ‘X’ will disappear. To answer a question that requires that you write a comment, click on the answer box ____ and begin typing. The box will expand to accommodate your answer. To help ensure consistency in survey responses from the various states, we provide definitions for terms in this survey at or near the point at which the term appears. Please consider these definitions when responding to survey questions. Phone: (617) 788-0543 e-mail: [email protected]. e-mail: [email protected] . Have there been any reviews of the recent review? [Enter quality of the data? UST management? 4 digit year.] a. Yes....... No ........ b. Yes....... No ........ c. Yes....... No ........ d. Yes....... No ........ 2. Which of the following types of information does each of the databases you listed above contain? (e.g. location) a. .......................................................... b. ........................................................... c. ........................................................... d. ........................................................... 3. What procedures are used to ensure all data contained in the databases you listed above are accurately recorded? a. ................................. b. ................................. c. ................................. d. ................................. 4. What are the known limitations of the current data (e.g., data elements that are known to be incomplete, incorrect, or out-of-date) for each of the databases you listed above? a.... ........................... None b. .. 
........................... None c. ........................... None d. ........................... None 5. Is there any additional information about the way your states’ data for UST management is collected, entered, stored, and quality reviewed that would help inform our interpretation of these data? Federally regulated USTs: In this survey we are concerned with federally regulated USTs, as defined by EPA. These tanks include “any one or combination of tanks (including underground pipes connected thereto) that is used to contain an accumulation of regulated substances, and the volume of which (including the volume of underground pipes connected thereto) is 10 percent or more beneath the surface of the ground. This term does not include any: (a) farm or residential tank of 1,100 gallons or less capacity used for storing motor fuel for noncommercial purposes; (b) tank used for storing heating oil for consumptive use on the premises where stored; (c) septic tank; (d) certain pipeline facilities; (e) surface impoundment, pit, pond, or lagoon; (f) storm- water or wastewater collection system; (g) flow-through process tank; (h) liquid trap or associated gathering lines directly related to oil or gas production and gathering operations; or (i) storage tank situated in an underground area if the storage tank is situated upon or above the surface of the floor.” 6. What are the cumulative data for the number of federally regulated USTs for your state, first as of September 30, 2004, and then as of September 30, 2005? Are these numbers exact or estimated? Cumulative data, as of estimate? estimate? a. Active tanks ................. b. Closed tanks ................. c. Confirmed releases....... d. Cleanups initiated......... e. Cleanups completed ..... f. Emergency responses... 7. How many new releases from federally regulated USTs did your state confirm in each of the last five federal fiscal years? Is this an exact number or an estimate? a. 2001 ............................ b. 2002 ............................ c. 2003 ............................ d. 2004 ............................ e. 2005 ............................ f. How did you calculate the number of new releases from USTs your state confirmed in each of the last five years? 8. How many new releases from federally regulated USTs do you estimate that your state will confirm over the next five years? Estimated number of new releases your state will confirm over the next five years a. How did you estimate the number of new releases from federally regulated USTs that your state will confirm over the next five years? b. How many of these new releases over the next five years do you estimate will require at least some amount of public funding to clean up? Estimated number of new releases over the next five years that will require at least some amount of public funding to clean up c. How did you estimate the number of new releases over the next five years that will require at least some amount of public funding to clean up? 9. EPA computes your state’s “cleanup backlog” by subtracting the cumulative number of cleanups completed from the cumulative number of confirmed releases. Based on the numbers you provided above, your state’s cleanup backlog, as of September 30, 2005 is: 0. Is this number correct? Yes .................... No...................... a. What is the correct number? Please refer to this number when answering questions about your state’s cleanup backlog in completing the remainder of this survey. 10. 
Considering the number of releases in your state’s cleanup backlog as of September 30, 2005, how many involve MtBE at levels requiring cleanup? Is this an exact number or an estimate? Number of releases with MtBE ...... a. How did you calculate the number of releases in your state’s cleanup backlog that involve the release of MtBE at levels requiring cleanup? 11. Considering the number of releases in your state’s cleanup backlog as of September 30, 2005, how many have affected groundwater at levels requiring cleanup? Is this an exact number or an estimate? groundwater .................................... a. How did you calculate the number of releases in your state’s cleanup backlog that have affected groundwater at levels requiring cleanup? 12. Is there any additional information about your state’s data for UST management that would help inform our interpretation of your responses to questions about the scope and type of UST cleanups in your state? Sources of Funding for Cleanup Public funding: Includes any funding controlled and/or provided by state and federal agencies— for example, funds from the federal LUST Trust Fund, state financial assurance funds, other state funds that have not been approved by EPA to serve as financial responsibility mechanisms, or funds appropriated by the state to pay for cleanup that would not otherwise occur. Do not include funds spent by federal, state, and local governmental agencies to clean up releases from tanks they either own or operate—these funds would be considered to be provided by the responsible party. State financial assurance fund: Any state fund used to pay for cleanups of releases from federally regulated USTs. We do not make a distinction between funds that EPA has approved as a financial responsibility mechanism and those that EPA has not approved. We recognize that in some cases a state’s fund may cover tanks without a viable responsible party as well as other types of tanks. In those cases where a state fund is dedicated solely to coverage of tanks without a viable responsible party, the survey provides a separate space to answer questions about such a fund. Responsible party funding: Includes both direct expenditures by responsible parties (for example, a tank owner paying out-of-pocket for all or a portion of the costs of cleanup) and indirect expenditures (for example, a tank owner’s insurance company paying for all or a portion of the funded by responsible parties, not specific cost data. EPA-defined site cleanup costs: “All costs associated with site response concerning prevention or mitigation of threats to public health, welfare, or the environment that may occur by a release (or suspected release) of petroleum from an underground storage tank. These costs include emergency responses, site investigations, exposure assessments, the planning and design of corrective action, and the conduct, management and oversight of long-term remedial corrective actions.” 13. Considering the number of releases in your state’s cleanup backlog as of September 30, 2005, what is your estimate of the number of these releases for which EPA-defined site cleanup costs will be paid for exclusively by responsible parties and the number of these releases for which some amount of public funding will be required? [Please provide your best estimate.] a. number of releases for which cleanup costs will be paid exclusively by a b. number of releases for which cleanup costs will be paid with some amount of c. 
How did you estimate the number of releases for which EPA-defined site cleanup costs will be paid for exclusively by responsible parties and the number of these releases for which some amount of public funding will be required? 14. Among those releases in the current cleanup backlog that will require at least some amount of public funding after September 30, 2005, excluding funds already spent on these cleanups, how many would you estimate will require public funding amounts in the following ranges? a. $0 - $99,999 ............................................... b. $100,000-$499,999 .................................... c. $500,000-$999,999 .................................... d. $1,000,000 or more .................................... Number for which costs cannot be e. estimated.................................................... f. How did you estimate the number of releases that fall into each category of funding? 15. Based on your experience with cleanups of leaking federally regulated USTs that require some amount of public funding, what do you estimate is the average cost in public funds to fully address each release? Estimated average cost in public funds to fully address each release a. What do you estimate is the average cost in public funds to fully address each release that involves MtBE at levels requiring cleanup? Estimated cost in public funds to fully address each release that involves MtBE at b. What do you estimate is the average cost in public funds to fully address each release that impacts groundwater at levels requiring cleanup? Estimated cost in public funds to fully address each release that impacts groundwater at levels requiring cleanup c. How did you estimate these average costs in public funding? 16. In the past year, what was the total amount spent (in actual outlays) to pay for the public funding portion of cleanups at federally regulated UST sites? [If possible, provide this amount for the latest federal fiscal year (10/1/04 to 9/30/05).] Is this an exact number or an estimate? Total outlays................................... a. Are these amounts for the latest federal fiscal year (10/1/04 to 9/30/05)? Yes .................... No...................... b. On what date does the year for which you are reporting start? Is this an exact number or an estimate? a. Federal LUST Trust Fund .............. b. Other federal sources .............................................. c. State financial assurance fund ........ d. State fund dedicated to tanks without a viable owner ................................ .............................................. g. Are these amounts for the latest federal fiscal year (10/1/04 to 9/30/05)? Yes .................... No...................... h. On what date does the year for which you are reporting start? 18. Is there any additional information about the amounts of public funding listed above that would help inform our interpretation of your responses to these questions? 19. What additional sources of public funding for LUST cleanups, if any, do you believe have a high probability of becoming available in the next 5 years? a. What were the primary factors you considered in making an assessment of the probability that additional sources of public funding for LUST cleanups will become available in the next 5 years? 20. Among the current sources of public funding for LUST cleanups, which sources, if any, do you believe have a high probability of no longer being available in the next 5 years, and why not? a. 
What were the primary factors you considered in making an assessment of the probability that any of the current sources of public funding for LUST cleanups will no longer be available in the next 5 years? Tanks without a Viable Owner For purposes of this survey, consider tanks among your state’s cleanup backlog without a viable owner to be those tanks where, as of September 30, 2005, the responsible party was unknown, unwilling, or unable to perform the needed cleanup (for example, “orphaned” or abandoned tanks). 21. How many releases in your state’s cleanup backlog, identified above, come from tanks without a viable owner? Is this an exact number or an estimate? Number of releases ......................... a. How did you calculate the number of releases in your state’s cleanup backlog that come from tanks without a viable owner? 22. For how many releases in your state’s cleanup backlog has the state not yet determined whether a responsible party is known, willing, and able to perform the cleanup? Is this an exact number or an estimate? Number of releases .......................... a. How did you calculate the number of releases in your state’s cleanup backlog for which the state has not yet determined whether a responsible party is known, willing, and able to perform the cleanup? 23. In the past year, how much public funding from all sources was spent (in actual outlays) on cleaning up releases from USTs, both WITH and WITHOUT a viable owner? [If possible, provide these amounts for the latest federal fiscal year, 10/1/04 to Is this an exact number or an estimate? a. Public funding spent to clean up tanks WITH a viable owner ....... b. Public funding spent to clean up owner........................................... c. Is this the total amount ($0) spent (in actual outlays) in the past year for the public funding portion of cleanups at federally regulated UST sites? Yes .................... No...................... d. Why is this amount different? e. Is this amount for the latest federal fiscal year (10/1/04 to 9/30/05)? Yes .................... No...................... f. On what date does your reporting year begin? Beginning date of reporting year g. How did you calculate the amount of public funding spent to clean up releases from USTs with and without a viable owner? 24. Based on the backlog of releases from USTs without a viable owner as of September 30, 2005, identified above [ ], how much do you estimate it will cost to complete the remainder of the cleanups for all of these releases? None ................. Go to Q25. Don’t know....... Go to Q25. a. How did you estimate the cost to complete the remainder of the cleanups for the backlog of releases from USTs without a viable owner? For the purposes of the following questions, please consider two types of state cleanup funds: (1) state financial assurance funds that cover the cleanup of contamination from federally regulated USTs, which may or may not include tanks without a viable owner, and (2) funds that are devoted solely to the cleanup of contamination from tanks without a viable owner. In this survey we ask about both types of funds as applicable to your state. 25. Has your state EVER had a state financial assurance fund, as defined in (1) above? Yes ....................... No......................... Skip to Q37. 26. What was the status of your state’s financial assurance fund as of September 30, 2005? Not applicable, never had this type of fund....................... Skip to Q37. 
Fund is no longer active ....................................................... Skip to Q37. Accepting and paying all valid claims without restriction Go to Q27 Accepting and paying claims with some restrictions ....... a. Is your state fund limiting the number of claims it accepts based on the amount of funds it has available to pay for those claims? Yes ....................... No......................... b. Is your state fund setting priorities for paying claims to conserve funds? Yes ....................... No......................... c. Is your state’s financial assurance fund ONLY accepting claims for releases that occurred before or after a certain date (e.g., an eligibility sunset date)? No ....................... Yes...................... d. What are the eligibility dates for releases? [Fill in either or both dates as applicable.] [Enter mm/dd/yy.] [Enter mm/dd/yy.] e. Are there other restrictions on accepting and paying claims? 27. Please describe the deductible amount(s) paid by responsible parties and any maximum amount the state financial assurance fund will pay for each release from a federally regulated UST. 28. How much was deposited into your state’s financial assurance fund in the past year? Is this an exact number or an estimate? Amount deposited........................... a. Is this amount for the latest federal fiscal year (10/1/04 to 9/30/05)? Yes......................... No .......................... b. On what date does your reporting year begin? Beginning date of reporting year 29. How much of the amount deposited into your state’s financial assurance fund in the past year was from each of the following sources? Is this an exact number or an estimate? a. Flat rate fees assessed on tanks.. b. Fees/taxes assessed on a per-unit basis on fuel(s)........................... c. Interest ....................................... d. Cost recovery ............................. . . . 30. What do you anticipate will happen to revenues to your state's financial assurance fund over the next 5 years, compared with the annual revenues accrued to the fund in the past year? Large increase................ Moderate increase......... Stay about the same...... Moderate decrease........ Large decrease............... a. What were the primary factors you considered in making this assessment? 31. What was the overall balance of your state’s financial assurance fund as of September 30, 2005? Is this an exact number or an estimate? Is this an exact number or an estimate? Obligated........................................ 32. What was the amount of the outstanding claims (claims received by the state program for which funds have not yet been obligated) on your state’s financial assurance fund as of September 30, 2005? Is this an exact number or an estimate? Outstanding claims ......................... $ 33. During the past 5 years, what amount of funding, if any, did your state divert from its financial assurance fund for purposes other than those related to the UST program? Is this an exact number or an estimate? Is this an exact number or an estimate? Reimbursed ................................. g. If applicable, for what purposes did your state divert UST financial assurance funds? 34. As of September 30, 2005, had your state decided to stop accepting new claims against the state financial assurance fund after a certain date? Date after which claims will no longer be accepted No decision made to stop accepting claims......... 35. 
As of September 30, 2005, had your state decided to stop collecting revenues for the state financial assurance fund after a certain date? Date after which revenues will no longer be collected No decision made to stop collecting claims ........ 36. How capable is your state’s financial assurance fund of meeting future demands upon it? . Able to meet all......................................................................... Able to meet most .................................................................... Able to meet some ................................................................... Not able to meet any................................................................ a. What were the primary factors you considered in making this assessment? State Fund Dedicated to Tanks without a Viable Owner 37. What was the status of your state’s fund dedicated to tanks without a viable owner as of September 30, 2005? . Not applicable, never had this type of fund.......................... Skip to Q47. Fund is no longer active .......................................................... Skip to Q47. Accepting and paying all valid claims without restriction.. Go to Q38 Accepting and paying claims with some restrictions .......... a. Is your state fund limiting the number of claims it accepts based on the amount of funds it has available to pay for those claims? Yes ....................... No......................... b. Is your state fund setting priorities for paying claims to conserve funds? Yes ....................... No......................... c. Is your state’s fund dedicated to tanks without a viable owner ONLY accepting claims for releases that occurred before or after a certain date (e.g., an eligibility sunset date)? No...................... Yes .................... d. What are the eligibility dates for releases? [Fill in either or both dates as applicable.] [Enter mm/dd/yy.] [Enter mm/dd/yy.] e. Are there other restrictions on accepting and paying claims? 38. How much was deposited into your state’s fund dedicated to tanks without a viable owner in the past year? Is this an exact number or an estimate? Amount deposited........................... a. Is this amount for the latest federal fiscal year (10/1/04 to 9/30/05)? Yes .................... No...................... b. On what date does your reporting year begin? Beginning date of reporting year 39. Approximately how much of the amount deposited into your state’s fund dedicated to tanks without a viable owner in the past year was from each of the following sources? Is this an exact number or an estimate? a. Flat rate fees assessed on tanks....... b. Fees/taxes assessed on a per-unit basis on fuel(s)................................ c. Interest ............................................ d. Cost recovery .................................. ...... ...... ...... 40. What do you anticipate will happen to revenues to your state's fund dedicated to tanks without a viable owner over the next 5 years, compared with the annual revenues accrued to the fund in the past year? Large increase................ Moderate increase......... Stay about the same...... Moderate decrease........ Large decrease............... a. What were the primary factors you considered in making this assessment? 41. What was the overall balance of your state’s fund dedicated to tanks without a viable owner as of September 30, 2005? Is this an exact number or an estimate? Is this an exact number or an estimate? Obligated ...................................... 42. 
What was the amount of the outstanding claims (claims received by the state program for which funds have not yet been obligated) on your state’s fund dedicated to tanks without a viable owner as of September 30, 2005? Is this an exact number or an estimate? $ 43. During the past 5 years, what amount of funding, if any, did your state divert from its fund dedicated to tanks without a viable owner for purposes other than those related to the UST program? Is this an exact number or an estimate? a. 2001.......................................... b. 2002.......................................... c. 2003.......................................... d. 2004.......................................... e. 2005.......................................... f. How much, if any, of the total amount diverted over the past 5 years had been reimbursed to the state fund dedicated to tanks without a viable owner as of September 30, 2005? Is this an exact number or an estimate? Reimbursed ................................. g. If applicable, for what purposes did your state divert funds from the fund dedicated to tanks without a viable owner? 44. As of September 30, 2005, had your state decided to stop accepting new claims against the state fund dedicated to tanks without a viable owner after a certain date? Date after which claims will no longer be accepted No decision made to stop accepting claims......... 45. As of September 30, 2005, had your state decided to stop collecting revenues for the state fund dedicated to tanks without a viable owner after a certain date? Date after which revenues will no longer be collected No decision made to stop collecting claims ........ 46. How capable is your state’s fund dedicated to tanks without a viable owner of meeting future demands upon it? . Able to meet all......................................................................... Able to meet most .................................................................... Able to meet some ................................................................... Not able to meet any................................................................ Is this an exact number or an estimate? b. Enforcement costs .................... c. Site cleanup costs ..................... d. Total costs ................................ e. Are these amounts for the latest federal fiscal year (10/1/04 to 9/30/05)? Yes .................... No...................... f. On what date does your reporting year begin? Beginning date of reporting year 49. According to EPA, states may sometimes not spend a given year’s entire LUST Trust Fund award in the year the funds are provided. As of September 30, 2005, what was the state’s unobligated balance of federal LUST Trust Funds, if any? Is this an exact number or an estimate? State’s unobligated balance of federal LUST Trust Funds .............. 50. Is there any additional information about the LUST Trust Fund amounts listed above that would help inform our interpretation of your responses to these questions? 51. How do responsible parties who have active federally regulated USTs in your state demonstrate financial responsibility? Please list the number of tanks covered by the various available financial responsibility mechanisms below. Is this an exact number or an estimate? a. State financial assurance fund......... b. Financial test of self-insurance ....... c. Corporate guarantee ........................ d. Insurance coverage.......................... e. Surety bond ..................................... f. 
Letter of credit................................. g. Trust fund set up by owner or operator ........................................ h. Bond rating test (local government only) ............................................. i. Financial test (local government only) ............................................. j. Guarantee from another local government or the state (local government only) ......................... k. A dedicated fund (local government only) ............................................. l. ................ 52. Is there any additional information about how you calculated the number of tanks covered by the various financial responsibility mechanisms that would help inform our interpretation of your responses? 53. Which of the following describe your state’s procedures for determining whether a tank owner’s financial responsibility is current? a. Attempt to check financial responsibility on a regular basis (e.g. during regular inspections) ................................................................................... b. Target inspections (e.g. to USTs deemed likely to not be current on financial responsibility) ............................................................................. c. As events warrant (e.g. upon tank installation or upgrade, upon a release).. d. State does not check financial responsibility ................................................ ....................................................................... 54. How frequently, if at all, does your state check whether a tank owner’s financial responsibility is current? At least annually ................................................................................................ Every 1 to 2 years .............................................................................................. Every 3 years or longer...................................................................................... State does not check financial responsibility..................................................... ........................................................................... 55. Over the past 5 years, how many cases has your state encountered in which tank owners did not have adequate financial responsibility? Is this an exact number or an estimate? Number of cases ............................ a. How did you calculate the number of cases that your state has encountered over the past 5 years in which tank owners did not have adequate financial responsibility? 56. Does your state ever impose penalties on responsible parties for multiple releases from federally regulated USTs? Yes ....................... No......................... Go to next question. a. What are the penalties and the circumstances under which they would be imposed? 57. Are there any issues that have not been covered in this survey that you anticipate affecting the availability of public finding for cleanups in the next 5 years? 58. Are there any additional comments you wish to make regarding the issues in this survey or other matters related to USTs? Thank you for completing the survey! Please save this file now and send an e-mail with your saved questionnaire file and supporting documentation as an attachment to: [email protected]. In addition to the individual named above, Vincent P. Price, Assistant Director; Krista Breen Anderson; Jenny Chanley; Richard P. 
Johnson; Jerry Laudermilk; Jennifer Lutzy McDonald; Anne McDonough-Hughes; Rebecca Shea; Carol Herrnstadt Shulman; Dominique Sasson; and Nico Sloss made key contributions to this report.
Underground storage tanks that leak hazardous substances can contaminate nearby groundwater and soil. Under the Resource Conservation and Recovery Act (RCRA), tank owners and operators are primarily responsible for paying to clean up releases from their tanks. They can demonstrate their financial responsibility by using, among other options, publicly funded state financial assurance funds. Such funds function like insurance and are intended to ensure timely cleanup. These funds also pay to clean up releases from tanks without a viable owner, as does the federal Leaking Underground Storage Tank (LUST) Trust Fund. GAO was asked to report on (1) states' estimates of the public costs to clean up known releases, (2) states' primary sources of cleanup funding and their viability, and (3) the federal funding available to address these releases. GAO surveyed all states and discussed key issues with EPA and selected state officials. States estimated that fully cleaning up about 54,000 of the approximately 117,000 releases (leaks) known to them as of September 30, 2005, will cost about $12 billion in public funds. The Environmental Protection Agency (EPA) estimates that it costs an average of about $125,000 to fully clean up a release. State officials said that tank owners or operators will pay to clean up most of the remaining 63,000 releases. However, an unknown number of releases lack a viable owner, and the full extent of the cost to clean them up is unknown. A tank owner may not be viable because the owner fails to maintain adequate financial responsibility coverage, which is intended to provide some assurance that the owner has access to funds to pay for cleanups. While 16 states require annual proof of coverage, 25 states check owners' coverage less often or not at all. Furthermore, 43 states expect to confirm about 16,700 new releases in the next 5 years that will require at least some public funds for cleanup. States reported that they primarily use financial assurance funds to pay the costs of cleaning up leaks; they spent an estimated $1.032 billion from these funds to clean up tank releases in 2005. Overall, fund revenues totaled about $1.4 billion in 2005, of which about $1.3 billion came from state gasoline taxes. The assurance funds in the 39 states for which GAO has information held an estimated $1.3 billion as of September 30, 2005, according to state officials. However, many states also use these funds to clean up releases from sources other than underground tanks. Several state assurance funds may lack sufficient resources to ensure timely cleanups. While EPA monitors the status of state funds, its method of monitoring the soundness of these funds has limitations. Furthermore, there are concerns that, because tank owners pay only a relatively small deductible while state financial assurance funds pay the bulk of the cleanup costs, owners may have little incentive to prevent releases. In addition to their own funds, states employ resources from the LUST Trust Fund, the primary federal source of funds for cleaning up releases from underground storage tanks. As of September 30, 2005, the fund balance was about $2.5 billion. For fiscal year 2005, the Congress appropriated about $70 million from the fund to help EPA and the states clean up releases and to oversee cleanup activities. EPA distributed about $58 million of this amount to the states to investigate and clean up releases and conduct enforcement efforts, among other actions.
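The survey's backlog measure (question 9 in the reproduced survey) and the nationwide split summarized above can be restated as simple arithmetic. In the sketch below, the single-state backlog numbers are hypothetical; the 117,000, 54,000, 63,000, and $12 billion figures come from the summary above.

```python
# Worked sketch of the arithmetic behind the figures summarized above. The backlog
# definition comes from the survey; the per-state example numbers are hypothetical.

def cleanup_backlog(confirmed_releases, cleanups_completed):
    """Backlog as computed in the survey: confirmed releases minus cleanups completed."""
    return confirmed_releases - cleanups_completed

# Hypothetical single-state example of the backlog calculation
print(cleanup_backlog(confirmed_releases=4150, cleanups_completed=3900))  # 250

# National figures as of September 30, 2005 (from the summary above)
known_releases = 117_000          # releases known to states and not yet fully addressed
needs_public_funding = 54_000     # releases expected to require at least some public funds
paid_by_owners = known_releases - needs_public_funding
print(paid_by_owners)             # 63000 releases expected to be paid for by owners/operators

estimated_public_cost = 12_000_000_000  # states' estimated public cost for the 54,000 releases
```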
To distribute LUST Trust Fund money among the states, EPA uses a formula that includes a base amount for each state and factors to recognize states' needs and past cleanup performance. However, although the LUST Trust Fund provides funds to states to assist in addressing releases from tanks without a viable owner, EPA has not incorporated this factor into its formula. Furthermore, EPA's information on states' performance comes from state reports; however, GAO found that some of the information in these reports is inaccurate and inconsistent.
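Because the report describes EPA's distribution formula only in general terms (a base amount for each state plus factors recognizing need and past cleanup performance), the following is a hypothetical sketch of an allocation of that general shape. The base amount, weights, and state factors are assumptions and do not represent EPA's actual formula; only the roughly $58 million distributed to states in fiscal year 2005 comes from the report.

```python
# Hypothetical sketch of a base-plus-factors allocation of LUST Trust Fund money.
# Weights, factors, and the base amount are illustrative assumptions.

states = {
    # state: (need_factor, performance_factor), e.g., share of national backlog and
    # share of national cleanups completed (illustrative values that each sum to 1.0)
    "State A": (0.50, 0.30),
    "State B": (0.30, 0.50),
    "State C": (0.20, 0.20),
}

appropriation = 58_000_000   # amount distributed to states in FY 2005 (from the report)
base_amount = 1_000_000      # assumed per-state base
need_weight, performance_weight = 0.6, 0.4   # assumed weights

remaining = appropriation - base_amount * len(states)

allocations = {}
for state, (need, performance) in states.items():
    share = need_weight * need + performance_weight * performance
    allocations[state] = base_amount + remaining * share

print({state: round(amount) for state, amount in allocations.items()})
# Because the weighted shares sum to 1.0 here, the allocations sum to the appropriation.
```

In this sketch, taking releases from tanks without a viable owner into account, as recommended above, would amount to adding another weighted factor to the share calculation.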
Since 1982, the federal government has passed a number of laws that address the role of the crime victim in the criminal justice system, including the Victim and Witness Protection Act of 1982, Victims of Crime Act of 1984, Victims’ Rights and Restitution Act of 1990, Violent Crime Control and Law Enforcement Act of 1994, Mandatory Victims Restitution Act of 1996, Victim Rights Clarification Act of 1997, and Crime Victims’ Rights Act of 2004. Several of these statutes provided crime victims with rights, but they also directed federal officials to provide victims with various services, such as notification of certain public court proceedings. In particular, the Victims’ Rights and Restitution Act of 1990 identified crime victims’ rights, delineating seven such rights and requiring federal officials to make their best efforts to see that crime victims are accorded these rights. The 1990 law also included a separate provision, codified at 42 U.S.C. § 10607, that requires federal officials to identify crime victims and provide them information about their cases and about services that may be available to them. For example, the law requires officials to inform victims of a place where they may receive emergency medical and social services, to inform victims of programs that are available to provide counseling, treatment, and other support to the victim, and to assist victims in contacting persons who can provide such services. On October 30, 2004, the Crime Victims’ Rights Act, as a component of the Justice for All Act, was signed into law. The CVRA left in place 42 U.S.C. § 10607—the provision requiring federal officials to inform victims about their cases and about services available to them—but the CVRA modified the provision from the 1990 law regarding crime victims’ rights and identified eight rights for federal crime victims, some of which were similar to the rights from the 1990 law and others of which were new. The CVRA provided that crime victims have the following rights: the right to be reasonably protected from the accused; the right to reasonable, accurate, and timely notice of any public court proceeding, or any parole proceeding, involving the crime or of any release or escape of the accused; the right not to be excluded from any such public court proceeding, unless the court, after receiving clear and convincing evidence, determines that testimony by the victim would be materially altered if the victim heard other testimony at that proceeding; the right to be reasonably heard at any public proceeding in the district court involving the release, plea, sentencing, or any parole proceeding; the reasonable right to confer with the attorney for the government in the case; the right to full and timely restitution as provided in law; the right to proceedings free from unreasonable delay; and the right to be treated with fairness and with respect for the victim’s dignity and privacy. The CVRA also established two mechanisms to ensure adherence to victims’ rights under the law, neither of which had been available under previous statutes. Specifically, to ensure that DOJ employees are complying with CVRA requirements, the law directed DOJ to designate an administrative authority to receive and investigate complaints relating to the provision or violation of crime victims’ rights. To comply with this provision in the statute, DOJ issued regulations creating the Victims’ Rights Ombudsman. 
The VRO is a position within the Executive Office of United States Attorneys—the DOJ division responsible for facilitating coordination between USAOs, evaluating USAO performance, and providing general legal interpretations and opinions to USAOs, among other things. Federal crime victims may submit written complaints to the designated point of contact for the DOJ division that is the subject of the complaint, who then investigates the complaint and reports the results of the investigation to the VRO. Victims may also submit complaints directly to the VRO. If the VRO finds that an employee failed to afford a CVRA right to a victim, the VRO must require that employee to undergo training on victims’ rights. If based on an investigation the VRO determines that an employee willfully and wantonly failed to provide a victim with a CVRA right, the VRO must recommend a range of disciplinary sanctions to the official authorized to take action on disciplinary matters for the relevant office. The CVRA does not require DOJ employees to provide relief to victims whose rights have been violated, but the VRO guidelines do require investigators, to the best of their ability, to resolve complaints to the victims’ satisfaction. The CVRA also enables victims to assert their rights in district court by filing a motion—which they can do either verbally or per a written request—with the court. Unlike the complaint process, this mechanism allows victims to assert their rights and seek relief from the court, and can be employed not only when victims believe that a DOJ employee violated their rights, but when they have general concerns regarding the provision of their rights. If the district court denies the victim’s request regarding the provision of CVRA rights—such as a request to be heard at a hearing—the victim can petition the court of appeals for a writ of mandamus. Thus, if the court of appeals grants the victim’s petition, it may direct the district court to take actions to afford CVRA rights to the victim. Petitions for writs of mandamus can be filed at any point in the case. The CVRA authorized appropriations for fiscal years 2005 through 2009. However, it is unclear whether and exactly how much of this funding was appropriated because funds that may have been appropriated under the CVRA were likely appropriated in a lump sum with funds for other victim assistance and grant programs. The authorized amounts, years, and purposes are listed in table 1. DOJ and the federal judiciary have made various efforts to implement the CVRA—from revising internal guidelines and developing training materials for DOJ staff and judges to providing victims with emergency, temporary housing in some cases to protect them from the accused offender and proactively asking victims if they would like to speak in court. Additionally, DOJ and the federal judiciary have taken actions to address four factors that have affected CVRA implementation, including the characteristics of certain cases, the increased workload of some USAO staff, the scheduling of court proceedings, and diverging interests between the prosecution and victims. First, the characteristics of certain cases, such as the number of victims involved and the location of the victims, make it difficult to afford victims certain CVRA rights. For instance, USAO staff stated that it can be difficult to provide timely notification of court proceedings to victims located on Indian reservations because the victims may not have access to a mailbox, a telephone, or the Internet. 
To address this challenge, victim-witness personnel said that they have driven to Indian reservations to personally inform victims of upcoming court proceedings. Second, due to CVRA requirements, particularly notification requirements, USAO victim-witness staff face an increased workload—about 45 percent of staff who responded to our survey reported working an average of about 6 additional hours per week to meet CVRA requirements. DOJ has made efforts to address this issue by providing funding to 41 of the 93 USAOs to hire contractors to assist with clerical duties related to victim notification. Third, inherent characteristics of the criminal justice process, such as the short period of time over which pretrial proceedings are scheduled and take place, make it difficult to provide timely notice to crime victims and afford them their right to be heard. For example, according to the investigative agents, USAO staff, and one magistrate judge with whom we met, a detention hearing—which is a judicial proceeding used to determine whether a defendant should remain in custody before her or his trial—typically takes place within a few days of an arrest (as generally required by federal law), and in certain situations, can occur within hours of an arrest. When faced with this challenge, USAO victim-witness personnel said that they have notified victims of court proceedings by telephone rather than by mail, which may not arrive in time to enable the victim to attend the proceeding. Fourth, diverging interests between the prosecution and victims may affect the way in which the government affords victims their CVRA rights. For instance, according to DOJ, it is not always in the interest of a successful prosecution for victims to be notified of and attend a plea hearing for a cooperating defendant who agrees to testify against or provide information about other defendants in the case in exchange for a lesser sentence. The concern is that public knowledge of the defendant’s cooperation could compromise the investigation, as well as bring harm to the defendant and others. DOJ officials stated that this issue occurs frequently in gang-related prosecutions, where, for instance, the victim is a member of the defendant’s rival gang. DOJ’s efforts to address this issue include requesting that the court close plea agreement proceedings—which may prevent the victim from attending such proceedings since victims’ right not to be excluded only applies to public court proceedings—and proposing legislation to revise the CVRA to allow for an exception to victims’ notification rights in these instances. To enforce the provisions of the CVRA, the act established two mechanisms to help victims ensure that their rights are granted. These mechanisms include processes by which victims can submit complaints against DOJ employees whom they believe violated their rights and file motions in court related to their rights. However, many of the victims who responded to our survey reported that they were not aware of these enforcement mechanisms. Of the more than 1.1 million federal crime victims who, as of September 4, 2009, were identified in DOJ’s Victim Notification System as having active cases, the Victims’ Rights Ombudsman—DOJ’s designated authority to receive and investigate federal crime victim complaints regarding employee compliance with the CVRA—received 259 written complaints from December 2005 through August 2009. 
The VRO closed 235 complaints following a preliminary investigation, primarily because the complaints were related to a state or local matter as opposed to a federal matter or it was determined that the individual was not a federal crime victim. Lastly, the VRO determined that of the 19 complaints that warranted further investigation, in no instance did a DOJ employee or office fail to comply with the provisions of the law pertaining to the treatment of these federal crime victims. We did not make a judgment on the reasonableness of the VRO’s rationale for dismissing these complaints because we did not conduct an independent investigation of each complaint. Several contributing factors most likely explain the low number of complaints filed by federal crime victims against DOJ employees. First, DOJ officials believe few victims have filed complaints because victims are generally satisfied with DOJ’s efforts to afford them their rights. Second, USAO officials we spoke with have made efforts to resolve complaints directly before they reached a point where a victim would file a complaint with the VRO. Third, victims reported a lack of awareness about the complaint process itself. Specifically, 129 of the 235 victims who responded to our survey question regarding the complaint process reported that they were not aware of it, and 51 did not recall whether they were aware. USAOs have been directed to take reasonable steps to provide notice to victims of the complaint process, and they generally do so through a brochure provided to victims at the beginning of the case. However, DOJ has opportunities to enhance victim awareness of the complaint process, such as by making greater use of office Web sites to publicize the process or, when appropriate, personally informing victims. If victims are not aware of the complaint process, it becomes an ineffective method for ensuring that the responsible DOJ officials are complying with CVRA requirements and that corrective action is taken when needed. Therefore, in our December 2008 report, we recommended that DOJ explore opportunities to enhance publicity of the victim complaint process to help ensure that all victims are made aware of it. In commenting on a draft of our report, DOJ stated that it agreed that victims should be well-informed of the complaint process and intended to take steps to enhance victim awareness. However, as of September 11, 2009, DOJ had not yet determined what steps are most appropriate, but hopes to make this decision by the end of the year. Even if victims submit complaints to DOJ regarding their CVRA rights, the lack of independence within the complaint investigation process could compromise impartiality of the investigation. Professional ombudsman standards for investigating complaints against employees, as well as the practices of other offices that investigate complaints, suggest that the investigative process should be structured to ensure impartiality. For example, in practice, the investigators are generally not located in the same office with the subject of the investigation, in order to avoid possible bias. DOJ’s Office of Professional Responsibility, which investigates other types of complaints against DOJ employees, also does not use investigators who are located in the same office with the subject of the complaint. However, under DOJ’s victim complaint investigation process, the two are generally located in the same office. 
In addition, in some instances the DOJ victim complaint investigator has been the subordinate or peer of the subject of the complaint. According to DOJ officials, the department structured the victim complaint investigation process as such due to resource constraints and the perception that complaints could be resolved more quickly if addressed locally. However, this structure gives the appearance of bias in the investigation, which raises questions as to whether DOJ employees’ violation of victims’ rights will be overlooked and employees will not receive appropriate training on the treatment of crime victims or disciplinary sanctions. In our December 2008 report, we recommended that DOJ restructure the process for investigating federal crime victim complaints in a way that ensures independence and impartiality, for example, by not allowing individuals who are located in the same office with the subject of the complaint to conduct the investigation. In commenting on a draft of our report, DOJ stated that it recognized the benefits of having an investigation process that ensures independence and impartiality and that the working group, in consultation with the VRO, would explore several options that will address this concern. Subsequently, DOJ reported that on July 31, 2009, the VRO issued guidance to ensure that complaint investigators refer to the VRO any complaint where the investigator’s review of the complaint would raise an actual or apparent conflict of interest. If the VRO determines that such a conflict exists, the VRO would consider reassigning the complaint to someone in a different office for investigation. Among the hundreds of thousands of cases filed in the U.S. district courts in the nearly 5-year period since the CVRA was enacted, we found 49 instances in which victims, or victims’ attorneys or prosecutors on behalf of victims, asserted CVRA rights by filing a motion—either verbally or in writing—with the district court. We also found 27 petitions for writs of mandamus that were filed with the appellate courts, the majority of which were in response to motions previously denied in the district court. Table 2 summarizes the number of times CVRA rights were asserted in the district and appellate courts and how the courts ruled in those instances. Victim attorneys and federal judicial officials gave several potential reasons for the low number of victim motions, including victims being satisfied with how they were treated and victims either being intimidated by the judicial process or too traumatized by the crime to assert their rights in court. However, the most frequently cited reason for the low number of motions was victims’ lack of awareness of this enforcement mechanism. The results of our victim survey also suggest that victims lack this awareness. Specifically, 134 of the 236 victims who responded to our survey question regarding filing motions reported that they were not aware of their ability to file a motion to assert their rights in district court, and 48 did not recall whether they were aware. DOJ generally does not inform victims of their ability to assert their rights in court. While the CVRA does not explicitly require DOJ to do so, the law does direct DOJ to inform victims of their eight CVRA rights and their ability to seek the advice of an attorney. Thus, DOJ may be the most appropriate entity to inform victims of this provision as well. 
In addition, DOJ’s guidelines state that responsible officials should provide information to victims about their role in the criminal justice process, which could include their ability to file motions with regard to their CVRA rights. If victims are not aware of their ability to assert their rights in court, it will reduce the effectiveness of this mechanism in ensuring adherence to victims’ rights and addressing any violations. In our December 2008 report, we recommended that DOJ establish a mechanism for informing all victims of their ability to assert their CVRA rights by filing motions and petitions for writs of mandamus, such as by incorporating this information into brochures and letters sent to victims and on agency Web sites. In commenting on a draft of our report, DOJ stated that it agreed that victims should be well-informed of their ability to assert their CVRA rights in district court and intended to take steps to enhance victim awareness. However, as of September 11, 2009, DOJ had not yet decided upon an approach for enhancing victim awareness, but hopes to make this decision by the end of the year. Several key issues have arisen as courts interpret and apply the CVRA in cases, including (1) when in the criminal justice process CVRA rights apply, (2) what it means for a victim to be “reasonably heard” in court proceedings, (3) which standard should be used to review victim appeals of district court decisions regarding CVRA rights, and (4) whether the CVRA applies to victims of local offenses prosecuted in the District of Columbia Superior Court. First, the courts have issued varied decisions regarding whether CVRA rights apply to victims of offenses that DOJ has not charged in court, stating that the law applies in some circumstances and not in others. While some courts have stated that CVRA rights do not apply unless charges have been filed, other courts have stated that certain CVRA rights, under particular circumstances, may apply to victims of offenses that are investigated but have not been charged in court. In implementing the CVRA, DOJ has specified in its guidelines that CVRA rights do not apply unless charges have been filed against a defendant, based on its initial interpretation of the law, but is reviewing its policy in response to a court ruling in 2008. On September 11, 2009, DOJ informed us that the department was initiating a review of the Attorney General Guidelines for Victim and Witness Assistance—which provides guidance to DOJ prosecutorial, investigative, and correctional components related to the treatment of crime victims—and any changes to the department’s position on when CVRA rights apply would be reflected in the revised guidance. DOJ is uncertain when the revised guidelines will be issued. Second, the courts have issued varied rulings that interpret the meaning of the right to be “reasonably heard” at court proceedings, with, for example, one court ruling that the right to be heard gave victims the right to speak and another ruling that the right could be satisfied by a written statement, given the specific facts of the case. Third, the courts have differing interpretations regarding which standard should be used to review victim appeals of district court decisions regarding CVRA rights. Typically, when a party appeals a district court decision to a court of appeals, the court of appeals reviews the district court decision using what may be called the ordinary appellate standard of review. 
Under this standard, the court of appeals reviews the district court decision for legal error or abuse of discretion. In contrast to an appeal, a petition for a writ of mandamus is a request that a superior court order a lower court to perform a specified action, and courts of appeals review these petitions under a standard of review that is stricter than the ordinary appellate standard of review. Under the standard traditionally used to review petitions for writs of mandamus, petitioners must show that they have no other adequate means to attain the requested relief, that the right to the issuance of the writ is clear and indisputable, and that the writ is appropriate under the circumstances. As of July 2008, 4 of the 12 circuits were split on which standard of review should be used to review petitions for writs of mandamus under the CVRA. When new legislation is enacted, the courts typically interpret the law’s provisions and apply the law as cases arise. As rulings on these cases are issued, the courts build a body of judicial decisions—known as case law— which helps further develop the law. The issues discussed above have arisen as cases have come before the courts, largely via motions and petitions for writs of mandamus under the CVRA, and the rulings on these issues will likely contribute to the further development of case law related to the CVRA. However, DOJ and D.C. Superior Court officials stated that a statutory change would be beneficial in resolving the issue of CVRA applicability to the D.C. Superior Court. The CVRA defines a crime victim as “a person directly and proximately harmed as a result of a federal offense or an offense in the District of Columbia.” At the same time, multiple provisions of the CVRA refer to district courts, which do not include the D.C. Superior Court. While it is apparent that the CVRA applies to victims whose federal offenses are prosecuted in the U.S. district court in the District of Columbia, the CVRA is not explicit about whether the law applies to victims of local offenses prosecuted in the D.C. Superior Court. As a result, some judges in the D.C. Superior Court are applying the CVRA, and others are not. In implementing the CVRA, DOJ operates as if the CVRA applies to victims of local offenses in the District of Columbia, and in July 2005, DOJ proposed legislation to clarify whether the CVRA applies to cases in the D.C. Superior Court, but no legislation had been passed. Without clarification on this issue, the question of whether the D.C. Superior Court has responsibility to implement the CVRA will remain, and judges in the D.C. Superior Court may continue to differ in whether they apply the law in their cases. As a result, victims may be told they are entitled to CVRA rights by DOJ, but whether they are afforded these rights in Superior Court proceedings will depend on which judge is presiding over their case. In our December 2008 report, we suggested that Congress consider revising the language of the CVRA to clarify this issue. As of September 2009, no related legislation had been introduced. Perceptions are mixed regarding the effect and efficacy of the implementation of the CVRA, based on factors such as awareness of CVRA rights; victim satisfaction, participation, and treatment; and potential conflicts of the law with defendants’ interests. 
For example, while a majority of federal crime victims who responded to our survey reported that they were aware of most of their CVRA rights, less than half reported that they were aware of their right to confer with the prosecutor. In addition, victims who responded to our survey reported varying levels of satisfaction with the provision of individual CVRA rights. For instance, 132 of the 169 victims who responded to the survey question regarding satisfaction with their right to notice of public court proceedings reported being satisfied with the provision of this right. In contrast, only 72 of the 229 victims who responded to the survey question regarding satisfaction with the right to confer with the prosecutor reported being satisfied with the provision of this right. The general perception among the criminal justice system participants we spoke with and surveyed is that CVRA implementation has improved the treatment of crime victims, although many also believe that victims were treated well prior to the act because of the influence of well-established victims’ rights laws at the state level. Furthermore, while 72 percent of the victim-witness personnel who responded to our survey perceived that the CVRA has resulted in at least some increase in victim attendance at public court proceedings, 141 of the 167 victims who responded to our survey question regarding participation reported that they did not attend any of the proceedings related to their cases, primarily because the location of the court was too far to travel or they were not interested in attending. Finally, defense attorneys and representatives of organizations that promote the enforcement of defendants’ rights expressed some concerns that CVRA implementation may pose conflicts with the interests of defendants. For example, victims have the right not to be excluded from public court proceedings unless clear and convincing evidence can be shown that their testimony would be materially altered if they heard the testimony of others first. However, 5 of the 9 federal defenders and 6 of the 19 district judges we met with said that it would be very difficult, if not impossible, to provide such evidence that the victim’s testimony would be materially altered. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For questions about this statement, please contact Eileen R. Larence at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Kristy N. Brown, Assistant Director; Tracey King; and Susan Sachs. Additionally, key contributors to our December 2008 report include Lisa Berardi Marflak, David Schneider, Matthew Shaffer and Johanna Wong, as well as David Alexander, Stuart Kaufman, and Adam Vogt. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
On October 30, 2004, the Crime Victims' Rights Act (CVRA) was enacted, establishing eight rights for federal crime victims and two mechanisms to enforce those rights. The legislation also directed GAO to evaluate the implementation of the CVRA. To address this mandate, GAO reviewed, among other things: (1) efforts made to implement the CVRA, (2) mechanisms in place to ensure adherence to the CVRA, (3) key issues that have arisen in the interpretation of the CVRA by the federal courts, and (4) perspectives of criminal justice system participants on the CVRA. This testimony is based on GAO's December 2008 report on the CVRA, for which GAO reviewed guidance and conducted surveys and interviews with criminal justice system participants. GAO cannot generalize its crime victim survey results due to a low response rate. In September 2009, GAO obtained updated information on victims' efforts to enforce their rights. To implement the CVRA, the Department of Justice (DOJ) and the federal judiciary have, among other things, revised internal guidelines, trained DOJ staff and judges, provided victims with emergency, temporary housing to protect them, and proactively asked victims if they would like to speak in court. DOJ and the courts have also implemented two mechanisms to ensure adherence to the CVRA, including processes for victims to submit complaints against DOJ employees and assert their rights in court; however, the majority of victims who responded to GAO's survey said they were not aware of these mechanisms. If victims are not aware of these enforcement mechanisms, they will not be effective at helping to ensure victims are afforded their rights. GAO also found that DOJ's complaint investigation process lacked independence, impeding impartiality. In July 2009, in response to our recommendation, DOJ revised its victim complaint investigation process such that if investigators who are located in the same office with the subject of the investigation believe that their review of the complaint could bias the investigation or give the appearance of this, they are instructed to inform a designated official at DOJ headquarters. This official may suggest that the complaint be investigated by another DOJ office. Several key issues have arisen that require the courts to interpret various provisions of the law, including (1) when in the criminal justice process CVRA rights apply, (2) what it means for a victim to be "reasonably heard" in court, and (3) what legal standard should be used to review victim appeals of district court decisions. While judicial interpretation of various aspects of a law typically occurs after new legislation is enacted, DOJ and court officials believe that one CVRA issue may benefit from a change to the law itself. The CVRA is not explicit about whether the law applies to victims of local offenses prosecuted in the District of Columbia Superior Court. Without clarification on this issue, judges in this court may continue to differ in whether they apply the CVRA in their cases. As to the overall impacts of the CVRA, the victims as well as the DOJ and judicial officials GAO interviewed had mixed perceptions. Most maintained that the CVRA has improved victim treatment. For example, 72 percent of the victim-witness professionals--individuals who are responsible for providing services to crime victims and witnesses--who responded to GAO's survey perceived that the CVRA has resulted in at least some increase in victim attendance at court proceedings. 
Other officials maintained that the federal government and the courts were already treating victims well prior to the act. Victims responding to GAO's survey also reported mixed views on their knowledge of, and satisfaction with, the provision of various rights. For example, 141 of the 167 victims who responded to GAO's survey question regarding participation in the judicial process reported that they did not attend any of the proceedings related to their cases, primarily because the location of the court was too far to travel or they were not interested in attending.
The shuttle is the only U.S. launch vehicle capable of carrying humans into space. As a result, it will be critical to the space station’s assembly and operation. From December 1997 to June 2002, NASA plans to use the shuttle primarily to transport station components into orbit for assembly. During this period, 27 of the shuttle’s 34 primary payloads are to be station-related. At times, only two of the four shuttles will be available for station assembly. One shuttle—Columbia—cannot provide adequate lift, and one of the remaining three shuttles will be undergoing scheduled maintenance during some portions of the assembly schedule. Also, most station components will have to be launched in a particular sequence to provide power and structural support for other hardware. In March 1993, the President directed NASA to redesign the space station. The new configuration—renamed the International Space Station—combines the efforts of Europe, Japan, Canada, Russia, and the United States. It also increased the station’s planned orbital inclination to make it more accessible from Russian launch sites, creating the need for additional shuttle lift capacity. Easterly shuttle launches from Kennedy Space Center take advantage of the earth’s normal west to east rotation. Launches to higher inclinations such as those needed for the space station lose some of this advantage, with a resulting loss in lift capability. In November 1993, the space station program manager requested that the space shuttle program implement modifications to provide the increased lift needed to support space station assembly. The shuttle program office responded by committing the program to increasing lift capability by at least 13,000 pounds on every station flight. The lift enhancement plan—first approved in March 1994—has been amended a number of times to introduce new ideas for achieving the required lift at the least cost. The original plan identified 13,000 pounds of added lift at a cost of about $535 million. In May 1995, NASA estimated that about 16,100 pounds of lift gain would be achieved at a cost of about $444 million. Both estimates included some recurring costs for enhancement hardware, as well as costs to integrate the enhancements, and reserves to cover the possible need for additional changes. The current plan includes about 30 individual actions that involve hardware redesign, improved navigational or flight design techniques, and new operational procedures. Figure 1 depicts the percentage of added lift NASA estimates will come from these areas, based on the May 4, 1995, approved baseline. Hardware design changes account for more than one-half of the added lift. The primary redesign program is the development of a new external fuel tank—the super lightweight tank—which involves incorporating a new aluminum alloy into the tank design. This alloy will reduce the tank’s weight and change its material properties. In addition, the tank will have to accommodate a new set of design loads created by the mix of hardware and flight design changes. Other development programs necessary to support the space station include various orbiter modifications and improved main engines. The super lightweight tank program has experienced some early development problems that could affect its performance. Shortly after beginning development of this tank, technical concerns about the properties of the new material were raised. 
An independent review of the program was performed, and based on its results, NASA adopted a more rigorous test plan for the tank and modified the tank’s production strategy. More recently, the uniqueness of the new metal caused delays in manufacturing a test article. NASA believes these early concerns have been resolved, but it recognizes that uncertainty with the development and manufacturing of the new material could ultimately reduce the amount of lift gain projected for the new tank. The main engine improvements are expected to make the engines heavier than the current engines. However, the new engines are expected to be more efficient, thus needing less propellent. They are also expected to permit occasional use at higher than normal thrust levels. Early test results indicated that the engines would not achieve all of the efficiency originally expected. NASA made additional modifications, and it now expects to achieve most of the originally predicted performance. However, as of May 1995, shuttle program officials still considered the engine development status to represent a threat to the lift gain expected from the enhancement program. An independent shuttle management review team also expressed concerns with these two programs. In its report, the team (1) concluded that the new tank had the potential for problems during development and manufacturing and (2) questioned using the improved engines for increased thrust capability. In addition to hardware redesign, NASA plans to incorporate flight design and operation enhancements. These enhancements include the use of more advanced navigational tools as well as software changes to create a more efficient trajectory. The effect of achieving greater efficiency during ascent is that less propellent would be needed. The most significant operational change involves the deletion of some of the contingency fuel, water, oxygen, and other consumables. NASA protects each mission by ensuring that there are sufficient quantities of consumables to continue the mission in the event of unexpected problems such as difficulties in docking and retrieving payloads. In the past, it has been NASA’s policy to cover nearly every possible contingency. The new policy reduces the amount of consumables by about 4,000 pounds per flight. According to NASA, the revised approach will still ensure that individual unexpected problems can be handled without jeopardizing the mission. However, the reduction in consumables increases the risk of mission failure if a combination of unexpected events occurs. Under the new policy, for example, it might not be possible to perform a second rendezvous with the station, if necessary, and, as a worst case, it could be necessary to jettison a payload before landing. NASA believes the increase in risk is minimal and cites the new policy as a means to reduce weight, increase lift, and save money. In addition, it notes that the maximum reduction in consumables will only be necessary on the heaviest of station flights. According to the program director, this change helped make it possible to terminate two of the more expensive enhancements—development of a lightweight booster and extended motor nozzle—at a savings of about $35 million. To support the first shuttle space station launch, all of the enhancement programs must be integrated and recertified into the shuttle system within a demanding schedule. 
NASA has developed a systems integration plan identifying the major events and schedules associated with the shuttle enhancement program, as currently approved. The plan describes over 200 individual events related to the development and integration of shuttle lift-increasing modifications. The events began in early calendar year 1994, and they will end with the first space station flight, which is scheduled for December 1997. The single most critical event is the delivery of the super lightweight tank, and, according to the chief engineer of the shuttle integration office, it is on a very success-oriented schedule that has already experienced some delays. While the tank’s critical design review has already been held, the final set of design loads are still being updated. Thus, many design and environment definition activities will occur in parallel. If any of the assumed design loads substantially change, additional certification cycles may have to be conducted. However, there is no schedule or budget margin that allows for major adjustments because the first tank is to be delivered only 2 to 3 months before the first launch. Based on its launch history and projected budget, the shuttle may not be able to meet the demanding launch requirements of the space station’s assembly schedule. To meet the station’s “assembly complete” milestone, shuttle officials have designed a very compressed launch schedule. During certain periods of the station’s assembly, clusters of shuttle flights are scheduled to be launched within very short time frames. The schedule calls for five launches within a 6-month period in fiscal year 2000 and seven flights during a 9-month period in fiscal year 2002. On two other occasions, three launches are scheduled in a 3-month period. This schedule equates to about 1 launch per month, or a rate of up to 12 flights a year for these periods. In addition, on two occasions, the schedule calls for launches of two missions with less than 35 days separating them. While NASA has achieved similar launch rates a few times, it will have fewer processing personnel during the space station era. The space station’s flight rate frequency cannot be met unless the orbiter is processed in 20 to 30 days less than standard. To process the orbiter at this rate, shuttle personnel will have to work overtime. However, according to operations officials, budget constraints could make it difficult to fund overtime. Because the schedule is so compressed at times, there is very little margin for error. According to shuttle and station officials, there is little flexibility in the schedule to meet major contingencies, such as late delivery of station hardware, or technical problems with the orbiters. Between December 1991 and September 1994, 9 of 22 shuttle flights slipped from the planned launch dates established 6 months before launch. The shuttle program maintained its annual flight rate, in part, by launching payloads out of sequence. However, during station assembly, most payloads must be launched in the established sequence. The Shuttle Program Director told us that he recognizes the launch schedule is tight and that if a significant delay occurs with any station flight, subsequent flights are likely to slip also. The shuttle program will be attempting to accomplish the demanding station assembly schedule with fewer resources than were available in the past. 
For example, to reduce operating costs, NASA has reduced the shuttle processing workforce at Kennedy Space Center by 1,400 people, or 20 percent, since 1992. According to a February 1995 internal workforce review, schedule risk already exists in areas such as engine testing, crew training, and flight software development, and NASA plans further funding cuts in the future. According to shuttle processing officials, NASA will reduce the shuttle processing workforce by another 900 people, or 15 percent, through fiscal year 2000. NASA continues to review all elements of shuttle operations to improve processes and increase efficiency and believes that these savings are achievable. At the time of the fiscal year 1996 budget request, estimated shuttle operations funding requirements exceeded projected budgets by at least 10 percent—a cumulative total of $1.3 billion—in fiscal years 1996 through 2000. Shuttle managers were concerned about their ability to achieve the additional funding cuts needed to meet the projected budgets. In February 1995, independent review teams recommended additional ways to reduce shuttle operations costs. NASA does not have an estimate of savings that may result from implementing the recommendations. According to the Director of Shuttle Management and Operations at Kennedy Space Center, the station’s assembly schedule will slip unless (1) NASA provides additional funds for shuttle operations or (2) more efficiencies are found. Officials in the Office of Space Flight told us that they estimated that there is a medium to high risk that the station’s assembly completion date will slip because of shuttle delays. These officials estimated that the schedule could slip about 4 to 5 months. Their estimate was based on the fact that the shuttle achieved one less flight than planned in 2 of the past 4 years. A recent internal NASA study acknowledges the possibility of a slip in the schedule. According to the April 1995 study conducted for the International Space Station Independent Assessment Office at Johnson Space Center, the shuttle cannot support the planned schedule unless additional launch resources are provided or shuttle processing methods are streamlined. The study identified a possible slip of up to 4 years in completing station assembly due to shuttle processing delays and the relatively low reliability of the Russian Zenit launch vehicle. According to the study, shuttle processing presents the largest schedule risk. To meet the manifest, NASA would have to reduce processing time to 50 percent of current levels. A delay in completing the space station assembly would increase the station’s cost because fixed costs would be continued for a longer period. No reliable estimate of the increased cost exists since the estimate would depend on the length of the delay and assumptions about how long the station would remain operational after assembly is complete. However, when NASA redesigned the station in 1994, officials estimated the redesign would reduce costs by $1.6 billion because it would accelerate the assembly complete date by 15 months, from September 2003 to June 2002. At a minimum, a portion of these savings would be lost if the assembly complete date slips. NASA plans to defer some orbiter recertification activities and forgo testing all of the changes in an integrated fashion. NASA is confident that the maturity of the current system and existing databases from earlier testing are sufficient to justify the current approach. 
To reduce costs, NASA plans to alter the depth of a previously planned materials review. The review was to have been part of a program to recertify the individual shuttle orbiters after incorporating the performance enhancements for the space station program. It would have provided specific and detailed assurance that every piece of the orbiter structure could safely withstand the aerodynamic environments during space station missions. The space station mission environments are expected to be more stressing than those of previous missions. The purpose of the materials review was to identify and reevaluate those structural components that were previously accepted even though they did not fully conform to design specifications. NASA currently plans to assess the impact of the new environments on these components based on the design rather than the actual hardware. A materials review will be performed on critical structures, according to NASA. NASA officials also told us that they are confident the streamlined recertification program will adequately ensure that the orbiter will perform in all possible station era environments. They noted that the orbiter now has a lengthy flight history record, and the experience gained from those flights ensures that the design changes made to support the space station will be fully understood. In addition, NASA does not plan to perform test firings of the modified propulsion system in an integrated setting. Instead, the agency plans to verify system performance based on individual component testing and predictive analyses. A 1989 study performed for the Stennis Space Center addressed the concept of integrated system testing. The study cited the unpredictability of the “interactive characteristics of the propulsion, structural, and electrical systems” and concluded that propulsion system testing should be considered even in cases of “existing designs modified to accommodate one or more major system redesigns.” The same study noted, however, that the technology base for the shuttle propulsion system is more advanced than for other vehicles, thus mitigating the engineering risks. NASA does not believe integrated system test firings are necessary in this case. Program officials noted that the propulsion system’s design changes do not affect the way in which fluids and propellent are moved throughout the system. As a result, they believe component testing, coupled with inferential analysis and modeling of the whole system, will suffice. In addition, program management officials stated that the costs were too high to justify integrated test firings, given the test results and analyses that would be available without integrated tests. Independent assessments provide objective overviews of complex development programs and space missions and can create an incentive for more rigorous internal review of the program. In establishing an independent group to oversee space station program safety, for example, NASA noted that “engineering products are improved by independent technical peer review,” and that such reviews do not “reflect on the competence, motivations, or integrity” of those responsible for implementing a program. NASA’s recently completed laboratory review also endorsed the concept of independent review in situations where the need has been identified. The report, issued in February 1995, cited the value of being in a position to take a more objective view of issues and details. 
It also noted that the process of independent assessment requires managers to “review their efforts from a perspective that is hard to maintain in the day-to-day sequence of events.” In the past, NASA has sometimes chartered independent assessments of complex development programs and missions, including assessments of some parts of the performance enhancement program such as the main engine improvement program and the super lightweight tank development. However, NASA has not requested an independent assessment of the integrated shuttle performance enhancement program, even though the integration program is complex—consisting of over 200 scheduled events, involving uncertainties such as characterization of the aerodynamic environments the enhanced shuttle will operate in, and containing departures from previous programmatic strategies. We recommend that the Administrator of NASA establish an independent review team to (1) assess NASA’s systems integration plan for the lift-increasing enhancements, (2) identify the associated technical and programmatic risks, and (3) weigh the costs and benefits of NASA’s tight scheduling of shuttle flights to assemble the space station. In commenting on a draft of this report, NASA concurred with our recommendation and stated that it had initiated implementation. The Aerospace Safety Advisory Panel has agreed to perform the independent reviews. According to NASA, the panel will use expert outside consultants to review the benefits and the technical and scheduling risks considering the current and projected NASA budgets. NASA noted that although the space station assembly schedule was demanding and funding was tight, it was currently on schedule and within budget. NASA’s comments are presented in their entirety in appendix I, along with our evaluation of them. We conducted our review at NASA Headquarters, Marshall Space Flight Center, Johnson Space Center, and Kennedy Space Center. We examined (1) shuttle enhancement documentation, (2) budgetary data, (3) internal and external analyses regarding the shuttle program, (4) shuttle manifests, (5) shuttle processing data, and (6) space station assembly schedules. In addition, we interviewed officials from NASA Headquarters, the shuttle program, and the space station program regarding issues related to NASA’s plan to support space station assembly. These interviews included discussions with representatives of the Astronaut Office at Johnson Space Center. We performed our work between November 1994 and May 1995 in accordance with generally accepted government auditing standards. Unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 15 days from its issue date. At that time, we will send copies of it to the Administrator, NASA; the Director, Office of Management and Budget; and other appropriate congressional committees. We will also provide copies to others upon request. Please contact me on (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report were Lee Edwards, John Gilchrist, and Reginia Grider. The following are our comments on the National Aeronautics and Space Administration’s (NASA) letter dated June 23, 1995. 1. We have incorporated NASA officials’ informal comments in the text where appropriate. 2. 
Although the development programs have not experienced significant schedule slips to date, the programs have experienced some early development problems and an independent management review team concluded in February 1995 that the largest of these programs—the super lightweight tank—had the potential for further problems during development and manufacture. As we note on page 5, NASA deleted two expensive hardware programs by substituting operational changes that substantially reduced weight but increased the risk of mission failure. 3. The April 1995 study was intended to identify the shuttle program’s challenge in supporting the station assembly schedule and provide an indication of the possible magnitude of schedule slips. Study officials told us that the conversion from workdays to calendar days or use of available overtime would not substantially change the study results. The study was based on actual timelines experienced since the shuttle returned to flight after the Challenger accident. NASA has not defined the streamlined payload checkout and orbiter processing approaches that it says will be in place beginning in 1998. The impact of streamlining on the shuttle’s launch schedule cannot be determined at this time. Subsequent to commenting on the report, officials ran the study model again, using processing times for only those missions launched in fiscal years 1992 and subsequent, and omitting the two flights with the longest processing times. In this scenario, the model predicted a slip of over 1 year in the station assembly complete milestone, assuming 100 percent reliability and an inflexible assembly sequence.
Pursuant to a congressional request, GAO examined the extent to which the space shuttle program can support the space station's assembly requirements, focusing on: (1) the impacts of the declining shuttle budget; and (2) the demanding schedule to support the space station. GAO found that: (1) the National Aeronautics and Space Administration's (NASA) plans for increasing the shuttle's lift capability are complex and involve about 30 individual actions such as hardware redesigns, improved flight design techniques, and new operational procedures; (2) some of the hardware redesign programs have experienced early development problems, and the potential exists for additional problems; (3) the NASA schedule for meeting the space station's launch requirements appears questionable in the declining budget environment; (4) NASA must successfully complete numerous shuttle-related development programs on a tight schedule to support the first space station launch; (5) the remaining launch schedule is compressed and will be difficult to achieve without additional funding or more efficient processing methods; (6) delays in the launch schedule could substantially increase the station's cost; (7) NASA plans to forgo some of the shuttle's recertification activities and full integration testing of the propulsion system until the first launch of station components; (8) NASA plans to assess the implications of the design changes through a combination of tanking and component tests and systems analyses; and (9) NASA must ensure that the implications of integrating numerous individual design changes are fully understood and safety is not compromised.
The Atomic Energy Act of 1954 authorized a comprehensive regulatory program to permit private industry to develop and apply atomic energy for peaceful uses, such as generating electricity from privately owned nuclear power plants. Soon thereafter, government and industry experts identified a major impediment to accomplishing the act’s objective: the potential for payment of damages resulting from a nuclear accident and the lack of adequate available insurance. Unwilling to risk huge financial liability, private companies viewed even the remote specter of a serious accident as a roadblock to their participating in the development and use of nuclear power. In addition, congressional concern developed over ensuring adequate financial protection to the public because the public had no assurance that it would receive compensation for personal injury or property damages from the liable party in event of a serious accident. Faced with these concerns, the Congress enacted the Price-Anderson Act in September 1957. The Price-Anderson Act has two underlying objectives: (1) to establish a mechanism for compensating the public for personal injury or property damage in the event of a nuclear accident and (2) to encourage the development of nuclear power. To provide financial protection, the Price-Anderson Act requires commercial nuclear reactors to be insured to the maximum level of primary insurance available from private insurers. To implement this provision, NRC periodically revises its regulations to require licensees of nuclear reactors to increase their coverage level as the private insurance market increases the maximum level of primary insurance that it is willing to offer. For example, in January 2003, NRC increased the required coverage from $200 million to the current $300 million, when American Nuclear Insurers informed NRC that $300 million per site in coverage was now available in its insurance pool. In 1975, the Price-Anderson Act was amended to require licensees to pay a pro-rated share of the damages in excess of the primary insurance amount. Under this amendment, each licensee would pay up to $5 million in retrospective premiums per facility it owned per incident if a nuclear accident resulted in damages exceeding the amount of primary insurance coverage. In 1988, the act was further amended to increase the maximum retrospective premium to $63 million per reactor per incident to be adjusted by NRC for inflation. The amendment also limited the maximum annual retrospective premium per reactor to $10 million. Under the act, NRC is to adjust the maximum amount of retrospective premiums every 5 years using the aggregate change in the Consumer Price Index for urban consumers. In August 2003, NRC set the current maximum retrospective payment at $95.8 million per reactor per incident. With 103 operating nuclear power plants, this secondary insurance pool would total about $10 billion. The Price-Anderson Act also provides a process to deal with incidents in which the damages exceed the primary and secondary insurance coverage. Under the act, NRC shall survey the causes and extent of the damage and submit a report on the results to, among others, the Congress and the courts. The courts must determine whether public liability exceeds the liability limits available in the primary insurance and secondary retrospective premiums. 
Then the President would submit to the Congress an estimate of the financial extent of damages, recommendations for additional sources of funds, and one or more compensation plans for full and prompt compensation for all valid claims. In addition, NRC can request the Congress to appropriate funds. The most serious incident at a U.S. nuclear power plant took place in 1979 at the Three Mile Island Nuclear Station in Pennsylvania. That incident has resulted in $70 million in liability claims. NRC’s regulatory activities include licensing nuclear reactors and overseeing their safe operation. Licensees must meet NRC regulations to obtain and retain their license to operate a nuclear facility. NRC carries out reviews of financial qualifications of reactor licensees when they apply for a license or if the license is transferred, including requiring applicants to demonstrate that they possess or have reasonable assurance of obtaining funds necessary to cover estimated operating costs for the period of the license. NRC does not systematically review its licensees’ financial qualifications once it has issued the license unless it has reason to believe this is necessary. In addition, NRC performs inspections to verify that a licensee’s activities are properly conducted to ensure safe operations in accordance with NRC’s regulations. NRC can issue sanctions to licensees who violate its regulations. These sanctions include notices of violation; civil penalties of up to $100,000 per violation per day; and orders that may modify, suspend, or revoke a license. Thirty-one commercial nuclear power plants nationwide are licensed to limited liability companies. In total, 11 limited liability companies are licensed to own nuclear power plants. Three energy corporations—Exelon, Entergy, and the Constellation Energy Group—are the parent companies for 8 of these limited liability companies. These eight subsidiaries are licensed or co-licensed to operate 27 of the 31 plants. The two subsidiaries of the Exelon Corporation are the licensees for 15 plants and the co-licensees for 4 others. Constellation Energy Group, Inc., and Entergy Corporation are the parent companies of limited liability companies that are licensees for four nuclear power plants each. (See table 1.) Of all the limited liability companies, Exelon Generation Company, LLC, has the largest number of plants. It is the licensee for 12 plants and co-licensee with PSEG Nuclear, LLC, for 4 other plants. For these 4 plants, Exelon Generation owns 43 percent of Salem Nuclear Generating Stations 1 and 2 and 50 percent of Peach Bottom Atomic Power Stations 2 and 3. (App. I lists all the licensees and their nuclear power plants.) NRC requires licensees of nuclear power plants to comply with the Price-Anderson Act’s liability insurance provisions by maintaining the necessary primary and secondary insurance coverage. First, NRC ensures that licensees comply with the primary insurance coverage requirement by requiring them to submit proof of coverage in the amount of $300 million. Second, NRC ensures compliance with the requirement for secondary coverage by accepting the certified copy of the licensee’s bond for payment of retrospective premiums. All the nuclear power plant licensees purchase their primary insurance from American Nuclear Insurers. American Nuclear Insurers sends NRC annual endorsements documenting proof of primary insurance after the licensees have paid their annual premiums. 
NRC and each licensee also sign an indemnity agreement, stating that the licensee will maintain an insurance policy in the required amount. This agreement, which is in effect as long as the owner is licensed to operate the plant, guarantees reimbursement of liability claims against the licensee in the event of a nuclear incident through the liability insurance. The agency can suspend or revoke the license if a licensee does not maintain the insurance, but according to an NRC official, no licensee has ever failed to pay its annual primary insurance premium and American Nuclear Insurers would notify NRC if a licensee failed to pay. As proof of their secondary insurance coverage, licensees must provide evidence that they are maintaining a guarantee of payment of retrospective premiums. Under NRC regulations, the licensee must provide NRC with evidence that it maintains one of the following six types of guarantees: (1) surety bond, (2) letter of credit, (3) revolving credit/term loan arrangement, (4) maintenance of escrow deposits of government securities, (5) annual certified financial statement showing either that a cash flow can be generated and would be available for payment of retrospective premiums within 3 months after submission of the statement or a cash reserve or combination of these, or (6) such other type of guarantee as may be approved by the Commission. Before the late 1990s, the licensees provided financial statements to NRC as evidence of their ability to pay retrospective premiums. According to NRC officials, in the late 1990s, Entergy asked NRC to accept the bond for payment of retrospective premiums that it had with American Nuclear Insurers as complying with the sixth option under NRC’s regulations: such other type of guarantee as may be approved by the Commission. After reviewing and agreeing to Entergy’s request, NRC decided to accept the bond from all the licensees as meeting NRC’s requirements. NRC officials told us that they did not document this decision with Commission papers or incorporate it into the regulations because they did not view this as necessary under the regulations. The bond for payment of retrospective premiums is a contractual agreement between the licensee and American Nuclear Insurers that obligates the licensee to pay American Nuclear Insurers the retrospective premiums. Each licensee signs this bond and furnishes NRC with a certified copy. In the event that claims exhaust primary coverage, American Nuclear Insurers would collect the retrospective premiums. If a licensee were not to pay its share of these retrospective premiums, American Nuclear Insurers would, under its agreement with the licensees, pay for up to three defaults or up to $30 million in 1 year of the premiums and attempt to collect this amount later from the defaulting licensees. According to an American Nuclear Insurers official, any additional defaults would reduce the amount available for retrospective payments. An American Nuclear Insurers official told us that his organization believes that the bond for payment of retrospective premiums is legally binding and obligates the licensee to pay the premium. Under NRC regulations, if a licensee fails to pay the assessed deferred premium, NRC reserves the right to pay those premiums on behalf of the licensee and recover the amount of such premiums from the licensee. 
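The scale of this retrospective premium layer can be illustrated with a brief arithmetic sketch. This is a minimal illustration using only the figures cited earlier (103 operating reactors, the $95.8 million inflation-adjusted maximum retrospective premium per reactor per incident, and the $10 million annual cap from the 1988 amendment, which is assumed here not to have been separately adjusted); it is not an official NRC calculation.

```python
# Minimal sketch, not an official NRC calculation: reproduces the arithmetic
# behind the roughly $10 billion secondary insurance pool described earlier.
reactors = 103                      # operating nuclear power plants
max_premium_per_reactor = 95.8e6    # 2003 inflation-adjusted maximum, per incident
annual_cap_per_reactor = 10e6       # annual cap set by the 1988 amendment (assumed unadjusted)

pool_total = reactors * max_premium_per_reactor
years_to_collect = max_premium_per_reactor / annual_cap_per_reactor

print(f"Secondary insurance pool: about ${pool_total / 1e9:.1f} billion")          # ~$9.9 billion
print(f"Years to collect a full premium at the annual cap: {years_to_collect:.1f}")  # ~9.6 years
```

As the sketch shows, the full secondary layer works out to just under $10 billion, and the annual cap means a licensee's maximum share would be collected over roughly a decade rather than in a single payment.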
NRC applies the same rules to limited liability companies that it does to other licensees of nuclear power plants with respect to liability requirements under the Price-Anderson Act. All licensees must meet the same requirements regardless of whether they are limited liability companies. American Nuclear Insurers applies an additional requirement for limited liability companies with respect to secondary insurance coverage in order to ensure that they have sufficient assets to pay retrospective premiums. Given the growing number of nuclear power plants licensed to limited liability companies, NRC is examining the need to revise its procedures and regulations for such companies. NRC requires all licensees of nuclear power plants to follow the same regulations and procedures. Limited liability companies, like other licensees, are required to show that they are maintaining the $300 million in primary insurance coverage and provide NRC a copy of the bond for payment of retrospective premiums or other approved evidence of guarantee of retrospective premium payments. According to NRC officials, all its licensees, including those that are limited liability companies, have sufficient assets to cover the retrospective premiums. While NRC does not conduct in-depth financial reviews specifically to determine licensees’ ability to pay retrospective premiums, it reviews the licensees’ financial ability to safely operate their plants and to contribute decommissioning funds for the future retirement of the plants. According to NRC officials, if licensees have the financial resources to cover these two larger expenses, they are likely to be capable of paying their retrospective premiums. American Nuclear Insurers goes further than NRC and requires licensees that are limited liability companies to provide a letter of guarantee from their parent or other affiliated companies with sufficient assets to cover the retrospective premiums. An American Nuclear Insurers official stated that American Nuclear Insurers obtains these letters as a matter of good business practice. These letters state that the parent or an affiliated company is responsible for paying the retrospective premiums if the limited liability company does not. If the parent company or other affiliated company of a limited liability company does not provide a letter of guarantee, American Nuclear Insurers could refuse to issue the bond for payment of retrospective premiums and the company would have to have another means to show NRC proof of secondary insurance. American Nuclear Insurers informs NRC that it has received these letters of guarantee. The official also told us that American Nuclear Insurers believes that the letters from the parent companies or other affiliated companies of the limited liability company licensed by NRC are valid and legally enforceable contracts. NRC officials told us that they were not aware of any problems caused by limited liability companies owning nuclear power plants and that NRC currently does not regard limited liability companies’ ownership of nuclear power plants as a concern. However, because these companies are becoming more prevalent as owners of nuclear power plants, NRC is examining whether it needs to revise any of its regulations or procedures for these licensees. NRC estimates that it will complete its study by the end of summer 2004. We provided a draft of this report to NRC for review and comment. In its written comments (see app. 
II), NRC stated that it believes the report accurately reflects the present insurance system for nuclear power plants. NRC said that we correctly conclude that the agency does not treat limited liability companies differently than other licensees with respect to Price-Anderson’s insurance requirements. NRC also stated that we are correct in noting that it is not aware of any problems caused by limited liability companies owning nuclear power plants and that NRC currently does not regard limited liability companies’ ownership of nuclear power plants as a concern. In addition, NRC commented that we agree with the agency’s conclusion that all its reactor licensees have sufficient assets that they are likely to be able to pay the retrospective premiums. With respect to this last comment, the report does not take a position on the licensees’ ability to pay the retrospective premiums. We did not evaluate the sufficiency of the individual licensees’ assets to make these payments. Instead, we reviewed NRC’s and the American Nuclear Insurers’ requirements and procedures for retrospective premiums. We performed our review at NRC headquarters in Washington, D.C. We reviewed statutes, regulations, and appropriate guidance as well as interviewed agency officials to determine the relevant statutory framework of the Price-Anderson Act. To determine the number of nuclear power plant licensees that are limited liability companies, we surveyed, through electronic mail, all the NRC project managers responsible for maintaining nuclear power plant licenses. We asked them to provide data on the licensees, including the licensee’s name and whether it was a limited liability company. If it was a limited liability company, we asked when the license was transferred to the limited liability company and who is the parent company of the limited liability company. We received responses for all 103 nuclear power plants currently licensed to operate. We analyzed the results of the survey responses. We verified the reliability of the data from a random sample of project managers by requesting copies of the power plant licenses and then comparing the power plant licenses to the data provided by the project managers. The data agreed in all cases. We concluded that the data were reliable enough for the purposes of this report. To determine NRC’s requirements for ensuring that licensees of nuclear power plants comply with the Price-Anderson Act’s liability requirements, we reviewed relevant statutes and NRC regulations and interviewed NRC officials responsible for ensuring that licensees have primary and secondary insurance coverage. We also spoke with American Nuclear Insurers officials responsible for issuing the insurance coverage to nuclear power plant licensees, and we reviewed relevant documents associated with the insurance. To determine whether and how these procedures differ for licensees that are limited liability companies, we reviewed relevant documents, including NRC regulations, and interviewed NRC officials responsible for ensuring licensee compliance with Price-Anderson Act requirements. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days from the date of this letter. We will then send copies to interested congressional committees; the Commissioners, Nuclear Regulatory Commission; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others on request. 
In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, I can be reached at (202) 512-3841. Major contributors to this report include Ray Smith, Ilene Pollack, and Amy Webbink. John Delicath and Judy Pagano also contributed to this report.
[Appendix I table: the licensees and co-owners of each of the 103 operating nuclear power plants; owners listed include Entergy Arkansas, Inc.; FirstEnergy Nuclear Operating Company; Cleveland Electric Illuminating Company; Exelon Generation Company, LLC; Carolina Power & Light Co.; Constellation Energy Group, Inc.; AmerGen Energy Company, LLC; Pacific Gas and Electric Company; Dominion Nuclear Connecticut, Inc.; Florida Power and Light Company; and several Entergy Nuclear subsidiaries, among others.]
An accident at one of the nation's commercial nuclear power plants could result in human health and environmental damages. To ensure that funds would be available to settle liability claims in such cases, the Price-Anderson Act requires licensees for these plants to have primary insurance--currently $300 million per site. The act also requires secondary coverage in the form of retrospective premiums to be contributed by all licensees to cover claims that exceed primary insurance. If these premiums are needed, each licensee's payments are limited to $10 million per year and $95.8 million in total for each of its plants. In recent years, limited liability companies have increasingly become licensees of nuclear power plants, raising concerns about whether these companies--by shielding their parent corporations' assets--will have the financial resources to pay their retrospective premiums. GAO was asked to determine (1) the extent to which limited liability companies are the licensees for U.S. commercial nuclear power plants, (2) the Nuclear Regulatory Commission's (NRC) requirements and procedures for ensuring that licensees of nuclear power plants comply with the Price-Anderson Act's liability requirements, and (3) whether and how these procedures differ for licensees that are limited liability companies. Of the 103 operating nuclear power plants, 31 are owned by 11 limited liability companies. Three energy corporations--Exelon, Entergy, and the Constellation Energy Group--are the parent companies for eight of these limited liability companies. These eight subsidiaries are the licensees or co-licensees for 27 of the 31 plants. NRC requires all licensees for nuclear power plants to show proof that they have the primary and secondary insurance coverage mandated by the Price-Anderson Act. Licensees obtain their primary insurance through American Nuclear Insurers. Licensees also sign an agreement with NRC to keep the insurance in effect. American Nuclear Insurers also has a contractual agreement with each of the licensees to collect the retrospective premiums if these payments become necessary. A certified copy of this agreement, which is called a bond for payment of retrospective premiums, is provided to NRC as proof of secondary insurance. It obligates the licensee to pay the retrospective premiums to American Nuclear Insurers. NRC does not treat limited liability companies differently than other licensees with respect to the Price-Anderson Act's insurance requirements. Like other licensees, limited liability companies must show proof of both primary and secondary insurance coverage. American Nuclear Insurers also requires limited liability companies to provide a letter of guarantee from their parent or other affiliated companies with sufficient assets to pay the retrospective premiums. These letters state that the parent or affiliated companies are responsible for paying the retrospective premiums if the limited liability company does not. American Nuclear Insurers informs NRC it has received these letters. In light of the increasing number of plants owned by limited liability companies, NRC is studying its existing regulations and expects to report on its findings by the end of summer 2004. In commenting on a draft of this report, NRC stated that it accurately reflects the present insurance system for nuclear power plants.
As we have previously reported, DOD began the F-35 acquisition program in October 2001 without adequate knowledge about the aircraft’s critical technologies or design. In addition, DOD’s acquisition strategy called for high levels of concurrency or overlap among development, testing, and production. In our prior work, we have identified the lack of adequate knowledge and high levels of concurrency as major drivers of the significant cost and schedule growth as well as performance shortfalls that the program has experienced since 2001. The program has been restructured three times since it began: first in December 2003, again in March 2007, and most recently in March 2012. The most recent restructuring was initiated in early 2010 when the program’s unit cost estimates exceeded critical thresholds established by statute—a condition known as a Nunn-McCurdy breach. DOD subsequently certified to Congress in June 2010 that the program was essential to national security and needed to continue. DOD then began efforts to significantly restructure the program and establish a new acquisition program baseline. These restructuring efforts continued through 2011 and into 2012, during which time the department increased the program’s cost estimates and extended its testing and delivery schedules. Since then, costs have remained relatively stable. Table 1 shows the cost, quantity, and schedule changes from the initial program baseline and the relative stability since the new baseline was established. As the program has been restructured, DOD has also reduced near-term aircraft procurement quantities. From 2001 through 2007, DOD deferred the procurement of 931 aircraft into the future, and from 2007 through 2012, DOD deferred another 450 aircraft. Figure 1 shows how planned quantities in the near term steadily declined over time. The F-35 is DOD’s most costly acquisition program, and over the last several years we have reported on the affordability challenges facing the program. As we reported in April 2016, the estimated total acquisition cost for the F-35 program was $379 billion, and the program would require an average of $12 billion per year from 2016 through 2038. The program expects to reach peak production rates for U.S. aircraft in 2022, at which point DOD expects to spend more than $14 billion a year on average for a decade (see fig. 2). Given these significant acquisition costs, we found that DOD would likely face affordability challenges as the F-35 program competes with other large acquisition programs, including the B-21 bomber, KC-46A tanker, and Ohio Class submarine replacement. In addition, in September 2014, we reported that DOD’s F-35 sustainment strategy may not be affordable. Through 2016, DOD had awarded contracts for production of 9 lots of F-35 aircraft, totaling 285 aircraft (217 aircraft for the U.S. and 68 aircraft for international partners or foreign military sales). At the time of this report, the contract for lot 10 had not been signed. In 2013, the Departments of the Navy and the Air Force issued a joint report to the congressional defense committees stating that the Marine Corps and Air Force would field initial operating capabilities in 2015 and 2016, respectively, with aircraft that had limited warfighting capabilities. The Navy did not plan to field its initial operating capability until 2018, after the F-35’s full warfighting capabilities had been developed and tested. 
These dates represented a delay of 5 to 6 years from the program’s initial baseline. As planned, the Marine Corps and Air Force declared initial operational capability (IOC) in July 2015 and August 2016, respectively. DOD will need more time and money than expected to complete the remaining 10 percent of the F-35 development program. DOD has experienced delays in testing the software and systems that provide warfighting capabilities, known as mission systems, largely because the software has been delivered late to be tested and once delivered has not worked as expected. Program officials have had to regularly divert resources from developing and testing of more advanced software capabilities to address unanticipated problems with prior software versions. These problems have compounded over time, and this past year was no exception. DOD began testing the final block of software—known as Block 3F—later than expected, experienced unanticipated problems with the software’s performance, and thus did not complete all mission systems testing it had planned for 2016. As a result, the F-35 program office has noted that more time and money will be needed to complete development. The amount of time and money could vary significantly depending on the program’s ability to complete developmental and operational testing. We estimate that developmental testing could be delayed by as much as 12 months, thus delaying the start of initial operational testing, and total development costs could increase by nearly $1.7 billion. In addition, the Navy’s IOC and the program’s full-rate production decision could also be delayed. DOD continues to experience delays in F-35 mission systems testing. Although mission systems testing is about 80 percent complete, the complexity of developing and testing mission systems has been troublesome. For the F-35 program, DOD is developing and fielding mission systems capabilities in software blocks: (1) Block 1, (2) Block 2A, (3) Block 2B, (4) Block 3i, and (5) Block 3F. Each subsequent block builds on the capabilities of the preceding block. Over the last few years, program officials have had to divert resources—personnel and infrastructure—from developing and testing of more advanced software blocks to address unanticipated problems with prior software blocks. Over time, this practice has resulted in compounding delays in mission systems testing. Blocks 1 through 3i are now complete, and the program is currently focused on developing and testing Block 3F, the final software block in the current development program. Figure 3 illustrates the mission systems software blocks being developed for the program, the percentage of test points completed by block, and the build-up to full warfighting capability with Block 3F. Program officials spent some of 2016 addressing problems with Block 3i mission systems unexpectedly shutting down and restarting—an issue known as instability—which delayed Block 3F testing. In early 2016, officials were developing and testing Block 3i concurrently with Block 3F. In order to ensure that the Block 3i instability was addressed in time for the Air Force’s planned IOC in August 2016, officials diverted resources from Block 3F. That decision delayed subsequent testing that had been planned for Block 3F. Further delays resulted from the discovery of instability and functionality problems with Block 3F. To mitigate some schedule delays, program officials implemented a new process to introduce software updates more quickly than normal. 
Although the quick software releases helped to ensure that testing continued, the final planned version of Block 3F, which was originally planned to be released to testing in February 2016, was not released until late November 2016, nearly a 10-month delay. As a result, program officials have identified the need for additional time to complete development. Program officials now project that developmental testing, which was expected to be completed in May 2017, will conclude in October 2017, 5 months later than planned. However, based on our analysis, the program’s projection is optimistic as it does not reflect historical F-35 test data. Program officials believe that going forward they may be able to devote more resources to mission systems testing, which could lead to higher test point completion rates than they have achieved in the past. According to GAO best practices, credible schedule estimates are rooted in historical data. As of November 2016, program officials estimated that the program would need to complete an average of as many as 384 mission systems test points per month in order to finish flight testing by October 2017—a rate that the program has rarely achieved before. Our analysis of historical test point data as of December 2016 indicates that the average test point execution rates are much lower, at 220 mission systems test points per month. In addition, historical averages suggest that test point growth—additions to the overall test points from discovery in flight testing—is much higher than program officials assume, while estimated deletions—test points that are considered no longer required—are lower than assumed. Using the historical F-35 averages, we project that developmental testing may not be completed until May 2018, a 12-month delay from the program’s current plan. Table 2 provides a comparison of the assumptions used to determine delays in developmental testing. Our estimate of delays in completing developmental testing does not include the time it may take to address the significant number of existing deficiencies. The Marine Corps and Air Force declared IOC with limited capability and with several deficiencies. As of October 2016, the program had more than 1,200 open deficiencies, and senior program and test officials deemed 276 of those critical or of significant concern to the military services. Several of the critical deficiencies are related to the aircraft’s communications, data sharing, and target tracking capabilities. Although the final planned version of Block 3F software was released to flight testing in November 2016 and contained all 332 planned warfighting capabilities, not all of those capabilities worked as intended. In accordance with program plans, it was the first time some of the Block 3F capabilities had been tested. According to a recent report by the Director, Operational Test and Evaluation (DOT&E), fixes for fewer than half of the 276 deficiencies were included in the final planned version of Block 3F software. Prime contractor officials stated that additional software releases will likely be required to address deficiencies identified during the testing of the final planned version of Block 3F software, but they do not yet know how many releases will ultimately be needed. Delays in developmental testing will likely drive delays in current plans to start F-35 initial operational test and evaluation. 
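The schedule projection described above reduces to a simple calculation: remaining mission systems test points, adjusted for expected growth and deletions, divided by a monthly completion rate. The sketch below illustrates that calculation and why the assumptions matter. The 384 and 220 points-per-month rates and the 42 percent versus 63 percent growth rates are the figures discussed in this report; the remaining-point count and deletion rates are placeholders, since the report does not state them, so the printed results are illustrative only.

```python
# A minimal sketch of the test-point schedule projection discussed above.
# REMAINING_POINTS and the deletion rates are placeholders; the completion rates
# (384 vs. 220 points per month) and growth rates (42% vs. 63%) are from this report.

def months_to_finish(remaining_points: int,
                     growth_rate: float,
                     deletion_rate: float,
                     points_per_month: float) -> float:
    """Months of flight testing left after adjusting the remaining test points
    for expected growth (new points added from discoveries) and deletions."""
    adjusted_points = remaining_points * (1 + growth_rate - deletion_rate)
    return adjusted_points / points_per_month


if __name__ == "__main__":
    REMAINING_POINTS = 2_500  # placeholder; the report does not give the exact count

    optimistic = months_to_finish(REMAINING_POINTS, growth_rate=0.42,
                                  deletion_rate=0.15, points_per_month=384)
    historical = months_to_finish(REMAINING_POINTS, growth_rate=0.63,
                                  deletion_rate=0.05, points_per_month=220)

    print(f"Program-office assumptions:    ~{optimistic:.0f} months remaining")
    print(f"Historical-average assumptions: ~{historical:.0f} months remaining")
```

The point of the comparison is sensitivity, not the specific outputs: a lower completion rate and higher growth rate roughly double the remaining test time under otherwise identical inputs.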
Program officials have noted that according to their calculations developmental testing will end in October 2017 and initial operational testing will begin in February 2018. However, DOT&E officials, who approve operational test plans, anticipate that the program will more likely start operational testing in late 2018 or early 2019, at the earliest. Figure 4 provides an illustration of the current program schedule and DOT&E’s projected delays. DOT&E’s estimate for the start of initial operational testing is based on the office’s projection that developmental testing will end in July 2018 and that retrofits needed to prepare the aircraft for operational testing will not be completed until late 2018 at the earliest. There are 23 aircraft—many of which are early production aircraft—that require a total of 155 retrofits before they will be ready to begin operational testing. As of January 2017, 20 of those retrofits were not yet under contract, and program officials anticipated some retrofits would be completed in late 2018. To mitigate possible schedule delays, program officials are considering a phased start to operational testing. However, current program test plans require training and preparation activities before initial operational test and evaluation begins. Those activities, as outlined in the test plan, are expected to take approximately 6 months. Changes to this approach would require approval from DOT&E. According to DOT&E officials, however, the program has not yet provided any detailed strategy for implementing a new approach or identified a time frame for revising the test plan. Significant delays in initial operational testing will likely affect two other upcoming program decisions: (1) the Navy’s decision to declare IOC and (2) DOD’s decision to begin full-rate production. In a 2015 report to the congressional defense committees, the Under Secretary of Defense for Acquisition, Technology and Logistics stated that the Navy’s IOC declaration is on track for February 2019 pending completion of initial operational test and evaluation. If initial operational testing does not begin until February 2019 as the DOT&E predicts, the Navy may need to consider postponing its IOC date. Likewise, DOD’s full-rate production decision, currently planned for April 2019, may have to be delayed. According to statute, a program may not proceed beyond low-rate initial production into full-rate production until initial operational test and evaluation is completed and DOT&E has submitted to the Secretary of Defense and the congressional defense committees a report that analyzes the results of operational testing. If testing does not begin until February 2019 and takes 1 year, as expected, DOD will not have the report in time to support a full-rate production decision by April 2019. The current delays in F-35 developmental testing will also result in increased development costs. Based on the program office’s estimate of a 5-month delay in developmental testing, the F-35 program will need an additional $532 million to complete the development contract. According to GAO best practices, credible cost estimates are also rooted in historical data. Using historical contractor cost data from April 2016 to September 2016, we calculated the average monthly cost associated with the development contract. 
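The cost calculation just described is essentially a burn-rate estimate: an average monthly development-contract cost applied to the months of additional work. The sketch below shows the mechanics. The monthly rate is an assumed placeholder, roughly consistent with the program office's $532 million estimate for a 5-month slip; it is not GAO's actual derived figure.

```python
# A rough sketch of the burn-rate arithmetic described above.
# AVG_MONTHLY_DEV_COST is an assumed placeholder (about $532M / 5 months, implied by
# the program office's estimate); GAO derived its own rate from contractor cost reports.

AVG_MONTHLY_DEV_COST = 106e6


def added_development_cost(delay_months: int,
                           monthly_cost: float = AVG_MONTHLY_DEV_COST) -> float:
    """Additional development funding implied by extending the contract."""
    return delay_months * monthly_cost


if __name__ == "__main__":
    print(f"5-month slip (program office): ~${added_development_cost(5) / 1e6:.0f} million")
    print(f"12-month slip (GAO scenario):  ~${added_development_cost(12) / 1e6:.0f} million")
```

Note that a simple 12-month extension of the development burn rate understates GAO's figure, because that estimate also reflects continued support through delayed operational testing, as discussed below.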
If developmental testing is delayed 12 months, as we estimate, and operational testing is not completed until 2020, as projected by DOT&E, then we estimate that the program could need more than an additional $1.7 billion to complete the F-35 development contract. Similarly, the Cost Assessment and Program Evaluation office within the Office of the Secretary of Defense has estimated that the program will likely need more than $1.1 billion to complete the development contract. In these estimates, the majority of the additional funding would be needed in fiscal year 2018. Specifically, program officials believe that an additional $353.8 million may be needed in fiscal year 2018, while we estimate that they could need more than three times that amount—approximately $1.3 billion—as illustrated in figure 5. The program plans to fund its estimated development program deficit through several means. For example, although the program office’s 2018 preliminary budget projection reflected a reduction of $81 million in development funding over the next few years, as compared to DOD’s fiscal year 2017 budget request, program officials expect DOD to restore this reduction in its official fiscal year 2018 budget request. In addition, program officials plan to increase the budget request, as compared to their fiscal year 2017 budget request, for development funding in fiscal years 2018, 2019, and 2020 by $451 million and likewise reduce their budget request for procurement funding over those years. To make up for the reduction in requested procurement funding, the program plans to reprogram available procurement funds appropriated in prior fiscal years. Any additional funding beyond $451 million would likely have to come from some other source. Figure 5 compares DOD’s and our estimates for development funding needs from fiscal years 2018 through 2021. As developmental testing is delayed and DOD procures more aircraft every year, concurrency costs—the costs of retrofitting delivered aircraft—increase. For example, from 2015 to 2016, the program experienced a $70 million increase in concurrency costs. This increase was partially driven by the identification of new technical issues found during flight testing that were not previously forecast, including problems with the F-35C outer-wing structure and F-35B landing gear. Problems such as these have to be fixed on aircraft that have already been procured. Thus far, DOD has procured 285 aircraft and has experienced a total of $1.77 billion in concurrency costs. Although testing is mostly complete, any additional delays will likely result in delays in the incorporation of known fixes, which would increase the number of aircraft that will require retrofits and rework and further increase concurrency costs as more aircraft are procured. According to program officials, most of the retrofits going forward are likely to be software related and thus less costly. However, according to DOD’s current plan, 498 aircraft will be procured by the time initial operational testing is complete. If the completion of operational testing is delayed to 2020, as DOT&E predicts, the number of procured aircraft will increase to 584 as currently planned, making 86 additional aircraft subject to any required retrofits or rework. In fiscal year 2018, F-35 program officials expect to invest more than $1.2 billion to start two efforts while simultaneously facing significant shortfalls in completing the F-35 baseline development program, as discussed above. 
Specifically, DOD and program officials project that in fiscal year 2018 the program will need over $600 million to begin development of follow-on modernization of the F-35 and more than $650 million to procure economic order quantities (EOQ) of parts to achieve cost savings during procurement. Contracting for EOQ generally refers to the purchase of parts in larger, more economically efficient quantities to minimize the cost of these items. DOD officials emphasized that the specific amount of funding needed for these investments could change as the department finalizes its fiscal year 2018 budget request. Regardless, these investments may be premature. Early Block 4 requirements, which represent new capabilities beyond the original requirements, may not be fully informed before DOD plans to solicit proposals from contractors for how they might meet the government’s requirements—a process known as a request for proposals (RFP). According to DOD policy, the Development RFP Release Decision Point is the point at which a solid business case is formed for a new development program. Until Block 3F testing is complete, DOD will not have the knowledge it needs to develop and present an executable business case for Block 4, with reliable cost and funding estimates. Due to evolving threats and changing warfighting environments, program officials project that the program will need over $600 million in fiscal year 2018 to award a contract to begin developing new F-35 capabilities, an effort referred to as follow-on modernization. However, the requirements for the first increment of that effort, known as Block 4, have not been finalized. Block 4 is expected to be developed and delivered in four phases—currently referred to as 4.1, 4.2, 4.3, and 4.4. Program officials expect phases 4.1 and 4.3 to be primarily software updates, while 4.2 and 4.4 are expected to consist of more significant hardware changes. The program has drafted a set of preliminary requirements for Block 4 that focused on the top-level capabilities needed in phases 4.1 and 4.2, but the requirements for the final two phases have not been fully defined. In addition, as of January 2017, these requirements had not been approved by the Joint Requirements Oversight Council. Delays in developmental testing of Block 3F are also likely to affect Block 4 requirements. DOD policy states that requirements are to be approved before a program reaches the Development RFP Release Decision Point in the acquisition process. GAO best practices emphasize the importance of matching requirements and resources in a business case before a development program begins. For DOD, the Development RFP Release Decision Point is the point at which plans for the program must be most carefully reviewed to ensure that all requirements have been approved, risks are understood and under control, the program plan is sound, and the program will be affordable and executable. Currently, F-35 program officials plan to release the RFP for Block 4.1 development in the third quarter of fiscal year 2017, nearly 1 year before we estimate Block 3F developmental testing will be completed. Program officials have stated that Block 3F is the foundation for Block 4, but continuing delays in Block 3F testing make it difficult to fully understand Block 3F functionality and its effect on early Block 4 capabilities. If new deficiencies are identified during the remainder of Block 3F testing, the need for new technologies may arise, and DOD may need to review Block 4 requirements again before approving them. 
In April 2016, we reported that the F-35 program office was considering what it referred to as a block buy contracting approach that we noted had some potential economic benefits but could limit congressional funding flexibility. The program office has since changed its strategy to consist of contracts for EOQ of 2 years’ worth of aircraft parts followed by a separate annual contract for procurement of lot 12 aircraft with annual options for lots 13 and 14 aircraft. Each of these options would be negotiated separately, similar to how DOD currently negotiates contracts. As of January 2017, details of the program office’s EOQ approach were still in flux. In 2015, the program office contracted with RAND Corporation to conduct a study of the potential cost savings associated with several EOQ approaches. According to the results of that study, in order for the government to get the greatest benefit, the aircraft and engine contractors would need to take on risk by investing in EOQ on behalf of the department in fiscal year 2017. Program officials envision that under this arrangement the contractors would be repaid by DOD at a later date. However, as of January 2017, contractors stated they were still negotiating the terms of this arrangement; therefore, the specific costs and benefits remained uncertain. Despite this uncertainty, the program office plans to seek congressional approval to make EOQ purchases and expects to need more than $650 million for that purpose in fiscal year 2018. Program officials believe that this upfront investment would result in significant savings over the next few years for the U.S. services. However, given the uncertainties around the level of contractor investment, it is not clear whether an investment of more than $650 million, if that is the final amount DOD requests in fiscal year 2018, will be enough to yield significant savings. Regardless, with cost growth and schedule delays facing the F-35 baseline development program, it is unclear whether DOD can afford to fund this effort at this time. According to internal control standards, agencies should communicate with external stakeholders, such as Congress. With a potential investment of this size, particularly in an uncertain budget environment, it is important that program officials finalize the details of this approach before asking for congressional approval and provide Congress with a clear understanding of the associated costs to ensure that funding decisions are fully informed. The F-35 airframe and engine contractors continue to report improved manufacturing efficiency, and program data indicate that reliability and maintainability are improving in some areas. Over the last 5 years, the number of U.S. aircraft produced and delivered by Lockheed Martin has increased, and manufacturing efficiency and quality have improved over time. Similarly, manufacturing efficiency and quality metrics are improving for Pratt & Whitney. Although some engine and aircraft reliability and maintainability metrics are not meeting program expectations, there has been progress in some areas, and there is still time for further improvements. Overall, the airframe manufacturer, Lockheed Martin, is improving efficiency and product quality. Over the last 5 years, the number of aircraft produced and delivered by Lockheed Martin has increased from 29 aircraft in 2012 to 46 aircraft in 2016. Since 2011, a total of 200 production aircraft have been delivered to DOD and international partners, 46 of which were delivered in 2016. 
As of January 2017, 142 aircraft were in production worldwide. As more aircraft are delivered, the number of labor hours needed to manufacture each aircraft declines. Labor hours decreased from 2015 to 2016, indicating production maturity. In addition, instances of production line work done out of sequence remain relatively low, with the exception of an increase at the end of 2016 due to technical issues, such as repairing coolant tube insulation (see app. III). Further, the number of quality defects and total hours spent on scrap, rework, and repair declined in 2016. Although data indicate that airframe manufacturing efficiency and quality continue to improve, supply chain challenges remain. Some suppliers are delivering late and non-conforming parts, resulting in production line inefficiencies and workarounds. For example, in 2016, Lockheed Martin originally planned to deliver 53 aircraft, but quality issues with insulation on the coolant tubes in the fuel tanks resulted in the contractor delivering 46 aircraft. According to Lockheed Martin officials, late deliveries of parts are largely due to late contract awards and supply base capacity. While supplier performance is generally improving, it is important for suppliers to be prepared for both production and sustainment support going forward. Inefficiencies, such as conducting production line work out of sequence, could be exacerbated if late delivery of parts continues as production more than doubles over the next 5 years. The engine manufacturer, Pratt & Whitney, is also improving efficiency. As of October 2016, Pratt & Whitney had delivered 279 engines. The labor hours required to assemble an F-35 engine decreased quickly and have remained relatively steady since around the 70th engine produced, and little additional efficiency is expected to be gained. Other Pratt & Whitney manufacturing metrics indicate that production efficiency and quality are improving. Scrap, rework, and repair costs were reduced from 2.22 percent in 2015 to 1.8 percent in 2016. We previously reported that according to Pratt & Whitney officials, moving from a hollow blade design to a solid blade would reduce scrap and rework costs because it is easier to produce. However, Pratt & Whitney experienced unanticipated problems with cracking in the solid blade design. As a result, Pratt & Whitney is continuing to produce a hollow blade while it further investigates the difficulty and costs associated with a solid blade design. Pratt & Whitney’s supply chain continues to make some improvements. For example, critical parts are being delivered ahead of schedule, and some are already achieving 2017 rate requirements. To further ensure that suppliers are capable of handling full-rate production, Pratt & Whitney is pursuing the potential to have multiple suppliers for some engine parts, which officials believe will help increase manufacturing capacity within the supply chain. Although the program has made progress in improving system-level reliability and maintainability, some metrics continue to fall short of program expectations in several key areas. For example, as shown in figure 6, while metrics in most areas were trending in the right direction overall, the F-35 program office’s internal assessment indicated that as of August 2016 the F-35 fleet was falling short of reliability and maintainability expectations in 11 of 21 areas. 
Although many of the metrics remain below program expectations, some of the metrics have shown improvement over the last year, and time remains for continued improvements. For example, our analysis indicates that since 2015, the F-35A’s reliability has improved from 4.3 mean flight hours between failures attributable to design issues to 5.7 hours, nearly achieving the system maturity goal of 6 hours. The F-35A mean flight hours between maintenance events metric has also improved and is now meeting program expectations. As of August 2016, the F-35 fleet had only flown a cumulative total of 63,187 flight hours. The program has time for further improvement as the ultimate goals for these reliability and maintainability metrics are to be achieved by full system maturity, or 200,000 cumulative flight hours across the fleet. The program also plans to improve these metrics through additional design changes. Engine reliability varied in 2016. In April 2016, we reported that Pratt & Whitney had implemented a number of design changes that resulted in significant improvements to one reliability metric: mean flight hours between failures attributable to design issues. At the time of our report, contractor data indicated the F-35A and F-35B engines were at about 55 percent and 63 percent, respectively, of where the program expected them to be. According to contractor data as of September 2016, the program was unable to achieve a significant increase in reliability over the last year, which left the F-35A and F-35B engines further below expectations—at about 43 percent and 41 percent, respectively. Other reliability metrics, such as the engine’s impact on aircraft availability, engine maintenance man-hours, and the time between engine removals, are meeting expectations. On average, from June 2016 through November 2016, the engine affected only about 1.47 percent of the overall aircraft availability rates, and none of the top 30 drivers affecting aircraft availability were related to the engine. According to Pratt & Whitney officials, the F-35 engine requires fewer maintenance man-hours per flight hour than legacy aircraft, and engines for the F-35A and F-35B are currently performing better than required for the average number of flight hours between engine removals. Program and contractor officials continue to identify ways to further improve reliability through a number of design changes and expect reliability to continue to improve lot over lot. As the F-35 program approaches the end of development, its schedule and cost estimates are optimistic. The program’s estimates for completing development are hundreds of millions of dollars lower and several months shorter than other independent estimates, including our own. If the program experiences schedule delays as we predict, it could require a total of nearly $1.5 billion in fiscal year 2018 alone. However, program officials project that the program will only need $576.2 million in fiscal year 2018 to complete baseline development. At the same time, program officials expect that more than $1.2 billion could be needed to commit to Block 4 and EOQ in fiscal year 2018. DOD must prioritize funding for the baseline development program over the program office’s desire for EOQ and Block 4. If baseline development is not prioritized and adequately funded, and costs increase as predicted by GAO and others, then the program will have less recourse for action and development could be further delayed. 
In addition, with baseline development still ongoing, the program likely will not have the knowledge it needs to present a sound business case for soliciting contractor proposals for Block 4 development in fiscal year 2017. Although Block 4 and EOQ may be desirable, prioritizing funding for these efforts may not be essential at this time. Prioritizing funding for baseline development over these two efforts would ensure that the program has the time and money needed to properly finish development and thus lay a solid knowledge-based foundation for future efforts. To ensure that DOD adequately prioritizes its resources to finish F-35 baseline development and delivers all of the promised warfighting capabilities and that Congress is fully informed when making fiscal year 2018 budget decisions, we are making the following three recommendations to the F-35 program office through the Secretary of Defense. 1. Reassess the additional cost and time needed to complete developmental testing using historical program data. 2. Delay the issuance of the Block 4 development request for proposals at least until developmental testing is complete and all associated capabilities have been verified to work as intended. 3. Finalize the details of DOD and contractor investments associated with an EOQ purchase in fiscal year 2018, and submit a report to Congress with the fiscal year 2018 budget request that clearly identifies the details, including costs and benefits of the finalized EOQ approach. DOD provided us with written comments on a draft of this report. DOD’s comments are reprinted in appendix IV and summarized below. DOD also provided technical comments, which were incorporated as appropriate. DOD did not concur with our recommendation to reassess the additional cost and time needed to complete developmental testing using historical program data. DOD stated that it will continue to assess the assumptions and decisions made, and communicate any necessary adjustments relative to both cost and time needed to complete developmental testing. DOD also stated that it had considered historical data in its assessment and concluded that developmental testing could extend into February 2018. While this possible slip is noted in our report, it is unclear to us to what extent the data underpinning DOD’s assessment reflected the program’s historical averages. While the program’s analysis that we examined did reflect test point accomplishment rates that were more aligned with what the program achieved in 2016 (i.e., around 290 points per month), those rates were still higher than the historical average. Other key inputs to that analysis also differed significantly from the program’s historical averages. For example, program officials assumed only a 42 percent test point growth rate when the program’s historical average test point growth was 63 percent, and in 2016 alone the test point growth rate was 115 percent. Several other DOD officials have identified possible delays beyond February 2018. In a memo sent to Congress in December 2016, the Under Secretary of Defense for Acquisition, Technology and Logistics stated that developmental testing could extend to May 2018, and DOT&E analysis also indicates that developmental testing may not conclude until mid-2018. We continue to believe that our recommendation is valid. DOD also did not concur with our recommendation to delay the issuance of the Block 4 development request for proposals until developmental testing is complete. 
According to DOD, delaying the request for proposals could unnecessarily delay delivery of needed capabilities to the warfighters. However, as program officials stated, Block 3F software establishes the foundation for Block 4. Therefore, continuing delays in Block 3F testing will likely make it difficult to fully understand Block 3F functionality and its effect on early Block 4 requirements. If new deficiencies are identified during the remainder of Block 3F testing, the need for new technologies may arise, and DOD may need to review Block 4 requirements again before approving them, which could lead to additional delays. Therefore, we continue to believe that our recommendation is valid. DOD stated that it partially concurred with our third recommendation to finalize the details of investments associated with an EOQ purchase in fiscal year 2018, and submit a report to Congress with the fiscal year 2018 budget request that clearly identifies those details. However, in its response, the department outlined steps that address it. For example, DOD stated that it had finalized the details of DOD and contractor investments associated with an EOQ purchase and will brief Congress on the details, including costs and benefits of the finalized EOQ approach. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; and the Under Secretary of Defense for Acquisition, Technology and Logistics. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To assess the F-35 program’s remaining development and testing, we interviewed officials from the program office and contractors—Lockheed Martin and Pratt & Whitney. We obtained and analyzed data on mission systems test point execution, both planned and accomplished, from 2011 through 2016 to calculate historical test point averages per month. We compared test progress against the total program requirements to determine the number of test points that were completed and remaining as of December 2016. We used the average test point rate based on the historical data to determine the number of months needed to complete the remaining test points. To identify the program’s average monthly costs, we analyzed contractor cost performance data from April 2016 through September 2016 to identify average contract costs per month. Using a 12-month delay and the average contract costs per month, we calculated the costs to complete developmental testing. In order to determine costs to complete development, we first determined the percent change, year to year, in the program office’s development funding requirement estimate from 2018 to 2021. We then reduced our estimate using those percentages from 2018 to 2021. We discussed key aspects of F-35 development progress, including flight testing progress, with program management and contractor officials as well as DOD test officials and program test pilots. To assess the reliability of the test and cost data, we reviewed the supporting documentation and discussed the development of the data with DOD officials instrumental in producing them. 
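The cost-profiling step described in this methodology (determining the year-to-year percent change in the program office's development funding estimate and applying those percentages to GAO's own estimate) is not spelled out in detail; the sketch below shows one plausible reading of it. The fiscal year 2018 starting amount matches the roughly $1.3 billion discussed earlier in this report, but the year-to-year percent changes are placeholders, since the report does not publish them.

```python
# One plausible implementation of the cost-profiling step described in the
# methodology above. The percent changes are placeholders; the fiscal year 2018
# starting amount is the approximately $1.3 billion cited earlier in this report.

def profile_estimate(fy2018_amount: float, yearly_pct_change: dict) -> dict:
    """Carry a fiscal year 2018 estimate forward by applying year-to-year
    percent changes (taken from the program office's own funding profile)."""
    profile = {2018: fy2018_amount}
    amount = fy2018_amount
    for year in sorted(yearly_pct_change):
        amount *= 1 + yearly_pct_change[year]
        profile[year] = amount
    return profile


if __name__ == "__main__":
    # Placeholder year-over-year changes in the program office's funding profile.
    changes = {2019: -0.75, 2020: -0.50, 2021: -0.60}
    for year, amount in profile_estimate(1.3e9, changes).items():
        print(f"FY{year}: ~${amount / 1e6:.0f} million")
```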
In addition, we interviewed officials from the F-35 program office, Lockheed Martin, Pratt & Whitney, and the Director, Operational Test and Evaluation office to discuss development test plans, achievements, and test discoveries. To assess DOD’s proposed plans for future F-35 investments, we discussed cost and manufacturing efficiency initiatives, such as the economic order quantities approach, with contractor and program office officials to understand potential cost savings and plans. To assess the program’s follow-on modernization plans, we discussed the program’s plans with program office officials. We reviewed the fiscal year 2017 budget request to identify costs associated with the effort. We also reviewed and analyzed best practices identified by GAO and reviewed relevant DOD policies and statutes. We compared the acquisition plans to these policies and practices. To assess ongoing manufacturing and supply chain performance, we obtained and analyzed data related to aircraft delivery rates and work performance from January 2016 to December 2016. These data were compared to program objectives identified in these areas and used to identify trends. We reviewed data and briefings provided by the program office, Lockheed Martin, Pratt & Whitney, and the Defense Contract Management Agency in order to identify issues in manufacturing processes. We discussed reasons for delivery delays and plans for improvement with Lockheed Martin and Pratt & Whitney. We collected and analyzed data related to aircraft quality through December 2016. We collected and analyzed supply chain performance data and discussed steps taken to improve quality and deliveries with Lockheed Martin and Pratt & Whitney. We also analyzed reliability and maintainability data and discussed these issues with program and contractor officials. We assessed the reliability of DOD and contractor data by reviewing existing information about the data and interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from June 2016 to April 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As developmental testing nears completion, the F-35 program continues to address technical risks. The program has incorporated design changes that appear to have mitigated several of the technical risks that we have highlighted in prior reports, including problems with the arresting hook system and bulkhead cracks on the F-35B. However, over the past year, the program continued to address risks with the Helmet Mounted Display, the Autonomic Logistics Information System (ALIS), the ejection seat, and the engine seal that we have identified in the past. The program also identified new risks with the F-35C wing structure and catapult launches, and coolant tube insulation. The status of the Department of Defense’s (DOD) efforts to address these issues is as follows: Helmet Mounted Display: A new helmet intended to address shortfalls in night vision capability, among other things, was developed and delivered to the program in 2015. 
Developmental testing of the new helmet is mostly complete, and officials believe that issues such as latency and jitter have been addressed. Green glow, although improved, continues to add workload for the pilots when landing at sea. Officials believe that they have done as much as they can to fix the green glow problems with the hardware currently available. ALIS: ALIS continues to lack required capabilities; for instance, engine parts information is not included in the current version of ALIS, although it is expected to be completed in the spring of 2017. In 2016, officials began testing ALIS in an operational environment, which has led to some improvements. However, capabilities, including the prognostics health management downlink, have been deferred to follow-on modernization. In 2016, officials acknowledged compounding development delays and restructured the development schedule for ALIS. The new schedule shows that some capabilities that were planned in the earlier versions of ALIS will now be deferred to later versions. In April 2016, we reported that F-35 pilots and maintainers identified potential functionality risks to ALIS and that DOD lacked a plan to address these risks as key milestone dates approached, which could result in operational and schedule risks. Engine seal: Officials have identified a design change to address the technical problem that resulted in an engine fire in June 2014. This design change was validated and incorporated into production in 2015. Engine contractor officials identified 194 engines that needed to be retrofitted, and as of October 2016, 189 of those retrofits had been completed. The engine contractor, Pratt & Whitney, is paying for these retrofits. Ejection seat: In 2015, officials discovered that pilots who weigh less than 136 pounds could possibly suffer neck injuries during ejection. Officials stated that the risk of injury is due to the over-rotation of the ejection seat in combination with the thrust from the parachute deployment during ejection. Officials noted that although the problem was discovered during testing of the new Helmet Mounted Display, the helmet’s weight was not the root cause. The program has explored a number of solutions to ensure pilot safety, including installing a switch for lightweight pilots that would slow the release of the parachute deployment, installing a head support panel that would reduce head movement, and reducing the weight of the helmet. The final design completed qualification testing in 2016 and is expected to be incorporated into production lot 10. The cost of these changes has not yet been determined. F-35C outer-wings: In 2016, officials identified structural issues on the F-35C outer-wing when carrying an AIM-9X missile. In order to resume the test program, officials identified a design change that strengthens the wing material, which was incorporated onto a test aircraft. Officials expect to incorporate retrofits to delivered aircraft by 2019 and will incorporate changes into production in lot 10. F-35C catapult launches: In 2016, officials identified issues with violent, uncomfortable, and distracting movement during catapult launches. Specifically, officials stated that the nose gear strut moves up and down as an aircraft accelerates to takeoff, which can cause neck and jaw soreness for the pilot because the helmet and oxygen mask are pushed back on the pilot’s face during takeoff. 
This can be a safety risk, as the helmet can hit the canopy, possibly resulting in damage, and flight-critical symbology on the helmet can become difficult to read during and immediately after launch due to the rotation of the helmet on the pilot's head. Officials evaluated several options for adjusting the nose gear to alleviate the issue, but determined that none of the options would significantly affect the forces felt by the pilot. Officials subsequently assembled a team to identify a root cause and a redesign. According to officials, adjustments to the catapult system load settings are being considered to address this issue, and a design change to the aircraft may not be required. But flight testing of the proposed changes is required to confirm this solution. Insulation on coolant tubes: During maintenance on an aircraft in 2016, officials found that insulation around coolant tubes within the aircraft's fuel system was cracking and contaminating the fuel lines. According to officials, the problem was a result of a supplier using the incorrect material for insulation. The faulty insulation was installed on 57 aircraft—including the entire Air Force initial operational capability fleet—which were prohibited from flight until the insulation was removed. Officials determined that the insulation would not need to be replaced as the aircraft meets specifications without it. Officials are considering removing the insulation from the tubes across the rest of the aircraft going forward. As of January 2017, all of the fielded aircraft have been repaired and returned to flight. In addition to the contact named above, the following staff members made key contributions to this report: Travis Masters (Assistant Director), Emily Bond, Raj Chitikila, Kristine Hassinger, Karen Richey, Jillena Roberts, Megan Setser, Hai Tran, and Robin Wilson. F-35 Joint Strike Fighter: Continued Oversight Needed as Program Plans to Begin Development of New Capabilities. GAO-16-390. Washington, D.C.: April 14, 2016. F-35 Sustainment: DOD Needs a Plan to Address Risks Related to Its Central Logistics System. GAO-16-439. Washington, D.C.: April 14, 2016. F-35 Joint Strike Fighter: Preliminary Observations on Program Progress. GAO-16-489T. Washington, D.C.: March 23, 2016. F-35 Joint Strike Fighter: Assessment Needed to Address Affordability Challenges. GAO-15-364. Washington, D.C.: April 14, 2015. F-35 Sustainment: Need for Affordable Strategy, Greater Attention to Risks, and Improved Cost Estimates. GAO-14-778. Washington, D.C.: September 23, 2014. F-35 Joint Strike Fighter: Slower Than Expected Progress in Software Testing May Limit Initial Warfighting Capabilities. GAO-14-468T. Washington, D.C.: March 26, 2014. F-35 Joint Strike Fighter: Problems Completing Software Testing May Hinder Delivery of Expected Warfighting Capabilities. GAO-14-322. Washington, D.C.: March 24, 2014. F-35 Joint Strike Fighter: Restructuring Has Improved the Program, but Affordability Challenges and Other Risks Remain. GAO-13-690T. Washington, D.C.: June 19, 2013. F-35 Joint Strike Fighter: Program Has Improved in Some Areas, but Affordability Challenges and Other Risks Remain. GAO-13-500T. Washington, D.C.: April 17, 2013. F-35 Joint Strike Fighter: Current Outlook Is Improved, but Long-Term Affordability Is a Major Concern. GAO-13-309. Washington, D.C.: March 11, 2013. Fighter Aircraft: Better Cost Estimates Needed for Extending the Service Life of Selected F-16s and F/A-18s. GAO-13-51. Washington, D.C.: November 15, 2012. 
Joint Strike Fighter: DOD Actions Needed to Further Enhance Restructuring and Address Affordability Risks. GAO-12-437. Washington, D.C.: June 14, 2012. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-12-400SP. Washington, D.C.: March 29, 2012. Joint Strike Fighter: Restructuring Added Resources and Reduced Risk, but Concurrency Is Still a Major Concern. GAO-12-525T. Washington, D.C.: March 20, 2012. Joint Strike Fighter: Implications of Program Restructuring and Other Recent Developments on Key Aspects of DOD’s Prior Alternate Engine Analyses. GAO-11-903R. Washington, D.C.: September 14, 2011. Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Is Still Lagging. GAO-11-677T. Washington, D.C.: May 19, 2011. Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Still Lags. GAO-11-325. Washington, D.C.: April 7, 2011. Joint Strike Fighter: Restructuring Should Improve Outcomes, but Progress Is Still Lagging Overall. GAO-11-450T. Washington, D.C.: March 15, 2011.
The F-35 Joint Strike Fighter is DOD's most expensive and ambitious acquisition program. Acquisition costs alone are estimated at nearly $400 billion, and beginning in 2022, DOD expects to spend more than $14 billion a year on average for a decade. The National Defense Authorization Act for Fiscal Year 2015 included a provision for GAO to review the F-35 acquisition program annually until the program reaches full-rate production. This, GAO's second report in response to that mandate, assesses, among other objectives, (1) progress of remaining program development and testing and (2) proposed future plans for acquisition investments. To conduct this work, GAO reviewed and analyzed management reports and historical test data; discussed key aspects of F-35 development with program management and contractor officials; and compared acquisition plans to DOD policy and GAO acquisition best practices. Cascading F-35 testing delays could cost the Department of Defense (DOD) over a billion dollars more than currently budgeted to complete development of the F-35 baseline program. Because of problems with the mission systems software, known as Block 3F, program officials optimistically estimate that the program will need an additional 5 months to complete developmental testing. According to best practices, credible estimates are rooted in historical data. The program's projections are based on anticipated test point achievements and not historical data. GAO's analysis—based on historical F-35 flight test data—indicates that developmental testing could take an additional 12 months. These delays could affect the start of the F-35's initial operational test and evaluation, postpone the Navy's initial operational capability, and delay the program's full-rate production decision, currently planned for April 2019. Program officials estimate that a delay of 5 months will contribute to a total increase of $532 million to complete development. The longer delay estimated by GAO will likely contribute to an increase of more than $1.7 billion, approximately $1.3 billion of which will be needed in fiscal year 2018. Meanwhile, program officials project the program will need over $1.2 billion in fiscal year 2018 to start two efforts. First, DOD expects it will need over $600 million for follow-on modernization (known as Block 4). F-35 program officials plan to release a request for Block 4 development proposals nearly 1 year before GAO estimates that developmental testing of Block 3F—the last block of software for the F-35 baseline program—will be completed. DOD policy and GAO best practices state that requirements should be approved and a sound business case formed before requesting development proposals from contractors. Until Block 3F testing is complete, DOD will not have the knowledge it needs to present a sound business case for Block 4. Second, the program may ask Congress for more than $650 million in fiscal year 2018 to procure economic order quantities—bulk quantities of parts. However, as of January 2017, the details of this plan were unclear because DOD's 2018 budget was not final and negotiations with the contractors were ongoing. Internal control standards state that agencies should communicate with Congress; otherwise, Congress may not have the information it needs to make a fully informed budget decision for fiscal year 2018. Completing Block 3F development is essential for a sound business case and warrants funding priority over Block 4 and economic order quantities at this time. 
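To illustrate the schedule reasoning above, the sketch below shows how a simple burn-rate projection based on historically demonstrated test-point accomplishment can diverge from one based on planned rates. All figures in the sketch (remaining test points, monthly rates, and average monthly development cost) are assumptions for illustration only, not the program's actual data, and the calculation is a simplified stand-in for GAO's analysis rather than a reproduction of it.

def months_remaining(points_left, points_per_month):
    # Project remaining test duration from a test-point accomplishment rate.
    return points_left / points_per_month

remaining_points = 3000      # hypothetical Block 3F test points still to be flown
planned_rate = 430           # hypothetical points per month the program plans to achieve
historical_rate = 250        # hypothetical points per month actually achieved to date
monthly_cost = 140e6         # hypothetical average development cost per month (dollars)

plan_months = months_remaining(remaining_points, planned_rate)        # about 7 months
history_months = months_remaining(remaining_points, historical_rate)  # about 12 months
extra_cost = (history_months - plan_months) * monthly_cost

print(f"Plan-based estimate: {plan_months:.0f} months")
print(f"History-based estimate: {history_months:.0f} months")
print(f"Additional cost implied by the slower rate: ${extra_cost / 1e6:.0f} million")

Under these assumed inputs, the history-based projection runs several months longer than the plan-based one, and the difference translates directly into additional development cost, which is the basic relationship the estimates above describe.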
GAO recommends that DOD use historical data to reassess the cost of completing development of Block 3F, complete Block 3F testing before soliciting contractor proposals for Block 4 development, and identify for Congress the cost and benefits associated with procuring economic order quantities of parts. DOD did not concur with the first two recommendations and partially concurred with the third while outlining actions to address it. GAO continues to believe its recommendations are valid, as discussed in the report.
The federal government's response to major disasters and emergencies in the United States is guided by the Department of Homeland Security's National Response Framework. The framework is based on a tiered, graduated response; that is, incidents are managed at the lowest jurisdictional level and supported by additional higher-tiered response capabilities as needed. Overall coordination of federal incident-management activities is generally the responsibility of the Department of Homeland Security. Within the Department of Homeland Security, FEMA is responsible for coordinating and integrating the preparedness of federal, state, local, and nongovernmental entities. In this capacity, FEMA engages in a range of planning efforts to prepare for and mitigate the effects of major disasters and emergencies. For example, FEMA is currently developing regional all-hazards and incident-specific plans intended to cover the full spectrum of hazards, including those that are more likely to occur in each region. FEMA expects to complete its current regional planning cycle by 2018. Local and county governments respond to emergencies daily using their own capabilities and rely on mutual aid and other types of assistance agreements with neighboring governments when they need additional resources. For example, county and local authorities are likely to have the capabilities needed to adequately respond to a small-scale incident, such as a local factory explosion, and therefore would not request additional resources. For larger-scale incidents, when resources are overwhelmed, local and county governments will request assistance from the state. States have resources, such as the National Guard of each state, that they can marshal to help communities respond and recover. If additional capabilities are required, states may request assistance from one another through interstate mutual aid agreements, such as the Emergency Management Assistance Compact, or the governors can seek federal assistance. Various federal agencies play lead or supporting roles in responding to major disasters and emergencies, based on their authorities and capabilities, and the nature of the incident when federal assistance is required. For example, the Department of Energy is the lead federal agency for the reestablishment of damaged energy systems and components, and may provide technical expertise during an incident involving radiological and nuclear materials. DOD supports the lead federal agency in responding to major disasters and emergencies when (1) state, local, and other federal capabilities are overwhelmed, or unique defense capabilities are required; (2) it is directed to do so by the President or the Secretary of Defense; or (3) assistance is requested by the lead federal agency. When deciding whether to commit defense resources to a request for assistance by the lead federal agency, DOD evaluates the request against six criteria: legality, lethality, risk, cost, readiness, and appropriateness of the circumstances. A number of DOD organizations have roles in planning for and responding to major disasters and emergencies. The Assistant Secretary of Defense for Homeland Defense and Americas' Security Affairs: The Assistant Secretary of Defense for Homeland Defense and Americas' Security Affairs serves as the principal civilian advisor to the Secretary of Defense on civil support issues. 
The Joint Staff: The Joint Staff coordinates with NORTHCOM and PACOM to ensure that civil support planning efforts are compatible with the department's war planning and advises the military services on the department's policy, training, and joint exercise development. Combatant commands: NORTHCOM and PACOM are responsible for carrying out the department's civil support mission, and have command and control authority depending on the location. The NORTHCOM area of responsibility for civil support comprises the contiguous 48 states, Alaska, and the District of Columbia. Outside of this area, NORTHCOM may also support civil authorities' major disaster and emergency response operations in the Commonwealth of Puerto Rico and the U.S. Virgin Islands. PACOM has these responsibilities for the Hawaiian Islands and U.S. territories in the Pacific. Other Defense Organizations: Other DOD organizations, such as the Army Corps of Engineers, the National Geospatial-Intelligence Agency, and the Defense Logistics Agency, support FEMA during major disasters and emergencies by providing power generation capabilities, fuel, and logistics support as lead of several emergency support functions cited in the National Response Framework. The Army Corps of Engineers in particular serves as the lead for Emergency Support Function 3, Public Works and Engineering. National Guard Bureau: The National Guard Bureau serves as the channel of communications on all matters relating to the National Guard between DOD and the States. In the aftermath of Hurricane Katrina, NORTHCOM assigned a defense coordinating officer with associated support staff (known as a defense coordinating element) in each of FEMA's 10 regional offices. Defense coordinating officers are senior-level military officers with joint service experience and training on the National Response Framework and the Department of Homeland Security's National Incident Management System. Defense coordinating officers work closely with federal, state, and local officials to determine what additional or unique capabilities DOD can provide to mitigate the effects of a major disaster or emergency. Figure 1 shows the 10 FEMA regions. According to DOD officials, dual-status commanders—active duty military or National Guard officers who coordinate state and federal responses to civil support incidents and events—have been used for select planned and special events since 2004, and more recently for civil support incidents. The dual-status commander construct provides the intermediate link between the federal and the state chains of command and is intended to promote unity of effort between federal and state forces to facilitate a rapid response to save lives, prevent human suffering, and protect property during major disasters and emergencies. The Secretary of Defense must authorize, and the Governor must consent to, designation of an officer to serve as a dual-status commander. During Hurricane Sandy, dual-status commanders served in New York, New Jersey, Maryland, Massachusetts, New Hampshire, and Rhode Island. The National Defense Authorization Act for Fiscal Year 2012 provided that a dual-status commander should be the usual and customary command and control arrangement in situations when the armed forces and National Guard are employed simultaneously in support of civil authorities, including major disasters and emergencies. 
When serving in a Title 32 or state active duty status, the National Guard of a state is under the command and control of the state's governor. DOD and National Guard personnel serving on federal active duty, sometimes referred to as being in Title 10 status, are under the command and control of the President and the Secretary of Defense. Dual-status commanders operate in both statuses simultaneously and report to both chains of command. Command and control refers to the exercise of authority and direction by a properly designated commander over assigned forces in the accomplishment of the mission. NORTHCOM and PACOM are updating their existing civil support plans to include a complex catastrophe, as directed, but the plans will not identify the capabilities that could be provided to execute them, as required, until FEMA completes its regional planning efforts in 2018. In the interim, combatant command officials have not determined how they will incorporate into their civil support plans regional capability information from those FEMA regions that have completed their plans. NORTHCOM and PACOM are updating their civil support plans to include a complex catastrophe. However, the commands are delaying the identification of capabilities needed to execute the plans, as required by the Joint Staff, until FEMA completes its regional planning efforts. The Secretary of Defense's July 2012 memorandum directed NORTHCOM and PACOM to update their civil support plans—to include preparing for a complex catastrophe—by September 2013 and September 2014, respectively. In September 2012, the Joint Staff issued more specific guidance to the commands, directing them to, among other things, identify within the civil support plans required DOD forces and capabilities for responding to a complex catastrophe by the September 2013 and September 2014 deadlines. NORTHCOM officials told us that they expect the command to update its civil support plan by September 2014, and that the plan would describe some general strategic-level complex catastrophe scenarios and identify general force requirements, such as the types of military units that would be needed to respond to a complex catastrophe. However, according to NORTHCOM officials, the command will not identify DOD capabilities that could be provided to civil authorities during a complex catastrophe until FEMA completes its plans. According to PACOM officials, PACOM also expects to update its civil support plan by September 2014. These officials told us that PACOM's plan will describe a complex catastrophe scenario that begins with an infectious disease, followed by a typhoon that leads to an earthquake that triggers a tsunami. PACOM also plans to identify critical infrastructure likely to be impacted by this scenario. However, officials stated that PACOM's civil support plan will not identify the capabilities needed to execute the plan, despite the requirement specified in the Joint Staff's planning guidance. Rather, NORTHCOM and PACOM plan to continue to work with FEMA to identify the DOD capabilities that could be provided to respond to a complex catastrophe and include them in subsequent versions of the civil support plans once FEMA has completed its plans for each of the 10 FEMA regions during the next few years. According to FEMA officials, DOD's civil support concept plans are intended to be coordinated with FEMA's regional all-hazards and incident-specific plans, but these plans are not scheduled to be completed until 2018. 
FEMA is currently working with each of its regions to update both all-hazards and incident-specific plans, which are updated every 5 years. FEMA's all-hazards plans are intended to cover the spectrum of hazards, including accidents; natural disasters; terrorist attacks; and chemical, biological, nuclear, and radiological events. Incident-specific plans are intended to address those specific hazards that are believed to have a greater probability of occurring in a region when compared to other types of hazards and have unique response requirements. Each FEMA region has a collaborative team that is responsible for developing a regional all-hazards plan that details capabilities required at the regional level for supporting emergency response. While FEMA's current efforts to develop regional plans are not scheduled to be completed until 2018, FEMA officials told us that their process to develop and update incident-specific plans is ongoing as needs arise in the regions. As of August 2013, half of the 10 FEMA regions had completed updating their all-hazards plan, and none of the 10 FEMA regions had completed updating their incident-specific plans. According to NORTHCOM officials, these FEMA regional plans are intended to, among other things, inform DOD of the local and state-level capabilities available for responding to a complex catastrophe in each FEMA region, as well as any capability gaps that might ultimately have to be filled by DOD or another federal agency. DOD's defense coordinating officers have taken some initial steps to coordinate with FEMA; however, NORTHCOM, which is responsible for a majority of the civil support mission for DOD, has not determined how it will incorporate information produced by these efforts into its civil support plan. DOD has defense coordinating officers in each of FEMA's 10 regions who work closely with federal, state, and local officials to determine what specific capabilities DOD can provide to mitigate the effects of major disasters and emergencies when FEMA requests assistance. Defense coordinating officers are senior-level military officers with joint service experience and training on the National Response Framework and the Department of Homeland Security's National Incident Management System. Currently they are coordinating with FEMA and other federal, state, and local agencies to determine regional and state capability requirements for a complex catastrophe in each of the regions. For example, the defense coordinating officer in FEMA Region IX, one of the regions that has completed its all-hazards plan, has helped the region develop bundled mission assignments for its regional plan that pre-identify a group of capabilities the region will require from DOD during a complex catastrophe to fill an identified capability gap, such as aircraft, communications, medical, and mortuary support for responding to an earthquake in southern California. The bundled mission assignments are specific to the region's plans and are intended to expedite the process of preparing a request for assistance so that DOD can deliver the required capabilities more quickly. Similarly, within FEMA Region IV, which has also completed its all-hazards plan, the defense coordinating officer has helped to develop a list of specific response capabilities that DOD can plan to provide to civil authorities when needed. FEMA and the defense coordinating officers are exploring the possibility of developing bundled mission assignments for complex catastrophes for all of the FEMA regions. 
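The bundled mission assignment concept described above is, in essence, a pre-planned mapping from a scenario and an identified capability gap to the set of DOD capabilities a region expects to request. The sketch below is a hypothetical illustration of how such a bundle might be recorded so that a request for assistance can be assembled quickly; the field names, scenario, and capability entries are assumptions for illustration and do not represent FEMA's or DOD's actual data structures.

from dataclasses import dataclass, field
from typing import List

@dataclass
class BundledMissionAssignment:
    region: str          # FEMA region the bundle belongs to
    scenario: str        # incident the bundle was planned against
    capability_gap: str  # gap the bundle is intended to fill
    capabilities: List[str] = field(default_factory=list)  # pre-identified DOD capabilities

    def to_request(self):
        # Render the bundle as a pre-filled request for assistance.
        return {
            "region": self.region,
            "scenario": self.scenario,
            "gap": self.capability_gap,
            "requested_capabilities": self.capabilities,
        }

# Example loosely modeled on the Region IX earthquake planning described above;
# the specific entries are illustrative assumptions.
socal_quake = BundledMissionAssignment(
    region="IX",
    scenario="Southern California earthquake",
    capability_gap="Immediate lifesaving and life-sustaining support",
    capabilities=["rotary-wing aircraft", "communications", "medical", "mortuary affairs"],
)
print(socal_quake.to_request())

Pre-identifying the bundle in this way shifts the work of defining requirements to the planning phase, which is the time-saving the report attributes to bundled mission assignments.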
However, NORTHCOM and PACOM have not determined how this regional capability information will be incorporated into their civil support plans. According to DOD doctrine, an effective whole of government approach is only possible when every agency understands the competencies and capabilities of its partners and works together to achieve common goals. This doctrine further states that DOD should interact with non-DOD agencies to gain a mutual understanding of their response capabilities and limitations. By working through the defense coordinating officers to identify an interim set of specific capabilities that DOD could provide in response to a complex catastrophe—instead of waiting for FEMA to complete its five-year regional planning processes and then updating civil support plans—NORTHCOM and PACOM can enhance their preparedness and more effectively mitigate the risk of an unexpected capability gap during the five-year period until FEMA completes its regional plans in 2018. DOD has established an overall command and control framework for a federal military civil support response. However, the command and control structure for federal military forces during incidents affecting multiple states such as complex catastrophes is unclear because DOD has not yet prescribed the roles, responsibilities, and relationships of command elements that may be involved in responding to such incidents. DOD guidance and NORTHCOM civil support plans establish a framework for the command and control of federal military civil support, identifying a range of command elements and structures that may be employed depending on the type, location, magnitude, and severity of an incident, and the scope and complexity of DOD assistance. This framework addresses command and control for federal military forces operating independently or in parallel with state National Guard forces, and it also provides a model for the integrated command and control of federal military and state National Guard civil support. Joint Doctrine and NORTHCOM’s civil support concept plans collectively prescribe specific federal military command and control procedures and relationships for certain types of civil support incidents— such as radiological emergencies—and also identify potential command and control arrangements for incidents of varying scale. For example, for small-scale civil support responses, NORTHCOM’s 2008 civil support concept plan provides that a defense coordinating officer may be used to command and control federal military forces so long as the response force does not exceed the officer’s command and control capability. Should an event exceed that threshold, a task force may be needed to command and control medium-scale military activities. Such a task force could be composed of personnel from a single military service; or, if the scope, complexity, or other factors of an incident require capabilities of at least two military departments, a joint task force may be established. The size, composition, and capabilities of a joint task force can vary considerably depending on the mission and factors related to the operational environment, including geography of the area, nature of the crisis, and the time available to accomplish the mission. 
For large-scale civil support responses, per the civil support concept plan, NORTHCOM can establish or expand an existing joint task force with multiple subordinate joint task forces, or appoint one or more of its land, air, or maritime functional component commanders to oversee federal forces. U.S. Army North, located at Fort Sam Houston, Texas, is NORTHCOM’s joint force land component commander. Air Force North, located at Tyndall Air Force Base near Panama City, Florida, is NORTHCOM’s joint force air component commander. U.S. Fleet Forces Command, located in Norfolk, Virginia, is NORTHCOM’s joint force maritime component commander. According to NORTHCOM’s civil support concept plan, command and control of federal military forces providing civil support is generally accomplished using the functional component command structure. Within this structure, NORTHCOM transfers operational control of federal military forces to a designated functional component commander. This commander may then deploy a subordinate task force or multiple task forces to execute command and control. For example, for land-based incidents, NORTHCOM would transfer operational control of federal forces to U.S. Army North, which could then deploy one or more of its subordinate command and control task forces. Figure 2 depicts a functional component command and control structure for a land-based federal military response to a major disaster or emergency in the NORTHCOM area of responsibility. In certain cases, such as large-scale civil support responses, federal military and state National Guard forces may operate simultaneously in support of civil authorities. In such instances, a dual-status commander— with authority over both federal military forces and state National Guard forces—should be the usual and customary command arrangement. Federal military forces allocated to the dual-status commander through the request for assistance process are to be under that commander’s control. For events or incidents that affect multiple states, a dual-status commander may be established in individual states. Dual status commanders do not have command and control over state National Guard forces in states that have not designated that commander as a dual status commander. According to NORTHCOM’s civil support concept plan, dual-status commanders provide the advantage of a single commander who is authorized to make decisions regarding issues that affect both federal and state forces under their command, thereby enhancing unity of effort. For example, dual-status authority allows the commander to coordinate and de-conflict federal and state military efforts while maintaining separate and distinct chains of command. Unlike some federal military task forces, dual-status commanders, when employed, are under the direct operational control of NORTHCOM, operating outside of the functional component command structure. Dual-status commanders also fall under a state chain-of-command that extends up through the state Adjutant General and Governor. Figure 3 depicts a command and control structure for a land-based, single-state federal military response to a major disaster or emergency in the NORTHCOM area of responsibility when a dual status commander is employed. The Joint Action Plan for Developing Unity of Effort emphasizes the importance of properly configured command and control arrangements, and DOD doctrine states that operational plans should identify the command structure expected to exist during their implementation. 
The Joint Action Plan also states that there is a likelihood that the United States will face a catastrophic incident affecting multiple states, and that past multistate emergencies demonstrated that a coordinated and expeditious state-federal response is crucial in order to save and sustain lives. However, the command and control structure for federal military forces during multistate incidents is unclear because DOD has not yet prescribed the roles, responsibilities, and relationships among some of the command elements that may be involved in responding to such incidents. This gap in the civil support framework was illustrated by recent events such as National Level Exercise 2011—which examined DOD's response to a complex catastrophe in the New Madrid Seismic Zone—and the federal military response to Hurricane Sandy led by NORTHCOM in 2012. Citing this gap, officials we spoke with from across the department—including NORTHCOM, U.S. Army North, the Office of the Assistant Secretary of Defense for Homeland Defense and Americas' Security Affairs, the Joint Staff, and two of the defense coordinating elements—told us that the lack of a multistate command and control structure has created uncertainty regarding the roles and responsibilities of command elements that could be involved in response efforts.

National Level Exercise 2011

National Level Exercise 2011 simulated a major earthquake in the central United States region of the New Madrid Seismic Zone that caused widespread casualties and damage to critical infrastructure across eight states. The exercise took place in May 2011 and focused on integrated multi-jurisdictional catastrophic response and recovery activities among over 10,000 federal, regional, state, local, and private sector participants at more than 135 sites across the country. National Level Exercise 2011 helped to identify a gap in DOD's federal military command and control structure for multistate incidents. The exercise highlighted uncertainty regarding the roles and relationships among federal military command elements—and between such command elements and responding forces. For example, officials from U.S. Army North told us that the exercise revealed that not having a level of command between the dual-status commanders and NORTHCOM did not work well for such a large-scale, multistate incident, in part, because NORTHCOM, in the absence of an operational-level command element, faced challenges in managing the operations of federal military forces across a widespread area. According to DOD doctrine, operational-level commands, such as a functional component commander like the joint force land component commander, can directly link operations to strategic objectives. To address this gap, two task forces were employed to operate between the dual-status commanders and NORTHCOM. While the task forces improved the overall command structure, according to Army officials, there was confusion regarding the role of the task forces in relation to the dual-status commanders, as well as to federal military forces in states without a dual-status commander—which some of the state governors involved in this exercise chose not to appoint. National Level Exercise 2011 illustrated other potential challenges associated with the lack of a multistate command and control structure. 
For example, according to NORTHCOM's publication on dual-status commander standard operating procedures, NORTHCOM is responsible for coordinating the allocation of federal military forces among multiple states or areas—that is, determining where and how to employ federal military forces, particularly when there are similar requests for assistance. NORTHCOM officials told us that the command, looking at the totality of requests for assistance, would normally make such force employment determinations based on FEMA's prioritization of requests. However, in the absence of a multistate command and control structure to provide the necessary situational awareness over forces already engaged or available, NORTHCOM may be impaired in its ability to make additional informed decisions regarding the appropriate allocation of federal military resources. For example, at the outset of a complex catastrophe, DOD should expect to receive hundreds of requests with possibly redundant requirements and no prioritization. Similarly, a preliminary NORTHCOM analysis found that the current request for assistance process is unlikely to handle, in a timely manner, the demands that a complex catastrophe would create, and that the prioritization of these requests would be unclear in the initial hours and days of the incident. Army officials told us that without an intermediate command entity to collate operational data and inform force allocation decisions, it was unclear how DOD would prioritize requests for federal military resources when there are multiple requests for the same or similar capabilities. Officials from the Joint Staff and defense coordinating elements echoed these concerns, noting that it is unclear how DOD would prioritize the allocation of federal military forces across an affected multistate area when two or more dual-status commanders are in place.

Civil Support Operations during Hurricane Sandy

DOD's activities during and after Hurricane Sandy in October and November 2012 represented its largest civil support response since Hurricane Katrina in 2005. DOD received an unprecedented number of requests for assistance, specifically in the areas of power restoration and gasoline distribution. According to DOD, the cascading effects of the failures of critical infrastructure in New York and New Jersey—including mass power outages, major transportation disturbances, and disruption of the fuel distribution system—resembled those of a complex catastrophe. Challenges associated with the lack of a multistate command and control construct were evident in the federal military response to Hurricane Sandy, which marked the first occasion in which multiple dual-status commanders were employed. For example, NORTHCOM officials told us that the command recognized the need for a command and control element between the dual-status commanders and NORTHCOM and, in early November 2012, employed a joint coordinating element—a concept without definition or doctrinal basis. According to DOD after action reports for Hurricane Sandy, the purpose of the joint coordinating element, employed as an extension of the joint force land component commander, was to aid in the coordination, integration, and synchronization of federal military forces. However, officials we spoke with from across the department told us that the joint coordinating element's role was neither well-defined nor well-communicated, rendering it largely ineffective. 
For example, officials from the Office of the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs told us that uncertainty regarding the role of the joint coordinating element contributed to confusion during DOD’s response to Hurricane Sandy. Additionally, officials from one of the defense coordinating elements involved in the federal military response to Hurricane Sandy told us that the roles and responsibilities of the dual-status commander, joint coordinating element, and defense coordinating officer were unclear. According to these officials, such uncertainty hampered unity of command across state boundaries and created confusion regarding command and control relationships and force allocation across the affected multistate area. Officials from U.S. Army North and the Joint Staff similarly told us that there were challenges in allocating federal military forces during the response to Hurricane Sandy, in part, because of the command and control structure that was employed. Joint Staff officials noted that DOD’s joint coordinating element had limited visibility and control over federal military forces. DOD after action reports covering the federal military response to Hurricane Sandy also found that the command and control structure for federal military forces operating in the affected area was not clearly defined, resulting in the degradation of situational awareness and unity of effort, and the execution of missions without proper approval. For example, a U.S. Army North after action review concluded that while the joint coordinating element initially had a positive effect on situational awareness, inconsistencies in its purpose and task caused numerous problems. Table 1 shows select Hurricane Sandy after action report observations pertaining to command and control. According to NORTHCOM officials, the command has recognized the need for a multistate command and control construct, is analyzing this issue, and plans to incorporate the results of its analysis into the command’s updated civil support concept plan by October 2013. NORTHCOM previously produced an analysis in March 2012 that identified a command and control gap for multistate incidents along with potential mitigation options, but this analysis was never approved. Also, we recommended in 2012 that DOD develop implementation guidance for the dual-status commanders that may partially address these challenges by covering, among other things, criteria for determining when and how to use dual-status commanders during civil support incidents affecting multiple states. DOD agreed with this recommendation, and officials from the Office of the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs told us that they are in the process of drafting such guidance. DOD has established a command and control framework for single-state civil support responses; but, until it develops, clearly defines, communicates, and implements a multistate command and control construct, federal military forces responding to a multistate event will likely face a range of operational ambiguities that could heighten the prospects for poorly synchronized response to major disasters and emergencies. For example, uncertainty regarding command structure may negatively affect the flow of information and prevent commanders from having adequate situational awareness over DOD’s response, leading to reduced operational effectiveness and ineffective use of DOD forces. 
By identifying roles, responsibilities, and command relationships during multistate incidents such as complex catastrophes, DOD will be better positioned to manage and allocate forces across a multistate area, and ensure effective and organized response operations. DOD acknowledged in its 2013 strategy for homeland defense and civil support that the department is expected to respond rapidly and effectively to civil support incidents, including complex catastrophes—incidents that would cause extraordinary levels of mass casualties and severely affect life-sustaining infrastructure. The effects of such an incident would exceed those caused by any previous domestic incident. NORTHCOM and PACOM, the combatant commands responsible for carrying out the department’s civil support mission, cannot effectively plan for complex catastrophes in the absence of clearly defined capability requirements and any associated capability gaps. Consequently, DOD’s decision to delay identifying capabilities that could be requested by civil authorities during a complex catastrophe until FEMA completes its five-year regional planning efforts may lead to a delayed response from DOD and ineffective intergovernmental coordination should a catastrophic event occur before 2018. An interim set of specific capabilities that DOD could refine as FEMA completes its regional planning process should help to mitigate the risk of a potential capability gap during a complex catastrophe. Further, developing, clearly defining, communicating, and implementing a command and control construct for federal military response to multistate civil support incidents would also likely enhance the effectiveness of DOD’s response. National Level Exercise 11 and Hurricane Sandy highlighted this critical gap in command and control. Without a multistate command and control construct, DOD’s response to a multistate incident, such as a complex catastrophe, may be delayed, uncoordinated, and could result in diminished efficacy. We recommend that the Secretary of Defense take the following two actions: (1) To reduce the department’s risk in planning for a complex catastrophe and enhance the department’s ability to respond to a complex catastrophe through at least 2018, direct the Commanders of NORTHCOM and PACOM to work through the defense coordinating officers to identify an interim set of specific DOD capabilities that could be provided to prepare for and respond to complex catastrophes while FEMA completes its five-year regional planning cycle. (2) To facilitate effective and organized civil support response operations, direct the Commander of NORTHCOM—in consultation with the Joint Staff and Under Secretary of Defense for Policy, acting through the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs—to develop, clearly define, communicate, and implement a construct for the command and control of federal military forces during multistate civil support incidents such as complex catastrophes—to include the roles, responsibilities, and command relationships among potential command elements. We provided a draft of this report to DOD for review and comment. DOD concurred with both recommendations and cited ongoing activities to address our recommendations. DOD’s comments are reprinted in their entirety in appendix II. In addition, DOD provided technical comments, which we have incorporated into the report as appropriate. 
DOD concurred with our recommendation to identify an interim set of specific capabilities that could be provided to prepare for and respond to complex catastrophes. DOD stated that it recognizes the need for detailed planning to ensure the department can provide the needed capabilities, and is planning to work with defense coordinating officers and emergency support function leads to develop a set of capabilities. DOD also concurred with our recommendation to develop, clearly define, communicate, and implement a construct for command and control of federal military forces during multistate civil support incidents such as complex catastrophes. DOD stated that it recognizes the need for this and will ensure, as part of its contingency planning, that a range of command and control options are available for NORTHCOM and PACOM during multistate incidents. We believe that these actions will better position DOD to effectively and efficiently provide support during a complex catastrophe. We also provided a draft of this report to DHS for review and comment. DHS provided technical comments, which were incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will distribute this report to the Secretary of Defense, the Acting Secretary of Homeland Security, and other relevant officials. We are also sending copies of this report to interested congressional committees. The report is also available on our Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in Appendix III. To determine the extent to which the Department of Defense (DOD) has planned for and identified capabilities to respond to a complex catastrophe, we assessed current DOD civil support planning documents, guidance, and after action reports from civil support incidents and exercises since 2011, and met with Office of the Secretary of Defense, Joint Staff, combatant command, military service, defense agency, and Reserve officials. We assessed planning guidance issued by the Joint Staff and Secretary of Defense and DOD joint doctrine against interviews with DOD and combatant command officials to determine how DOD was incorporating a complex catastrophe into civil support plans. We also met with several defense coordinating officers and Federal Emergency Management Agency (FEMA) officials to determine what planning was being conducted at the regional level. We met with defense coordinating officers from regions that were impacted by Hurricane Sandy, participated in National Level Exercise 2011, and completed their regional plans to gain an understanding of issues across a number of FEMA regions. NORTHCOM's deadline for completion of a complex catastrophe plan is September 2013, and U.S. Pacific Command's (PACOM) deadline is September 2014; these dates coincide with the commands' planning cycles. To determine NORTHCOM's and PACOM's planning requirements, we reviewed the July 2012 Secretary of Defense memorandum on complex catastrophes that requires NORTHCOM and PACOM to incorporate complex catastrophe scenarios into the commands' civil support plans, and the Joint Staff planning order related to complex catastrophes. 
We compared planning requirements directed by the July 2012 Secretary of Defense memorandum on complex catastrophes and other applicable guidance to the federal and regional-level planning efforts to identify capabilities for a complex catastrophe. We met with officials at NORTHCOM and PACOM to determine how the commands are incorporating a complex catastrophe scenario into civil support plans by the September 2013 and September 2014 deadlines. Further, we reviewed recent GAO reports describing long-standing problems in planning and identifying civil support capabilities and gaps. To determine the extent to which DOD has established a command and control construct for complex catastrophes and other multistate incidents, we analyzed DOD doctrine and plans related to operational planning and command and control. Specifically, we assessed DOD and interagency guidance, including NORTHCOM's civil support plan, DOD's civil support joint publication, the Joint Action Plan for Developing Unity of Effort, and DOD after action reports from Hurricane Sandy, to determine how the existing command and control construct addressed complex catastrophes and other multistate incidents. We also reviewed laws relevant to disaster response and domestic employment of federal military forces, including the Stafford Act and certain provisions of Title 10 of the United States Code, as well as national-level policy pertaining to response coordination and planning, including the National Response Framework and National Incident Management System. In addition, we reviewed relevant documentation—including briefings, analyses, and after action reports related to Hurricane Sandy—and met with Office of the Secretary of Defense, Joint Staff, combatant command, military service, and National Guard officials to determine the extent to which DOD had analyzed multistate command and control issues. In addressing both of our audit objectives, we met with officials from the DOD and the Department of Homeland Security organizations identified in table 2. We conducted this performance audit from August 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Marc Schwartz, Assistant Director; Tracy Burney; Ryan D'Amore; Susan Ditto; Gina Flacco; Michael Silver; Amie Steele; and Michael Willems made key contributions to this report. Homeland Defense: DOD Needs to Address Gaps in Homeland Defense and Civil Support Guidance. GAO-13-128. Washington, D.C.: October 24, 2012. Homeland Defense: Continued Actions Needed to Improve Management of Air Sovereignty Alert Operations. GAO-12-311. Washington, D.C.: January 31, 2012. Homeland Defense and Weapons of Mass Destruction: Additional Steps Could Enhance the Effectiveness of the National Guard's Life Saving Response Forces. GAO-12-114. Washington, D.C.: December 7, 2011. Homeland Defense: Actions Needed to Improve Planning and Coordination for Maritime Operations. GAO-11-661. Washington, D.C.: June 22, 2011. Intelligence, Surveillance, and Reconnaissance: DOD Needs a Strategic, Risk-Based Approach to Enhance Its Maritime Domain Awareness. GAO-11-621. Washington, D.C.: June 20, 2011. 
Homeland Defense: DOD Needs to Take Actions to Enhance Interagency Coordination for Its Homeland Defense and Civil Support Missions. GAO-10-364. Washington, D.C.: March 30, 2010. Homeland Defense: DOD Can Enhance Efforts to Identify Capabilities to Support Civil Authorities during Disasters. GAO-10-386. Washington, D.C.: March 30, 2010. Homeland Defense: Planning, Resourcing, and Training Issues Challenge DOD's Response to Domestic Chemical, Biological, Radiological, Nuclear and High-Yield Explosive Incidents. GAO-10-123. Washington, D.C.: October 7, 2009. Homeland Defense: U.S. Northern Command Has a Strong Exercise Program, but Involvement of Interagency Partners and States Can Be Improved. GAO-09-849. Washington, D.C.: September 9, 2009. National Preparedness: FEMA Has Made Progress, but Needs to Complete and Integrate Planning, Exercise, and Assessment Efforts. GAO-09-369. Washington, D.C.: April 30, 2009. Emergency Management: Observations on DHS's Preparedness for Catastrophic Disasters. GAO-08-868T. Washington, D.C.: June 11, 2008. National Response Framework: FEMA Needs Policies and Procedures to Better Integrate Non-Federal Stakeholders in the Revision Process. GAO-08-768. Washington, D.C.: June 11, 2008. Homeland Defense: Steps Have Been Taken to Improve U.S. Northern Command's Coordination with States and the National Guard Bureau, but Gaps Remain. GAO-08-252. Washington, D.C.: April 16, 2008. Homeland Defense: U.S. Northern Command Has Made Progress but Needs to Address Force Allocation, Readiness Tracking Gaps, and Other Issues. GAO-08-251. Washington, D.C.: April 16, 2008. Continuity of Operations: Selected Agencies Tested Various Capabilities during 2006 Governmentwide Exercise. GAO-08-105. Washington, D.C.: November 19, 2007. Homeland Security: Preliminary Information on Federal Action to Address Challenges Faced by State and Local Information Fusion Centers. GAO-07-1241T. Washington, D.C.: September 27, 2007. Homeland Security: Observations on DHS and FEMA Efforts to Prepare for and Respond to Major and Catastrophic Disasters and Address Related Recommendations and Legislation. GAO-07-1142T. Washington, D.C.: July 31, 2007. Influenza Pandemic: DOD Combatant Commands' Preparedness Efforts Could Benefit from More Clearly Defined Roles, Resources, and Risk Mitigation. GAO-07-696. Washington, D.C.: June 20, 2007. Homeland Security: Preparing for and Responding to Disasters. GAO-07-395T. Washington, D.C.: March 9, 2007. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation's Preparedness, Response, and Recovery System. GAO-06-903. Washington, D.C.: September 6, 2006. Homeland Defense: National Guard Bureau Needs to Clarify Civil Support Teams' Mission and Address Management Challenges. GAO-06-498. Washington, D.C.: May 31, 2006. Hurricane Katrina: Better Plans and Exercises Needed to Guide the Military's Response to Catastrophic Natural Disasters. GAO-06-643. Washington, D.C.: May 15, 2006. Hurricane Katrina: GAO's Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006. Emergency Preparedness and Response: Some Issues and Challenges Associated with Major Emergency Incidents. GAO-06-467T. Washington, D.C.: February 23, 2006. GAO's Preliminary Observations Regarding Preparedness and Response to Hurricanes Katrina and Rita. GAO-06-365R. Washington, D.C.: February 1, 2006. 
Homeland Security: DHS’ Efforts to Enhance First Responders’ All- Hazards Capabilities Continue to Evolve. GAO-05-652. Washington, D.C.: July 11, 2005. Homeland Security: Process for Reporting Lessons Learned from Seaport Exercises Needs Further Attention. GAO-05-170. Washington, D.C.: January 14, 2005.
The United States continues to face an uncertain and complicated security environment, as major disasters and emergencies, such as the Boston Marathon bombings and Hurricane Sandy illustrate. DOD supports civil authorities' response to domestic incidents through an array of activities collectively termed civil support. In July 2012, DOD began to plan for federal military support during a complex catastrophe--such as a large earthquake that causes extraordinary levels of casualties or damage, and cascading failures of critical infrastructure. GAO was asked to assess DOD's planning and capabilities for a complex catastrophe. This report assesses the extent to which DOD has (1) planned for and identified capabilities to respond to complex catastrophes, and (2) established a command and control construct for complex catastrophes and other multistate incidents. To do so, GAO analyzed civil support plans, guidance, and other documents, and interviewed DOD and FEMA officials. U.S. Northern Command (NORTHCOM) and U.S. Pacific Command (PACOM) are updating their existing civil support plans to include a complex catastrophe scenario, as directed by the Secretary of Defense and the Joint Staff. However, the commands are delaying the identification of capabilities that could be provided to execute the plans until the Federal Emergency Management Agency (FEMA), the lead federal response agency, completes its regional planning efforts in 2018. NORTHCOM officials told us that the command's civil support plan would describe some general force requirements, such as types of military units, but that it will not identify specific capabilities that could be provided to civil authorities during a complex catastrophe. Similarly, according to PACOM officials, PACOM's plan also will not identify such capabilities. Still, defense coordinating officers--senior military officers who work closely with federal, state, and local officials in FEMA's regional offices--have taken some initial steps to coordinate with FEMA during its regional planning process to identify capabilities that the Department of Defense (DOD) may be required to provide in some regions. For example, a defense coordinating officer has helped one of the FEMA regions that has completed its regional plan to develop bundled mission assignments that pre-identify a group of capabilities that region will require during a complex catastrophe. DOD doctrine states that the department should interact with non-DOD agencies to gain a mutual understanding of their response capabilities and limitations. By working through the defense coordinating officers to identify an interim set of specific capabilities for a complex catastrophe-- instead of waiting for FEMA to complete its five-year regional planning process-- NORTHCOM and PACOM can enhance their preparedness and mitigate the risk of an unexpected capability gap during the five-year period until FEMA completes its regional plans in 2018. DOD has established a command and control framework for a federal military civil support response; however, the command and control structure for federal military forces during complex catastrophes is unclear because DOD has not developed a construct prescribing the roles, responsibilities, and relationships among command elements that may be involved in responding to such incidents across multiple states. 
This gap in the civil support framework was illustrated by recent events such as National Level Exercise 2011--which examined DOD's response to a complex catastrophe--and the federal military response to Hurricane Sandy in 2012. For example, officials from NORTHCOM's Army component told us that the exercise revealed that the absence of an operational-level command element created challenges for NORTHCOM in managing the operations of federal military forces during a large-scale, multistate incident. Similarly, DOD after-action reports on Hurricane Sandy found that the command and control structure for federal military forces was not clearly defined, resulting in the degradation of situational awareness and unity of effort, and the execution of missions without proper approval. DOD doctrine states that operational plans should identify the command structure expected to exist during their implementation. By identifying roles, responsibilities, and command relationships during multistate incidents such as complex catastrophes, DOD will be better positioned to manage and allocate resources across a multistate area and ensure effective and organized response operations.

GAO recommends that the combatant commands (1) work through the defense coordinating officers to develop an interim set of specific DOD capabilities that could be provided to prepare for and respond to complex catastrophes, as FEMA completes its five-year regional planning cycle; and (2) develop, clearly define, communicate, and implement a construct for the command and control of federal military forces during multistate civil support incidents such as complex catastrophes. DOD concurred with both recommendations.
Congress authorized the President to establish NDF in 1992 under section 504 of the FREEDOM Support Act. The legislation authorized the President to use NDF to promote a variety of bilateral and multilateral nonproliferation and disarmament activities. In 1994, the President delegated authority for the program to the Secretary of State, who subsequently delegated authority for the program to the Under Secretary of State for Arms Control and International Security. The NDF office, within ISN, is responsible for day-to-day management of the program. The NDF Director leads the office, which has a staff of 16 people, including both State officials and contract employees. Congress funds NDF annually through the Nonproliferation, Anti-terrorism, Demining, and Related Programs appropriations account, within the Foreign Operations, Export Financing, and Related Programs Appropriations Acts. NDF received $10 million in initial funding for fiscal year 1994. Since fiscal year 1994, NDF has received $597 million in total appropriations. From fiscal years 2007 through 2012, NDF appropriations ranged from a high of $118 million in fiscal year 2009 to a low of $30 million in fiscal year 2012. According to State, NDF is unusual among U.S. foreign assistance programs in that it does not request funding for specific activities as part of its annual Congressional Budget Justification. The NDF Director stated that this helps ensure that NDF has the flexibility to respond to nonproliferation and disarmament opportunities as they arise, rather than tying NDF funds to particular projects or locations in advance. The FREEDOM Support Act provided NDF with a broad mission to fund bilateral and multilateral nonproliferation and disarmament activities, and annual appropriations bills have consistently granted NDF other key authorities. NDF has used its authorities under the FREEDOM Support Act to fund a diverse set of projects. Table 1 outlines NDF activities authorized by the FREEDOM Support Act and provides examples of the types of activities NDF has funded. State officials and NDF program documents have characterized NDF’s mission as focused on funding unanticipated or unusually difficult projects of high priority to the U.S. government. Figure 1 illustrates the dismantling of a Scud missile as part of an NDF-funded project in Ukraine. In addition to the authorities granted to NDF in the FREEDOM Support Act, annual appropriations bills have also consistently provided NDF with three key authorities that are designed to increase NDF’s flexibility in carrying out nonproliferation and disarmament activities around the globe, as opportunities arise. These include the authority to (1) undertake projects notwithstanding other provisions of law (notwithstanding authority); (2) implement projects anywhere in the world or through international organizations when it is in the national security interest of the United States to do so, notwithstanding provisions of the FREEDOM Support Act that limited certain NDF activities to the independent states of the former Soviet Union (FSU) (geographic authority); and (3) use funding without restriction to fiscal year (no-year budget authority). State uses a multistep process to review NDF project proposals and determine which projects to fund, as shown in figure 2. According to NDF officials, NDF does not typically develop its own project proposals.
Rather, other agencies, such as DOD and DOE, and other State offices, such as the Office of Export Control Cooperation, submit project proposals. NDF’s Review Panel, which is chaired by the Assistant Secretary of State for ISN, reviews these proposals. Two ISN Deputy Assistant Secretaries of State and the Assistant Secretaries of State from the Bureau of Political-Military Affairs and the Bureau of Arms Control, Verification, and Compliance serve as the other voting members on the panel. Officials from other U.S. agencies, including DOD, DOE, OMB, the Department of Commerce, and the Department of Homeland Security, as well as representatives from the National Security Council and U.S. intelligence community, also attend panel meetings. After reviewing the project proposals, the voting members of the NDF Review Panel make recommendations to State’s Under Secretary for Arms Control and International Security to approve, deny, or defer projects. In the Review Panel meetings, members can also propose modifications, such as increasing or decreasing the amount of funding for a project. The Under Secretary has the final authority to approve a project. NDF officials stated that in certain cases—for example, if a project is particularly urgent—the NDF Review Panel may not formally meet to review a proposal before it is submitted to the Under Secretary. In those cases, NDF instead may discuss the proposal with other Review Panel agencies in a different venue, such as at a National Security Council meeting. After the Under Secretary approves a project, but before work begins, State provides a 15-day advance notification to Congress to inform it of State’s intent to begin work on the project. As part of the notification, NDF informs Congress of its intent to obligate a specified amount of funds on the project. NDF then considers these funds designated for that project and not available for use on other projects, unless a subsequent notification is made. From fiscal years 1994 through 2012, NDF notified Congress of its intent to initiate work on 179 projects. NDF subsequently cancelled 19 of these projects after their notification and put an additional project on hold because of congressional concerns. As of the end of fiscal year 2012, NDF had 33 active projects. NDF also reported that, as of the end of fiscal year 2012, it had an additional 42 projects for which work was completed or cancelled, and the financial review of the projects was finished. In accordance with NDF close-out procedures, NDF is in the process of seeking approval from the Under Secretary for Arms Control and International Security before officially closing them. Since the beginning of fiscal year 2007, NDF has notified Congress of its intent to initiate work on a total of 24 projects, with a high of 15 in fiscal year 2010 and a low of zero in fiscal year 2011. Figure 3 shows the number of congressionally notified projects from fiscal years 2007 through 2012. NDF funding amounts for projects vary significantly. NDF has notified Congress of its intent to spend as much as $50 million and as little as $179,000 on individual projects initiated since the beginning of fiscal year 2007. The lengths of projects also vary. For example, since the beginning of fiscal year 2007, NDF has closed out projects that were completed in time frames ranging from a few months to more than 9 years. In addition, some projects are follow-up projects that build on projects initiated in earlier fiscal years.
For example, beginning in fiscal year 1998, NDF has undertaken five separate projects—the most recent of which was initiated in fiscal year 2010—to assist the government of Kazakhstan in shutting down a nuclear reactor in Aktau. NDF divides its projects into four categories: (1) destruction and conversion, (2) safeguards and verification, (3) enforcement and interdiction, and (4) education and training. Since the beginning of fiscal year 2007, State has committed the most resources to projects in the destruction and conversion category. In fiscal years 2007 through 2012, 39 percent of NDF funding for new projects went to projects in this category. Figure 4 shows a breakdown of funding for NDF among the four project categories, as well as administrative expenses, for fiscal years 2007 through 2012. NDF has several key authorities that provide it significant operational flexibility; however, it has not determined its needed carryover balances and it has taken years to close out many of its projects in the absence of guidance for closing them. Annual appropriations bills have consistently provided NDF with three key authorities that it has used to carry out its activities. First, NDF has used its notwithstanding authority to fund projects in countries where other U.S. programs are barred from operating by U.S. sanctions or other legal restrictions. Second, NDF has used its geographic authority to fund projects in a range of countries around the globe. Third, NDF has used its no-year budget authority to carry over balances not designated for specific projects from one year to the next. However, NDF has not determined appropriate levels for these balances, which have increased significantly in the past several years. Additionally, NDF has taken many years to close some projects where work was never started, or was suspended, and has not established guidance for determining when inactive projects should be closed out and unexpended no-year funds made available for other projects. Annual appropriations acts have consistently granted NDF notwithstanding authority, which allows NDF to undertake projects “notwithstanding any other provision of law.” As a result, NDF has the ability to fund projects in countries where other U.S. programs are generally barred from operating by U.S. legal restrictions. For example, when North Korea agreed to the disablement of its Yongbyon nuclear reactor in 2007 after progress in diplomatic talks, NDF was able to fund the project because of its notwithstanding authority, while other U.S. agencies, such as DOD and DOE, could not because various U.S. legal restrictions limited the assistance they could provide the country. According to State officials, NDF’s broad notwithstanding authority is uncommon among U.S. government programs. For example, the 2010 National Defense Authorization Act provided DOD’s Cooperative Threat Reduction (CTR) program notwithstanding authority for the first time in the program’s history and granted only limited use of the authority. Under that act, DOD cannot use its notwithstanding authority for more than 10 percent of CTR’s appropriation for a given fiscal year and must meet other requirements before exercising the authority, such as obtaining concurrence from the Secretaries of State and Energy. When seeking to use its notwithstanding authority, NDF requests approval from the Under Secretary for Arms Control and International Security.
According to the NDF Director, when NDF was established, State decided that NDF’s notwithstanding authority should, as a matter of policy, be approved at the Under Secretary level, rather than at a lower level. State also informs Congress of NDF’s intent to use the authority as part of the 15-day congressional notification process. Although U.S. law does not require that State inform Congress of NDF’s use of its notwithstanding authority, the conference report accompanying the fiscal year 2012 Consolidated Appropriations Act directed the Secretary of State to notify the Committees on Appropriations in writing, within 5 days of exercising NDF’s notwithstanding authority. The conference report also directed that the notification include a justification for the use of the authority. State noted that, legally, notwithstanding authority applies to NDF funds by the terms of the legislation and does not require a formal determination to rely upon this authority; nonetheless, NDF submits proposed uses of the authority to the Under Secretary for approval given the sensitive nature of the projects. In three cases, NDF requested the use of its notwithstanding authority for classified projects whose details cannot be publicly reported. In those cases where NDF requested the use of its notwithstanding authority to overcome specific laws or regulations, it identified several different legal restrictions it needed to overcome. For example: NDF requested the use of its notwithstanding authority to initiate work on a project in Libya in fiscal year 2012. Among other things, the authority was required to overcome restrictions on U.S. security assistance to countries that engage in a consistent pattern of gross violations of human rights. In the case of two projects at the Yongbyon site in North Korea, NDF requested the use of its notwithstanding authority to, among other things, overcome “Glenn Amendment” restrictions within the Arms Export Control Act. The Glenn Amendment triggers U.S. sanctions if the President determines that a non-nuclear-weapon state (as defined by the Nuclear Nonproliferation Treaty) has detonated a nuclear explosive device. NDF also requested the use of its notwithstanding authority to overcome a restriction that the Foreign Assistance Act would have imposed on a project in Iraq. The Act restricts U.S. assistance to countries that have severed diplomatic relations with the United States and which have not entered into a new bilateral assistance agreement once diplomatic relations have resumed. At the time of the project, there were concerns regarding the status of the United States’ bilateral agreement with Iraq. In addition to using its notwithstanding authority to bypass restrictions on U.S. assistance to particular countries, NDF has in some cases also used its notwithstanding authority to overcome laws and regulations pertaining to contracting and acquisitions. For example, NDF used its notwithstanding authority on some contracts to overcome Federal Acquisition Regulation (FAR) competition requirements, according to a 2004 State Inspector General report. Additionally, a 2009 National Academies of Science report examining options for strengthening and expanding DOD’s CTR program noted that, because of its notwithstanding authority, NDF is not subject to contracting requirements, including the FAR, which CTR must follow. The report noted that this ability may allow NDF to undertake certain projects more quickly and at less expense than CTR.
However, according to State officials, NDF has not typically used the program’s notwithstanding authority to bypass federal contracting laws and regulations. State officials said that, while NDF has almost always relied on sole-source bids, rather than a competitive bidding process, it primarily selected contractors to implement projects using existing flexibilities in the law and regulations available to all agencies. For example, State officials stated that NDF has relied on provisions in the FAR that permit sole-source contracts in situations where there is an urgent and compelling need. In addition to competition requirements, some NDF officials stated that NDF may use its notwithstanding authority to bypass other types of acquisition requirements, such as “Buy America” provisions. For example, one NDF official stated that to expedite work on NDF’s project at the Yongbyon reactor in North Korea, NDF purchased some of the equipment used from China. Since 1994, annual appropriations acts have provided NDF with broad geographic authority to fund projects worldwide as nonproliferation and disarmament opportunities arise. NDF’s geographic authority allows it to fund projects outside the states of the FSU if the Under Secretary for Arms Control and International Security makes a determination that it is in the national security interest of the United States to do so. NDF’s authority to fund projects globally since the program’s start in 1994 is in contrast to the authorities of some other U.S. nonproliferation programs. For example, DOD’s CTR program was not authorized to fund any projects outside the FSU until the passage of the National Defense Authorization Act for Fiscal Year 2004 and continued to face various restrictions on conducting work outside the FSU until 2007. In addition, as we reported in December 2011, many of DOE’s Defense Nuclear Nonproliferation programs, which originated in the early 1990s following the dissolution of the Soviet Union, have focused primarily on improving nuclear security in Russia. The NDF Director stated that, while NDF has been used to supplement projects in the FSU or to fill emergency gaps, its primary emphasis has always been on other parts of the world. Since 1994, NDF has used its geographic authority to fund projects in Central and South America, North and Sub-Saharan Africa, Eastern and Western Europe, the Balkans, the Middle East, and Asia. As shown in figure 5, NDF has funded projects in several different countries since the beginning of fiscal year 2007, including Afghanistan, Egypt, Kazakhstan, North Korea, and Ukraine, among others. It has also funded a limited number of projects in the United States, including the construction of training facilities at DOE’s Hazardous Materials Management and Emergency Response site in Washington State for the purpose of training foreign nationals. Over the life of the program, Congress has consistently granted NDF no-year budget authority in annual appropriations bills. This authority makes NDF appropriations available for obligation until expended, rather than requiring them to be obligated within a particular time period, such as a fiscal year. This authority has allowed NDF to carry over balances across multiple fiscal years that it has not designated for specific projects.
NDF considers money to be designated for a specific project and no longer available for use on other projects at the point when it notifies Congress of its intent to fund the project, unless NDF renotifies the funds for the purpose of another project. NDF’s carryover balances have increased over time and are at historically high levels. Figure 6 provides an overview of NDF’s various categories of funding and how NDF accumulates carryover balances. In addition, NDF’s no-year budget authority allows it to close projects and apply the unexpended funds to future projects; however, NDF has sometimes delayed closing out some projects for many years, including projects where no work ever occurred or was suspended. Until projects are closed, any unexpended project funds are not reported as part of NDF’s carryover balances. As a result, NDF’s carryover balance is likely understated. NDF’s carryover balances have grown significantly in the past few years to historically high levels. NDF’s carryover balance peaked at the end of fiscal year 2009 at $122 million in unnotified funds, which were carried over into fiscal year 2010, as shown in figure 7. Unnotified funds include funds never designated for a project, as well as any unobligated and unexpended project funds that once again become available as unnotified funds when a project is closed. Before the end of fiscal year 2009, NDF’s balance carried over into the next fiscal year had been $10 million or higher three times since the program began in fiscal year 1994 and had never been higher than $22 million. NDF’s carryover balance was $86 million at the end of fiscal year 2012. NDF has not established a formal means of determining the amount of money it needs to carry over from year to year to respond to unanticipated nonproliferation and disarmament opportunities. According to the Assistant Secretary of State for International Security and Nonproliferation, State management is aware of the growth in NDF’s carryover balances and is committed to spending them down as opportunities consistent with the mission of NDF arise. In the past several years, increases in NDF’s annual appropriations from levels in earlier fiscal years, as well as the initiation of work on a smaller number of projects, have contributed to NDF’s increased carryover balances. As shown in figure 7 above, NDF’s appropriation was never more than $30 million before fiscal year 2005 and was higher than $20 million in only one fiscal year. However, from fiscal years 2005 through 2012, NDF’s appropriation has been $30 million or more every year. NDF’s appropriation reached a high of $118 million in fiscal year 2009, which included $77 million in a supplemental appropriation. NDF also initiated a limited number of projects in the past 2 years. For example, it initiated only one new project in fiscal year 2012 and no projects in fiscal year 2011. In total, NDF notified Congress of 24 projects from fiscal years 2007 through 2012, compared with 63 projects from fiscal years 2001 through 2006. NDF officials stated that the decline in the number of projects initiated was caused in part by the creation of other U.S. government programs that are now able to fund various activities from their own budgets that might have previously required NDF funding. For example, NDF officials noted that NDF previously funded certain types of export control assistance activities that State’s Export Control and Related Border Security Assistance program is now able to fund and implement.
However, NDF officials noted that while NDF has initiated a smaller number of projects in fiscal years 2007 through 2012, many of the projects it has initiated have involved significantly larger amounts of notified funds than in fiscal years 2001 through 2006. For example, NDF had only one project with over $10 million in notified funds in fiscal years 2001 through 2006, while it had 11 projects with over $10 million in notified funds in fiscal years 2007 through 2012. NDF’s budget request for fiscal year 2013 was $30 million. Other U.S. government programs also receive no-year money and have the ability to carry over balances from year to year. We have previously reported on efforts by some of these programs to determine appropriate carryover balance amounts. For example, in contrast with NDF, DOE’s National Nuclear Security Administration has established thresholds for the carryover balances of its Defense Nuclear Nonproliferation programs. These threshold amounts are based upon specified percentages of the total funds available to each of the Defense Nuclear Nonproliferation programs in a given fiscal year. As we reported in December 2011, if programs’ carryover balances exceed these thresholds, they will trigger additional scrutiny by the National Nuclear Security Administration as to whether the carryover balances are appropriate to meet program requirements. NDF also maintains a significant amount of funds that it notified to Congress in the past for projects, but has not yet obligated. Of the 32 active NDF projects initiated in fiscal year 2010 or earlier, 25 percent of the total notified funds have not yet been obligated. This represents more than $66 million in notified but unobligated funds. As some NDF projects take many years to complete, NDF does not necessarily obligate all funds early in their implementation. However, of the 32 active NDF projects initiated in fiscal year 2010 or earlier, we identified 5 projects for which less than 25 percent of the notified funds had been obligated. Because NDF’s funding is no-year money, NDF can close projects for which it has never started work, or has suspended work, and apply the unexpended funds to future projects. However, NDF has not established guidance for determining when it should close out inactive projects. As a result, NDF funds may be tied up for years in projects where no work is occurring, precluding the funds’ use for other projects. For example, NDF maintains over $24 million in unobligated funds from a $25 million project in North Korea that it notified to Congress in fiscal year 2008. NDF has not obligated any of the funds for this project since North Korea expelled International Atomic Energy Agency inspectors and U.S. monitors from the country in April 2009 and work on the project was abruptly halted. Additionally, NDF has not yet obligated any of the $750,000 notified in fiscal year 2005 for a project to support Proliferation Security Initiative interdiction activities. According to NDF officials, no funds have been obligated to date because they have not identified any Proliferation Security Initiative activities that warranted the use of the funds. NDF has not developed guidance that establishes time frames for closing cancelled or completed projects to ensure that they are closed out in a timely manner. NDF data show that in the past, NDF has taken years to cancel and close some projects where little or no work ended up occurring. 
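The understatement described above follows directly from NDF’s accounting conventions: funds leave the unnotified carryover balance when a project is notified to Congress and return to it only when the project is formally closed. The following minimal sketch illustrates that arithmetic; the project names and dollar amounts are hypothetical, and this is not NDF’s financial system.

from dataclasses import dataclass

@dataclass
class Project:
    name: str        # hypothetical project label
    notified: float  # funds designated through congressional notification, $ millions
    expended: float  # funds actually spent, $ millions
    closed: bool     # has close-out been approved by the Under Secretary?

def unnotified_balance(total_appropriations, projects):
    # Unnotified (carryover) balance = cumulative appropriations
    #   minus funds still designated to projects that are not yet closed
    #   minus funds already expended on projects that have been closed.
    designated_open = sum(p.notified for p in projects if not p.closed)
    expended_closed = sum(p.expended for p in projects if p.closed)
    return total_appropriations - designated_open - expended_closed

projects = [
    Project("Reactor shutdown (hypothetical)", notified=25.0, expended=0.8, closed=False),
    Project("Missile destruction (hypothetical)", notified=10.0, expended=9.5, closed=True),
]

appropriations = 100.0  # hypothetical cumulative no-year appropriations, $ millions
print(unnotified_balance(appropriations, projects))  # 65.5: the inactive project still holds its funds

projects[0].closed = True  # closing the inactive project releases its 24.2 in unexpended funds
print(unnotified_balance(appropriations, projects))  # 89.7

In this simplified model, the unexpended funds on the inactive project do not appear in the carryover balance until close-out is approved, which is the understatement GAO describes; the close-out data that follow show how long such funds can remain tied up.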
Of the 61 projects NDF has closed out since the beginning of fiscal year 2007, 16 were cancelled projects for which less than 20 percent of the notified funds were ever obligated and expended. For six of these cancelled projects, NDF took more than 10 years to close them out from the date they were initially notified to Congress, and for an additional three projects NDF took more than 5 years to close them out from the date they were notified to Congress. In total, these nine projects had over $8.3 million in notified funds that were never expended. In addition to cancelled projects, NDF has taken years to close some completed projects. For example, of the 61 projects NDF has closed out since the beginning of fiscal year 2007, we identified 13 that NDF closed out more than 10 years after work on the project was completed and an additional 18 that NDF closed out more than 5 years after work on the project was completed. These 31 projects had over $3.5 million in notified but unexpended funds. The unexpended funds for these cancelled and completed projects were eventually made available for use on future projects. However, it can take years from the time projects are cancelled or completed to the time they are closed out, which can result in an understatement of the amount of money NDF has available. NDF officials noted that prior to 2005, NDF took years to close completed and cancelled projects because it lacked the needed staff. However, NDF officials stated that since then, the office has hired additional staff and developed procedures to help ensure that projects are closed out more quickly. Additionally, NDF officials noted that the office has eliminated its backlog of projects needing to be closed. However, NDF still has 42 projects for which it has completed all financial close-out activities, but is in the process of seeking approval from the Under Secretary for Arms Control and International Security before closing them and returning the unexpended funds to the NDF account. These 42 projects have a total of over $19 million in unexpended funds that will be added to NDF’s unnotified balances once they are closed. State has not conducted a program evaluation of NDF and lacks information that would be useful in doing so. A program evaluation is a systematic study that assesses how well a program is working and can identify lessons learned for future projects. State has developed a new policy requiring bureaus to evaluate programs, projects, and activities. To comply with this policy, State issued guidance requiring bureaus to submit an evaluation plan for fiscal years 2012 through 2014, identifying the programs and projects they plan to evaluate. However, ISN, which oversees NDF, did not include NDF in its fiscal years 2012 through 2014 evaluation plan. Moreover, State currently lacks information, such as the results of some projects and lessons learned, that could be used to conduct a program evaluation of NDF and that would help inform the management of the program. Since NDF became operational in 1994, State has not conducted a program evaluation of NDF, according to ISN and NDF officials. Although NDF reported to Congress in its fiscal year 2013 budget submission that all of its projects are evaluated in-house, these documents are project close-out monitoring reports and not evaluations. As State and other organizations have noted, monitoring and evaluations are conceptually and operationally different.
GAO defines evaluations as individual, systematic studies that are conducted periodically or on an as-required basis to assess how well a program is working, while project close-out reports consist of formal documentation that indicates completion of the project or phase of the project. ISN and NDF officials explained that NDF and its projects have never been subject to a program evaluation because of the unique nature of each project. For example, according to NDF officials, to get one country to agree to dismantle its Scud missiles, NDF agreed to pay for that country’s armed forces to use a labor-intensive method to dismantle the missiles. However, NDF officials also noted that there are common features to many projects that can serve as the basis for lessons learned. Our analysis of NDF’s project database shows that since NDF’s first project in 1994, NDF has implemented a number of similar projects that could have been evaluated to determine the lessons learned for use in present and future projects. For example, NDF has implemented 11 destruction and conversion projects involving missiles and rockets, 5 of which involved the destruction of Scud missiles. The first of these missile destruction and conversion projects took place in 1994 and the latest began in 2010. In addition, as noted earlier in this report, NDF has implemented at least five projects involving the shutdown of a nuclear reactor in Kazakhstan. The Government Performance and Results Modernization Act of 2010 strengthened the mandate to evaluate programs, requiring agencies to include a discussion of evaluations in their strategic plans and performance reports. In part to comply with the requirements of this Act, State established a policy in February 2012 to evaluate programs and projects. In addition, as we reported in May 2012, according to officials from State’s Bureau of International Narcotics and Law Enforcement, the policy was established to comply with a June 2009 directive from the Secretary of State for systematic evaluation and to promote a culture change among program offices. According to State officials, the February 2012 policy superseded an evaluation policy dating from September 2010 that did not fully comply with a recommendation later detailed in State’s December 2010 Quadrennial Diplomacy and Development Review that State adopt an evaluation framework consistent with that of the U.S. Agency for International Development. State’s 2012 evaluation policy outlines requirements and provides a framework and justification for evaluations of all State programs, including both diplomatic and development programs, projects, and activities. For example, the policy notes that a robust, coordinated, and targeted evaluation policy is essential to State’s ability to measure and monitor program performance, document program impact, and identify best practices and lessons learned. It also states that such a policy can help assess return on investment and provide input for policy, planning, and budget decisions. State’s evaluation policy assigns a key role to the bureaus and requires them to evaluate two to four programs, projects, or activities over a 24-month period starting in fiscal year 2012 and all large programs, projects, and activities at least once in their lifetime or every 5 years, whichever is less.
It also requires the bureaus to appoint a coordinator to ensure that the bureaus meet the new policy’s requirements; requires bureaus to develop and submit a bureau evaluation plan as an annex to their multiyear strategic plans, but gives bureaus flexibility in determining the specific programs to evaluate, as well as the timing and manner of evaluations they will perform; and notes that bureaus should integrate evaluation findings into decision making about strategies, program priorities, and project design, as well as into the planning and budget formulation process. State’s evaluation policy also draws a clear distinction between monitoring and evaluation. State defines monitoring as a continual process designed to assess the progress of a program, project, or activity. By comparison, evaluations go beyond monitoring to identify the underlying factors and forces that affect the implementation process, as well as the efficiency, sustainability, and effectiveness of the intervention and its outcomes. As our previous work, State, and other organizations have noted, evaluations also require a measure of independence. According to State, this can be promoted in several ways, including entrusting the evaluation to an outside research and evaluation organization or fostering a professional culture that emphasizes the need for rigorous and independent evaluations. To complement the new evaluation policy and provide further direction, State issued new guidance in March 2012 that describes several types of evaluations that bureaus can conduct and outlines data collection methods. The March 2012 guidance also defines the information that must be included in each bureau evaluation plan. For example, bureaus must include in the first plan a list of evaluations to be initiated or completed between fiscal years 2012 and 2014. Bureaus are expected to update these plans annually, according to State officials. In addition to the guidance, State has developed or is in the process of developing other resources and tools to complement and support the new evaluation policy. These include an internal website containing resources to assist bureaus with their evaluation responsibilities and the establishment of a community of practice where officials can share their expertise and discuss evaluation issues. ISN submitted its first bureau evaluation plan in April 2012, but the plan did not include any NDF projects. According to ISN officials, the bureau had a short amount of time in which to submit its bureau evaluation plan and for that reason the plan focused on programs that already had projects scheduled for evaluation. After the State evaluation guidance was finalized in late March 2012, the bureaus only had 1 month to submit their bureau evaluation plans for fiscal years 2012 through 2014, according to ISN officials. In canvassing ISN’s five program offices, ISN determined that some offices were already planning evaluations for certain projects within their programs, according to ISN officials and documents. For example, according to the ISN bureau evaluation plan, State’s Global Threat Reduction (GTR) Program plans to contract for four evaluations during the fiscal years 2012 through 2015 period. GTR has in the past contracted for evaluations of its projects in Iraq, Ukraine, and Russia. State currently lacks information that would be useful in conducting a program evaluation of NDF and in improving the management of its program. 
NDF uses project close-out reports to document its final monitoring of a project. State’s March 2012 evaluation guidance notes the importance of preparing good monitoring reports since these both complement evaluations and can provide valuable information for use in preparing evaluations. They can also be a key source of information that can be used to improve the management of a program, such as the results of a project and lessons learned. However, NDF’s project close-out reports did not document information that could be useful to NDF and the NDF Review Panel. The reports also varied in content and format. Project management standards note the importance of documenting results in project close-out documents, but not all of the project close-out reports that we examined discussed the results of the project. Of the 23 project close-out reports that we examined, 2 did not address project results at all. In addition, for the other 21, we found some instances where the discussions of results were fairly minimal and other instances where the reports did not state whether all intended outcomes or goals had been achieved. According to the Project Management Body of Knowledge Guide, a recognized standard for project managers, project close-out documents or reports should include formal documentation that indicates completion of a project, including results. Moreover, according to NDF officials, NDF and the NDF Review Panel consider potential results in determining whether to fund future projects. Project management standards note the importance of documenting project results and entering this information into a database of lessons learned. However, 13 of the 23 project close-out reports that we examined did not discuss lessons learned. Moreover, NDF officials stated that they did not have a database of lessons learned. To document and share lessons learned, NDF officials said that they primarily used informal mechanisms such as e-mails or face-to-face meetings. The Project Management Body of Knowledge Guide notes the importance of documenting lessons learned and entering this information into a lessons-learned database for use in future projects. Some agencies that implement projects, or that have an interest in communicating lessons learned, maintain formal databases that they use to enter lessons learned and communicate this information to project implementers. For example, the U.S. Agency for International Development and the U.S. Army Center for Lessons Learned have both established lessons-learned databases. State Bureau of Budgeting and Planning officials told us that, as part of its effort to implement the new evaluation policy, State is considering the establishment of a lessons-learned database that could include information from NDF. In addition, the close-out reports often did not address other criteria that the NDF Review Panel considers in assessing future projects for NDF funding. For example, 11 of the 23 project close-out reports that we examined did not discuss cost, and 17 of the 23 did not discuss the timeliness of the project. In one instance, the final cost of the project was approximately 66 percent under the amount notified to Congress, but the close-out report did not provide a reason why this had occurred. Of the 23 reports we examined, 19 did not discuss the appropriateness of using NDF funding for the project and none discussed the project’s return on investment.
According to the guidelines promulgated by State when NDF was established in 1994, the criteria used to assess a project’s suitability for NDF funding include the cost and the appropriateness of using NDF as a source of funding. In addition, according to NDF officials, the NDF Review Panel also considers the project’s return on investment and timeliness as part of its criteria. Return on investment is a measure of the benefits gained by implementing a project. Moreover, according to NDF officials, the NDF Review Panel has sometimes modified its initial assessment of a project’s cost based on past experience. NDF officials stated that the use of a standard format in project close-out reports might not always be appropriate or useful given the wide variety of projects that NDF funds and undertakes. However, it may be difficult to obtain information useful to future evaluations from reports that vary so significantly in content and format. Recognizing the need to ensure that information is consistently documented, NDF in December 2010 established the expectation that NDF project managers produce a project close-out report. NDF also produced a project management guide designed to encourage project managers to standardize their procedures. The NDF project management guide, which according to NDF officials is based on the Project Management Body of Knowledge Guide, among other things lists the preparation of a project close-out report as one of the steps for closing out a project. However, NDF officials stated in July 2012 that while project managers are expected to write project close-out reports, they are not required to do so. In addition, NDF officials stated that NDF encourages but does not require the use of the project management guide, and the guide does not detail the information that project managers need to include in their reports or specify the report format. Partly in response to our work, NDF officials stated that they plan to develop standard operating procedures to address the issues we identified in the project close-out reports, which will also include a requirement for project managers to identify lessons learned. However, as of November 2012, they had not made any changes to their procedures. Over its lifetime, NDF has responded to pressing nonproliferation and disarmament needs, helping to address significant threats to international security. To support NDF in accomplishing its mission, U.S. law has provided NDF with an unusual degree of flexibility in how it manages its resources and conducts its work. While the critical nature of NDF’s mission provides a strong rationale for such flexibility, it also increases the need for State to effectively manage its program resources to ensure that NDF is achieving its intended results. However, State has not taken the necessary steps to do so. For example, unlike some programs, NDF lacks a formal process for determining how much carryover balance it needs to maintain in reserve to meet unanticipated program requirements. Without such a process, NDF cannot know to what extent its carryover balances, which have increased in the past few years to historically high levels, may be exceeding its unanticipated funding needs. In addition, NDF has taken years to close some projects, delaying the availability of unexpended funds for other projects and likely understating NDF’s carryover balances. A methodical process for determining NDF’s needed carryover balances and for closing projects could help ensure that NDF’s budget requests accurately reflect program needs.
Additionally, NDF lacks a process to identify and incorporate lessons learned into future projects. In NDF’s 18-year history, State has never performed a program evaluation of the fund to determine lessons learned for better designing projects that contribute to U.S. nonproliferation goals. State has implemented a new evaluation policy that could encourage the bureaus to more rigorously rationalize and prioritize their resources over time and identify and incorporate lessons learned. Nonetheless, State is not including NDF among the programs to be evaluated during fiscal years 2012 through 2014. Finally, NDF’s project close-out reports could provide useful information to inform future program evaluations and to identify lessons learned that could be systematically incorporated into future projects. To more effectively manage NDF’s resources, increase program accountability, and ensure that NDF has the information necessary to improve program performance, we recommend that the Secretary of State take the following four actions: direct NDF to develop a methodology for determining the amount of reserves that it should carry over annually to meet program requirements to address unanticipated nonproliferation and disarmament opportunities; direct NDF to develop guidance for determining when inactive NDF projects should be closed and the remaining, unexpended funds made available for use on other projects; direct ISN and NDF to periodically and systematically conduct and document program evaluations of NDF; and direct NDF to revise its project management guide to establish requirements for project managers’ close-out reports to include information useful for improving the management of NDF projects. We provided a draft of our report to DOD, DOE, OMB, and State for their review and comment. DOD and OMB did not provide comments. State provided written comments, which we have reprinted in appendix II. State concurred with all four of our recommendations and identified several actions it intends to take in response to the recommendations. For example, State said that it will direct NDF to develop a methodology that the NDF Review Panel can then use to make an annual recommendation on the appropriate level of carryover balances for the next fiscal year to the Under Secretary for Arms Control and International Security. State also said that NDF has begun implementing the recommendation to revise its project management guidance to establish requirements for close-out reports by creating a standard operating procedure for these reports. State and DOE provided technical comments, which we incorporated in the report, as appropriate. We are sending copies of this report to interested congressional committees, the secretaries and agency heads of the departments addressed in this report, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report examines (1) the Department of State’s (State) use of Nonproliferation and Disarmament Fund (NDF) authorities in developing and implementing NDF projects and (2) the extent to which State has conducted a program evaluation of NDF and used this information to improve program performance.
To assess how State has used NDF’s authorities in developing and implementing NDF projects, we obtained program-wide and project-level data from NDF’s Financial and Information Management System (FIMS) for fiscal years 1994 through 2012. To assess the reliability of data in FIMS, we reviewed NDF documentation on the system, reviewed previous audits that assessed the reliability of FIMS data, compared FIMS data to data from other sources to confirm FIMS data’s accuracy, and interviewed cognizant State officials. To gain additional information on the reliability of data in FIMS, we met with a private contractor conducting a review for NDF under the supervision of State’s Office of the Inspector General. The scope of the contractor’s work included a review of the reliability of FIMS data. On the basis of the information we obtained, we determined that the FIMS data were sufficiently reliable for our purposes. We analyzed NDF program-wide data to determine program appropriations, commitments, obligations, and carryover balances for fiscal years 1994 through 2012. We analyzed NDF project data to determine project funding amounts, locations, objectives, and time frames for fiscal years 1994 through 2012. Additionally, we reviewed NDF project documentation including project proposals, approval memos, and congressional notifications, for all NDF projects initiated since the beginning of fiscal year 2007 to assess the types of projects NDF has funded and how it used its authorities in developing and implementing these projects. To gain additional information on NDF projects, we also reviewed State press releases, speeches by State officials, and fact sheets describing NDF activities. To identify NDF’s key legal authorities, we reviewed relevant laws and regulations, including the FREEDOM Support Act and NDF appropriations legislation for fiscal years 1994 through 2012. Additionally, we examined congressional committee and conference reports from 1999 through 2012 to identify relevant congressional guidance regarding NDF. We also reviewed key NDF documents discussing the program’s authorities, including the 1994 memorandum pursuant to the FREEDOM Support Act establishing the program and the accompanying NDF Guidelines. To gather additional information on NDF’s authorities and how it develops and implements projects, we conducted a series of interviews with NDF officials and also met with officials from other agencies that proposed or implemented NDF projects, including the Departments of Defense and Energy. We also interviewed officials from the Office of Management and Budget to gain additional information on NDF’s budget planning process. Finally, we reviewed previous GAO reports, as well as reports by the State Inspector General, the Congressional Research Service, and the National Academies of Science, to identify relevant findings regarding NDF and related U.S. nonproliferation and disarmament programs. To assess the extent to which State has evaluated NDF and used this information to improve program performance, we interviewed State officials with the Bureaus of International Security and Nonproliferation (ISN) and Budgeting and Planning. We also obtained copies of State’s February 2012 evaluation policy and March 2012 evaluation guidance, as well as a copy of ISN’s April 2012 bureau evaluation plan.
NDF officials described their project close-out reports as evaluations, but based on our discussion with State ISN and Budgeting and Planning officials, our review of GAO reports discussing evaluations, and State’s February 2012 evaluation policy, we determined that NDF’s project close-out reports fit more closely the standard of a monitoring report. GAO defines evaluations as individual, systematic studies that are conducted periodically or on an as-required basis to assess how well a program is working. State’s evaluation policy notes that in addition to assessing the progress of a program, project, or activity, evaluations go beyond monitoring to identify the underlying factors and forces that affect the implementation process, as well as the efficiency, sustainability, and effectiveness of the program or project and its outcomes. As such, State’s policy draws a clear distinction between evaluation and monitoring. As previous GAO reports, State, and other organizations have noted, evaluations require a measure of independence, which can be promoted in several ways, including entrusting the evaluation to an outside research and evaluation organization or fostering a professional culture that emphasizes the need for rigorous and independent evaluations. By comparison, State defines monitoring as a continual process designed to assess the progress of a program, project, or activity. The Project Management Body of Knowledge Guide notes that project close-out documentation consists of formal documentation indicating the completion of a project or phase of a project. For all these reasons, on the basis of our analysis of NDF’s project close-out reports, we made the determination that NDF’s project close-out reports better fit the standard of a monitoring report than an evaluation. While project close-out reports serve a different purpose from evaluations, based on our review of the Project Management Body of Knowledge Guide and NDF’s Project Management Guide, we determined that we could assess the project close-out reports to determine their usefulness in enabling NDF to improve its management of the program. For this purpose, we obtained a judgmental sample of 23 project close-out reports—14 of which we selected and 9 of which State selected. In selecting our sample, we chose only to consider projects that NDF had closed out since the beginning of fiscal year 2007—of which there were 61—in order to ensure that all close-out documentation was completed for the projects. Our selection criteria for our sample included project cost, location, and type. For example, we selected a variety of projects from all four categories of projects that NDF funds—destruction and conversion, safeguards and verification, enforcement and interdiction, and education and training. State selected its projects using similar criteria; however, State did not limit itself to projects that were closed out. In some cases, State selected projects where work was completed, but the project was not yet officially closed out. In reviewing the documentation for the projects State selected, we determined that these projects were broadly similar to the ones that we selected and the inclusion of these projects in our analysis did not alter our overall findings or compromise the independence of our work. 
To conduct our analysis of the close-out reports, we developed a list of key terms, such as “results,” “completion,” and “lessons learned.” Our inclusion of these terms was based on our analysis of project management standards, which note the importance of the project close-out process in the project management cycle and the importance of obtaining information about the results of the project and lessons learned. We also included other terms such as “cost,” “timeliness,” “on time,” “return on investment,” and “appropriateness of using NDF funding.” We included these terms because NDF officials told us that NDF and the NDF Review Panel include these criteria in determining a project’s suitability for NDF funding. Because NDF does not have any requirement to use a standard terminology in its reports, we used a dictionary to obtain synonyms of these terms as well. We examined each of the project close-out reports to determine the presence of these key terms. We also examined each of the project close-out reports to determine the author, content, and format. We did this on the basis of discussions with NDF officials, who told us that they had established an expectation that NDF project managers complete a project close-out report and had developed a project manager’s guide that contained a checklist. While NDF does not have a requirement for project reports to be written in a standard format, we determined that the close-out reports that we had examined varied widely in their content and format and concluded that such variety could make it more difficult for evaluators to extract key information from these reports. After completing our initial review, the lead analyst submitted the results of his work and the methodology used to two additional levels of review. These reviewers were asked to validate the methodology and results. The sample of 23 project close-out reports cannot be generalized to the entire population of NDF project reports for the period in our review. We also reviewed NDF’s Project Management Guide to determine the extent to which NDF has established specific requirements or guidance regarding how project close-out reporting should be conducted. To obtain the list of 11 similar destruction and conversion projects involving missiles and rockets, we conducted a word search of NDF’s projects using the key terms “missiles” and “rockets.” We conducted this performance audit from March 2012 through November 2012 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, the following staff made key contributions to this report: Jeff Phillips, Assistant Director; Lynn Cothern; Martin De Alteriis; Mark Dowling; José M. Peña, III; and Ryan Vaughan. Etana Finkler and Jeremy Sebest provided graphics support and Debbie Chung provided editorial assistance. Julie Hirshen and Julia Jebo Grant also provided additional technical assistance.
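The term-presence review described above can be expressed as a small script. The following sketch is illustrative only: the directory name, file format, and synonym lists are hypothetical, and the actual review was performed by analysts reading each report rather than by software.

import re
from pathlib import Path

# Key elements and simple synonym lists (illustrative only; the actual review
# expanded each term with a dictionary and was performed by analysts, not code).
KEY_TERMS = {
    "results": ["result", "outcome", "achievement"],
    "lessons learned": ["lesson learned", "lessons learned", "best practice"],
    "cost": ["cost", "expenditure", "budget"],
    "timeliness": ["timeliness", "timely", "on time", "schedule"],
}

def elements_present(text):
    # For each key element, record whether any synonym appears in the report text.
    lowered = text.lower()
    return {
        element: any(re.search(r"\b" + re.escape(term) + r"\b", lowered) for term in synonyms)
        for element, synonyms in KEY_TERMS.items()
    }

def summarize(report_dir):
    # Count how many reports in a directory address each key element.
    counts = {element: 0 for element in KEY_TERMS}
    for path in Path(report_dir).glob("*.txt"):  # hypothetical plain-text exports of the reports
        found = elements_present(path.read_text(errors="ignore"))
        for element, present in found.items():
            counts[element] += int(present)
    return counts

print(summarize("closeout_reports"))  # e.g., {'results': 21, 'lessons learned': 10, 'cost': 12, 'timeliness': 6}

The counts in the final comment simply echo the findings reported earlier (21 of the 23 reports addressed results, 10 discussed lessons learned, 12 discussed cost, and 6 discussed timeliness); actual output would depend entirely on the documents scanned.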
The proliferation of weapons of mass destruction and advanced conventional weapons poses significant threats to U.S. and international security. State’s NDF began operating in 1994 to help combat such threats by funding a variety of nonproliferation and disarmament projects. NDF’s legal authorities provide it significant flexibility to perform its work, and it has initiated high-profile projects in locations that are significant to U.S. interests. Nonetheless, questions have been raised about how NDF has used its authorities, including its authority to carry over balances into future fiscal years, and the extent to which NDF is effectively implementing its activities. This report examines (1) State’s use of NDF authorities in developing and implementing NDF projects and (2) the extent to which State has conducted a program evaluation of NDF and used this information to improve program performance. To conduct this review, GAO analyzed NDF program and project data and documentation, analyzed a sample of NDF project close-out documents, and interviewed NDF and other U.S. officials. The Department of State's (State) Nonproliferation and Disarmament Fund (NDF) has several key authorities that provide it significant operational flexibility; however, it has not determined its needed carryover balances and it has taken years to close out many of its projects in the absence of guidance for closing them. Annual appropriations bills have consistently provided NDF with three key authorities that it has used to carry out its activities. First, NDF has the authority to undertake projects notwithstanding any other provision of law. NDF has used this authority to fund projects in countries, such as North Korea, where U.S. assistance is prohibited by U.S. sanctions and other legal restrictions. Second, NDF has the authority to undertake projects globally. NDF has used this authority to fund projects in numerous regions around the world, in contrast with other U.S. nonproliferation programs, which have historically focused on countries in the former Soviet Union. Third, NDF's appropriations do not expire within a particular time period, enabling NDF to carry over balances from year to year not designated for specific projects. However, NDF has not determined appropriate levels for these balances, which have increased significantly in the past few years. Additionally, NDF has sometimes taken many years to close projects, including those where work was never started or was suspended, and has not established criteria to determine when inactive projects should be closed and unexpended resources made available for other projects. As a result, NDF funds may be tied up for years in inactive projects, precluding the funds' use for other projects. State has never conducted a program evaluation of NDF. In February 2012, State developed a policy requiring bureaus to evaluate programs, projects, and activities, and outlined the requirements for these evaluations. As part of this policy, State required bureaus to submit an evaluation plan for fiscal years 2012 through 2014 that identified the programs and projects they plan to evaluate. However, the Bureau of International Security and Nonproliferation (ISN), which oversees NDF, did not include NDF in its fiscal years 2012 through 2014 evaluation plan. State currently lacks information that could be used to conduct a program evaluation and to improve NDF's management of the program.
Project close-out reports are critical to the process of closing out a project and identifying lessons learned, but NDF project close-out reports do not contain information that could enable NDF to better manage its program. For example, not all close-out reports address the results of the project. NDF uses e-mails and face-to-face meetings to communicate lessons learned without documenting them. Established standards suggest that lessons learned should be transferred to a database for use in future projects and activities, an action State officials said they are considering taking. NDF has also produced a project management guide to encourage project managers to use standard procedures and write close-out reports, but does not require the use of this guide. In addition, the guide does not detail a format for project managers to use in preparing their close-out reports or list the information that project managers must address. NDF officials said they plan to develop standard operating procedures to address these issues, but had not done so as of November 2012. GAO recommends that State (1) develop a methodology for determining the amount of carryover reserves needed to meet program requirements, (2) develop guidance for determining when inactive NDF projects should be closed out, (3) conduct periodic program evaluations of NDF, and (4) establish requirements for the types of information to be included in project close-out reports. State agreed with the recommendations.
Before 1978, the former Civil Aeronautics Board regulated airlines, controlling the fares they could charge and the routes they could fly. Concerned that these practices caused economic inefficiencies and inhibited the growth of domestic air transportation, the Congress deregulated the industry in 1978. Deregulation was expected to result in fares that more accurately reflected airlines’ costs and, overall, more vigorous competition throughout the nation. Since deregulation, numerous new airlines have started operations, while established airlines have expanded into new markets. Many new airlines that began operations shortly after deregulation have failed, as have some long-established carriers, such as Eastern and Pan Am. Nevertheless, a few airlines that were formed in the wake of deregulation still operate, including America West and Midwest Express. In the early 1990s, over a decade after the industry was deregulated, a second wave of new airlines emerged. Airlines such as Vanguard, Spirit, AirTran, and Frontier now compete with established carriers in selected markets throughout the United States. These new entrants’ cost structures tend to be lower than those of their established competitors, permitting them to charge lower fares to a variety of destinations. In recent years, we have reported that these airlines’ ability to enter and compete in selected domestic markets has resulted in lower fares and better service in these markets. However, we also found that many other communities have not yet experienced vigorous competition and have not realized these fare and service-quality benefits. In 1990, we reported that from 1979—the earliest year for which reliable data on fares were available—through 1988, the average fare per passenger mile, adjusted for inflation, declined by 9 percent at airports serving small communities, 10 percent at airports serving medium-sized communities, and 5 percent at airports serving large communities. In 1996, we reported that the average fare per passenger mile, adjusted for inflation, continued to fall across all sizes of communities but that regional variations were evident. The largest decreases in fares since deregulation occurred at airports located in the West and Southwest, and increases in fares were noted at airports located in the Southeast and in the Appalachian region. The quantity of service, as measured by the number of both departures and available seats, had increased for all airport groups. The quality of service, as measured by factors such as the number of destinations served nonstop and the type of aircraft used, showed mixed results, especially for airports serving small and medium-sized communities. In 1996, we also reported that three types of “operating barriers” discouraged entry by airlines at several major U.S. airports. First, from 1990 through 1996 a few established airlines had markedly increased their combined control of takeoff and landing times (slots) at airports in Chicago, New York, and Washington. As a result, little new entry had occurred at these airports during this period. Second, long-term, exclusive-use gate leases at six other major airports prevented airlines that did not serve those airports from securing the necessary facilities to begin service and compete on equal terms with incumbent airlines. Third, the federal perimeter rule barring nonstop flights exceeding 1,250 miles exacerbated the impact of slots by preventing airlines from gaining entry into Reagan Washington National Airport. 
For all sizes of communities, average airfares have continued the decline noted in our 1996 report. Average airfares (expressed in constant dollars per passenger mile) fell about 21 percent from 1990 through the second quarter of 1998. On average, airports serving medium-large communities had the greatest decrease in fares, and airports serving small communities, the smallest decline. However, such averages conceal large variations within the sizes of communities. For example, among passengers flying trips of similar distances to or from airports in communities of similar size in 1998, a passenger traveling from one airport may have paid almost 3 times as much as a passenger traveling from a different airport. Our review of changes in airfares from 1990 through the second quarter of 1998 indicates that the trends of moderately decreasing average airfares identified in our earlier reports continued at airports serving most communities. Of the 171 airports we examined over the period, average airfares declined at 168. At some airports, the decrease was especially large. For example, at 22 airports, average fares declined by 30 percent or more in constant dollars. At many airports, the decline coincided with the introduction of competing service, often from a low-fare carrier, which, in most cases, was Southwest Airlines. Figure 1 shows the cities in which airfares have declined by the greatest percentage since 1990. At 3 of the 171 airports we examined, average airfares have increased since 1990. These airports serve Duluth, Minnesota (+2.3 percent); Fargo, North Dakota (+0.8 percent); and Dallas, Texas (Love Field, +7.4 percent). At each of these airports, generally only a single airline provided service. Northwest Airlines dominates Duluth and Fargo, and Southwest dominates Dallas Love Field. We were not able to examine in detail each market to determine what factors may have contributed to the changes in average fares. For example, we were unable to account for differences at airports where competition—and thus airfares—on individual routes may vary widely. On routes out of St. Louis where low-cost airlines offer competing service, fares may be considerably lower than on other routes from the same airport where no such competition exists. Whether the overall average airfare for the airport increased or decreased over time depends on the number of passengers flown on all of those routes and the fares they paid. Similarly, we were not able to examine differences in the extent to which certain destinations (such as Las Vegas or Orlando) tend to be more heavily dominated by leisure travel than by business travel. Leisure travel tends to be more price-sensitive, and average airfares in those markets thus tend to be lower than those where there is more business travel. Because some significant changes can occur over the span of nearly 9 years, we examined fare changes from 1990 through 1993 and then from 1994 through the second quarter of 1998. Table 1 summarizes the change in average airfares over the period for airports serving each size of community, according to the length of the passengers’ trips. Although average airfares decreased for most communities throughout the period, since 1994 average airfares increased for certain segments of the traveling public—mostly for passengers making short trips to or from medium-large and large communities.
For the 171 airports we examined, from 1994 through 1998 average airfares decreased at 132 airports, suggesting that most communities—small, medium-sized, medium-large, and large—and travelers from those communities have benefited from deregulation. Average fares for passengers making certain trips to or from several airports dropped by more than 50 percent over the period. Average fares for short- and long-haul trips from St. Petersburg, Florida; medium-haul trips from Dallas Love Field; and long-haul trips from Mission, Texas, and Grand Junction, Colorado, decreased from 52 to 77 percent. Fares in some of those markets appear to have been influenced by the introduction of additional competition, especially from low-cost airlines. During the same period, however, average fares increased at 39 airports—13 serving small communities, 4 serving medium-sized ones, 9 serving medium-large ones, and 13 serving large ones. In most cases, the control of a large percentage of the airports’ passengers by a single airline contributed to the increase in fares. Of the 13 airports serving small communities, 12 were served by an individual airline that controlled at least 40 percent of the traffic. Of the four airports serving medium-sized communities, three were dominated by an individual airline that carried more than 40 percent of the traffic. Of the nine airports serving medium-large communities, seven were dominated by an individual airline that carried more than 40 percent of the traffic. Of the 13 airports serving large communities, seven are hub facilities for major airlines. For example, the average fares for passengers making short trips to or from Greensboro, North Carolina; Roanoke and Norfolk, Virginia; Charleston, South Carolina; and Buffalo, New York, all increased by 30 percent or more from 1994 through 1998. Low-cost airlines, such as AirTran, American Trans Air, or Southwest, served none of the 17 airports at small or medium-sized communities in 1998. For the 22 airports serving medium-large and large communities where average airfares increased since 1994, individual low-cost airlines had market shares in 1998 that exceeded 10 percent at only four: Houston Hobby Field, Dallas Love Field, Cleveland Hopkins International Airport, and Midland/Odessa, Texas. At each of those airports, the low-cost airline was Southwest. Figure 2 shows the location of these 39 airports, most of which are in the East and Southeast. For passengers flying to or from airports serving communities of similar sizes on trips of similar distances, the fare at one airport can cost almost 3 times as much per mile flown as the fare at a different airport. For example, passengers flying to or from Las Vegas in 1998 paid, on average, 9 cents per mile, while passengers flying to or from Charlotte paid 28 cents per mile. Moreover, passengers flying to or from airports serving small and medium-sized communities in 1998 paid, on average, over 12 percent more than the national average airfare. Similarly, passengers flying to or from airports serving large communities in 1998 paid, on average, over 8 percent more than the national average. Appendix II summarizes the changes in average airfares for each of the cities we examined during this review. Our review of air service quality factors for scheduled airline departures from May 1978 through May 1998 indicates that the overall quality at most communities served by the airports we reviewed has improved since deregulation.
However, the extent to which the overall quality of air service has improved for the 171 airports that we reviewed varies by the size of the community served. In general, airports serving larger communities have benefited from a greater increase in the overall quality of air service—the number of departures and seats, jet departures, and destinations served by nonstops—than those serving smaller communities. For example, 90 percent of airports serving large and medium-large communities had an increase in both departures and available seats compared with 45 percent of the airports serving small and medium-sized communities. Assessing the trends in the overall quality of air service is difficult because many factors contribute to the quality of service. This assessment requires, among other things, a subjective weighting of the relative importance of each measure that is generally considered a dimension of quality. In assessing the overall quality of air service received by each size of community included in our study, we used four commonly accepted measures: the number of (1) departures, (2) available seats, (3) destinations served by nonstop and one-stop flights, and (4) jet departures compared with the number of turboprop departures. (We used these same measures in our earlier reports.) Nonstop service is generally considered to be preferable to flights requiring a stop, and jet aircraft are preferred over turboprop aircraft. Most communities served by the airports we reviewed had more commercial departures in May 1998 than in May 1978. During this period, departures increased at 139 of the 171 airports we reviewed. Increases were most likely to occur at airports serving larger communities. All airports at large communities, with the exception of Reagan Washington National Airport (where the number of takeoffs and landings is restricted by federal law), and most airports serving medium-large communities had an increase in departures. In comparison, 56 of the 84 airports in small and medium-sized communities had an increase in departures. From 1978 through 1998, 118 of the 171 airports we reviewed had an increase in the number of available seats, especially those airports serving larger communities. Overall, for airports in large and medium-large communities, the number of available seats increased by about 87 percent. Every airport serving large communities and all but 7 of the 42 airports in medium-large communities experienced an increase. For almost one-quarter of the airports serving large communities, such as Phoenix’s Sky Harbor Airport and Houston’s Hobby Airport, this increase exceeded 200 percent. In contrast, slightly less than half of the airports at small and medium-sized communities in our review had an increase in seats, although about 67 percent had an increase in departures. To some extent this difference can be attributed to the substitution of more frequent service from smaller turboprops for fewer departures of larger jets. Since 1978, the airport serving Champaign, Illinois, for example, had a 66-percent increase in the number of departures and a 34-percent decrease in the number of seats. During this same time period, jet service from this airport was eliminated and replaced entirely with propeller aircraft. In addition, 27 of the 84 airports serving small and medium-sized communities experienced a decline in both scheduled departures and available seats.
These 27 airports were largely concentrated in the upper Midwest—including Lincoln, Nebraska; Rochester, Minnesota; and Bismarck, North Dakota—and the South—including Daytona Beach, Florida; Montgomery, Alabama; and Shreveport, Louisiana. Figure 3 summarizes the percent change in the number of scheduled departures and number of available seats for each category of community. Appendix III contains the information on the number of departures and available seats for each of the 171 airports that we reviewed for May 1978 and May 1998. Airports serving large and medium-large communities have been the primary beneficiaries of increased nonstop flights. Nonstop flights increased for 71 percent of the airports serving large and medium-large communities but only for 25 percent of the airports serving small and medium-sized communities. Of the 84 airports at small and medium-sized communities that we reviewed, 37 experienced a decline in both nonstop and one-stop service. Only airports serving medium-large communities experienced an increase in one-stop flights. Figure 4 summarizes the percent change in the total number of destinations served by nonstop and one-stop flights by category of community. Appendix IV provides detailed information for each community on the number of destinations served by nonstop and one-stop flights for May 1978 and May 1998. Overall, all sizes of communities experienced an increase in the number of turboprop departures, but primarily airports serving large and medium-large communities benefited from an increase in the number of jet departures. Of these airports, 75 percent had an increase in jet departures compared with 24 percent of the airports in small and medium-sized communities. Overall, the actual number of jet departures increased by 72 percent at airports serving large communities and by 57 percent at airports serving medium-large communities but declined by 6 percent at airports serving medium-sized communities and by 14 percent at airports serving small communities. Figure 5 summarizes the percent change in the number of jet departures by size of community. Appendix V provides detailed information for each airport for May 1978 and May 1998. In 1997, over 143 million passengers (23 percent of the total U.S. domestic enplanements that year) traveled through 10 key airports in the East and upper Midwest. In the past, we reported that competition was constrained at these airports because of long-term gate leases or limits on the number of available takeoff and landing slots. During our review, we found that the six airports we had previously described as gate-constrained—Charlotte, Cincinnati, Detroit, Minneapolis, Newark, and Pittsburgh—continue to be predominantly served by one airline. Airport officials and airline representatives said that gates are available to airlines that do not currently serve those airports. However, few of those airlines expressed interest in serving those markets because access to facilities remains difficult and other factors, generally relating to the size of the incumbent carrier and its associated market strength, prevent them from entering at these airports. At the four slot-constrained airports—Chicago O’Hare, New York’s LaGuardia and Kennedy, and Reagan Washington National—established airlines hold the majority of slots, while the share of slots held by airlines started after deregulation remains low.
Finally, the federal perimeter rule, which prohibits flights longer than 1,250 miles from Reagan Washington National Airport, continues to prevent certain airlines from serving that airport from some of their hub operations, preventing millions of passengers in western states from gaining nonstop access to the airport. Restrictive gate leases are a barrier to establishing new or expanded service at some airports. These leases permit an airline to hold exclusive rights to use most of an airport’s gates over a long period of time, commonly 20 years. Previously, we reported that such leases made it more difficult for nonincumbents to secure necessary airport facilities on equal terms with incumbent airlines. Airlines established after deregulation, especially new entrant airlines, said access to facilities at some airports—Charlotte, Cincinnati, Detroit, Minneapolis, Newark, and Pittsburgh—was difficult. Airport officials and one airline told us that other marketing factors—not gate-leasing arrangements—acted as barriers to entry. As table 2 shows, the vast majority of gates at each of these airports continue to be leased to one established airline. Airport officials at Charlotte, Cincinnati, and Minneapolis said that it is in the best interest of the airports to lease gates over a long term to maintain a stable stream of revenue. For example, Cincinnati airport officials said they depend on signatory airlines to pay their debt obligations. Delta Air Lines—which dominates the Cincinnati airport and holds 50 of the airport’s total 68 jet gates—financed the construction of 43 of those gates. Officials at each of the airports we visited said they have spoken with or actively recruited nonincumbent airlines to provide new service. Airport officials and one airline official told us that other factors, rather than restrictive gate leases, prevented nonincumbents from providing service at their airports. These factors included the size of the incumbent carriers (coupled with those airlines’ marketing strengths, such as their frequent flyer programs, corporate discounts, and arrangements with local travel agents), the fear of perceived predatory conduct by the major incumbent carrier, and a lack of adequate capitalization. A Charlotte airport official said that the term “gate-constrained” no longer applied, given the airport’s flexibility in making some gates available for lease and its willingness to discuss new service with interested airlines. A limited number of gates are available for new service at three of the airports we visited, although they may not be available at the times or days that new airlines might prefer. Airport officials at Detroit, Minneapolis, and Newark said there are no gates available now. For the three airports that have available gates, however, incumbent airlines tended to use them. For example, Pittsburgh airport officials said that they have a total of seven jet gates available, but US Airways is the only airline that uses them at this time. In addition, Cincinnati airport officials said there are three gates leased by US Airways that are not being used as fully as they could be. Officials from airlines that started after deregulation told us that access to facilities was difficult at some airports, including Newark. These airline officials cited a lack of cooperation by airport officials in identifying available gates and the reluctance of both the airports and incumbent airlines to offer leases or subleases for more than a short term.
Major established airlines have expanded their holdings of domestic air carrier takeoff and landing slots at three of the four slot-constrained airports—Reagan Washington National, New York Kennedy, and New York LaGuardia. Only at Chicago O’Hare did the level of slot concentration held by major established airlines decrease slightly from 1996 to 1999. By contrast, the share held by airlines that started after deregulation remains low. (See table 3.) Our October 1996 report recommended that DOT redistribute some slots to increase competition, taking into account the investments made by those airlines at each of the slot-controlled airports. DOT subsequently began to use the authority that the Congress gave it in 1994 to allow additional slots at O’Hare, LaGuardia, and Kennedy. Through January 1999, DOT granted 62 slot exemptions at O’Hare, 30 at LaGuardia, and 6 at Kennedy. DOT has also granted a total of 48 exemptions for Essential Air Service and 19 for seasonal international service. The ability of certain nonincumbent airlines to begin service at New York LaGuardia and Reagan Washington National airports is further limited by rules that prohibit incoming and outgoing flights that exceed a certain distance (commonly known as perimeter rules). At LaGuardia, under a rule established by the Port Authority of New York and New Jersey, nonstop flights exceeding 1,500 miles are prohibited. At Reagan Washington National, federal law limits the number of hourly operations and prohibits nonstop flights exceeding 1,250 miles. The perimeter rules were originally designed to promote Kennedy and Dulles airports as the designated long-haul airports for the New York and Washington metropolitan areas, respectively, and to alleviate air traffic congestion in those areas. The practical effect, however, has been to limit entry and exacerbate the impact of slots. Specifically, because their principal hubs lie within the perimeter, each of the seven largest established carriers is able to serve Reagan Washington National from its principal hub. By contrast, the rules prevent the second largest airline started after deregulation—America West—from serving LaGuardia and Reagan Washington National from its hub in Phoenix and restrict other airlines with hub operations in the West from serving either airport on a nonstop basis. Thus, for example, the 92 million passengers that flew out of Los Angeles, Phoenix, Portland, Salt Lake City, San Francisco, and Seattle airports in 1997 could not fly nonstop into Reagan Washington National. Officials with Delta Air Lines told us that they would expand service from Salt Lake City to Reagan Washington National if the perimeter rule were relaxed or abolished. We recognize that the communities where the airports are located will be concerned with any proposals to grant additional slots because of potential congestion, noise, and safety problems. These are sensitive issues, and, ultimately, any final decisions about them can be best resolved through congressional deliberations. Airfares at the six gate-constrained and four slot-constrained airports were consistently higher than airfares at nonconstrained airports that serve similar-sized communities, especially in short- and medium-haul markets. In other words, passengers pay a premium to fly to and from these airports.
In 1998, overall weighted average fares ranged from 4 percent higher at Kennedy Airport to 83 percent higher at Pittsburgh International Airport compared with fares at nonconstrained airports serving communities of comparable size. The greatest differences in airfares in 1998 were in short-haul markets. Average airfares in short-haul markets in 1998 ranged from 29 percent higher at Kennedy to 120 percent higher at Pittsburgh. In medium-haul markets, airfares ranged from 15 percent lower at Detroit to 63 percent higher at Charlotte. In long-haul markets, airfares ranged from 6 percent lower at Reagan Washington National to 42 percent higher at Charlotte. Table 4 summarizes the differences in average airfares between the 10 constrained airports and other airports serving communities of comparable size for 1998. Airfares have continued to decline for all sizes of communities since deregulation, although average airfares have increased for certain segments of the traveling public, especially since 1994. Similarly, the overall quality of air service has improved except for that in some small and medium-sized communities. Since deregulation, a number of major airlines have dominated operations at 10 key airports, leading to constrained competition and higher airfares. Slots and the federal perimeter rule continue to exacerbate the impacts of barriers by limiting the number of landings and takeoffs and prohibiting incoming and outgoing flights that exceed a certain distance at certain airports. Thus, while deregulation continues to benefit the majority of the nation’s travelers, there remain some communities where those benefits have not been realized. We provided DOT with copies of a draft of this report for its review and comment. We spoke with DOT officials from the Office of the Secretary, including the Deputy Assistant Secretary for Aviation and International Affairs. DOT generally agreed with the information in the report and provided a number of comments to clarify issues addressed in the report; we incorporated these comments as appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after the date of this letter. At that time, we will send copies to the Secretary of Transportation; the Director, Office of Management and Budget; and other interested parties. We will send copies to others upon request. We conducted our work from November 1998 through February 1999 in accordance with generally accepted government auditing standards. If you have any questions, please call me at (202) 512-2834. Major contributors to this report are listed in appendix VI. To analyze changes in airfares since 1990, we reviewed data on fares covering the period 1990 through the second quarter of 1998 (the most current information available at the time of our work). To provide consistent, comparable information in updating our prior report on trends in airfares since deregulation at airports serving small, medium-sized, and large communities, we reviewed data on the same 112 airports that we examined in our two prior reports.
We selected those airports using the following criteria: All of the airports were in metropolitan statistical areas, that is, (1) an area that included at least one city with 50,000 or more inhabitants or (2) an area with an urbanized area as defined by the Census Bureau (with at least 50,000 inhabitants) and a total metropolitan population of at least 100,000 (75,000 in New England). Small communities were those with populations in a metropolitan statistical area of 300,000 or less, medium-sized communities were those with populations of 300,001 to 600,000, and large communities were those with populations of 1.5 million or more. In our prior reports, we used 1984 U.S. Census data to provide information on community sizes midway between the years reviewed (1979, 1984, and 1988) for each airport location. While keeping the same sample of airports for this report, we reviewed U.S. Census data for 1996 to identify changes in communities’ populations. We did this to ensure that, had some populations changed significantly since our previous report, we would compare those communities with others of similar size. Almost all of the airports were among the 175 airports with the largest number of passenger enplanements in the nation in 1997. This criterion was necessary because as an airport’s rank falls, the number of tickets from that airport in the Department of Transportation’s (DOT) “Passenger Origin-Destination Survey” declines. A smaller number of tickets per route increases the potential for sampling error and may result in calculations that are not representative of the airport’s overall traffic. All of the airports were located within the 48 contiguous states because airports outside the contiguous states are often special cases. Travel from airports located in Alaska, Hawaii, Puerto Rico, and the Virgin Islands is often for very short distances (between islands) and very long distances (between Alaska or Hawaii and the contiguous states) or may take the place of ground transportation (between cities in Alaska). In addition, we added several airports in communities that had not been included in the previous reports. In general, these are airports, also among the 175 largest in the continental United States, that serve medium-large communities with populations between 600,001 and 1.5 million. We excluded Orlando/Sanford airport because origin and destination data from 1991 to 1998 were lacking, and we excluded North Las Vegas Field because it had unusually high fares. We added four other cities—Albany, Huntington, Rochester, and Syracuse—following discussions with the staffs of Representative William O. Lipinski and Representative Peter A. DeFazio for further insight into fares and service for airlines serving small and rural communities. We obtained the data on airfares from a private contractor, Data Base Products, Inc., which gets its original data from DOT. Data Base Products, Inc., makes a number of revisions to the data submitted to DOT by the airlines to correct for biases and obvious reporting errors. Data Base Products, Inc., also incorporates data from the major airlines’ regional commuter partners, thereby providing a more complete picture of passengers’ true itineraries and costs.
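As a simple illustration of the community-size categories used above, the following sketch classifies airports by the population of the metropolitan area they serve. The thresholds come from the criteria described in this appendix; the airports and populations shown are hypothetical.

```python
# Classify a community by metropolitan population, using the thresholds
# described above. The airports and populations listed are hypothetical.
def community_size(population):
    if population <= 300_000:
        return "small"
    if population <= 600_000:
        return "medium-sized"
    if population < 1_500_000:
        return "medium-large"
    return "large"

metro_populations = {
    "Airport A": 250_000,
    "Airport B": 450_000,
    "Airport C": 900_000,
    "Airport D": 2_400_000,
}
for airport, population in metro_populations.items():
    print(airport, community_size(population))
```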
To enhance the comparability of the data, we converted the airfare information into constant 1998 dollars. Because the number of passengers traveling on routes can change over time, examining fares at two different times could reflect differences in the number of travelers going to various destinations rather than fare changes. Therefore, as with our prior reports, we held the distribution of passengers across distance categories constant at the level found with the latest four quarters ending with the second quarter of 1998. To add to the information that we published in our previous reports, we also calculated averages for travel of various distances to or from these airports. We believe that this additional information provides greater context than the basic average fare. We recognize that few if any passengers may actually have paid an “average fare” in any one market but believe that such averages provide insightful information for analyzing broad trends in airfares over time. Because we analyzed data that were drawn from a statistical sampling of tickets purchased, each estimate developed from the sample has a measurable precision, or sampling error. The sampling error is the maximum amount by which the estimate obtained from a statistical sample can be expected to differ from the true universe value. We did not calculate the sampling error for each airport’s fare estimates during this update because the sampling errors calculated in the previous two reports were consistently small. We believe that the same approximate sampling errors would apply to the estimates developed for this review.

To analyze changes in the quality of air service for these same 171 airports, we obtained data on scheduled airline service from DOT’s Bureau of Transportation Statistics. We used these data to analyze changes from 1978 through 1998 in four measures of the quality of service that we reported in the past. Those measures are (1) the total number of scheduled nonstop departures from each airport, (2) the total number of seats available on those flights, (3) the number of scheduled destinations served by nonstop and one-stop flights from each airport, and (4) the number of scheduled jet and turboprop departures at those airports. To reduce “seasonality” associated with air travel (that is, to avoid having the data reflect higher amounts of travel associated with summer vacations or reduced winter travel), we used information from May 1978 and May 1998.

Finally, to determine whether certain airport limitations that we had previously identified continued to restrict competition, we visited those airports to update our work concerning markets restricted by gate, slot, or perimeter barriers. Specifically, we conducted interviews with representatives of Charlotte-Douglas International Airport, Cincinnati-Northern Kentucky International Airport, Detroit Wayne County Airport, Minneapolis-St. Paul International Airport, and Pittsburgh International Airport. We held a formal teleconference with officials representing Newark International Airport. We obtained additional information and perspectives on barriers to entry from officials representing Access Air, Continental Airlines, Delta Air Lines, Eastwind Airlines, Legend Airlines, Northwest Airlines, US Airways, Spirit Airlines, and Vanguard Airlines.
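The fare-averaging approach described earlier in this appendix, which expresses fares in constant 1998 dollars and holds the distribution of passengers across distance categories fixed, could be sketched roughly as follows. This is a simplified illustration only; the category shares, fares, and deflator are hypothetical and do not reproduce the report's actual calculations.

```python
# Illustrative sketch of the averaging approach described above: fares are
# expressed in constant 1998 dollars, and the share of passengers in each
# distance category is held fixed so that changes over time reflect fares
# rather than shifts in where passengers fly. All values are hypothetical.
FIXED_SHARES = {"short": 0.45, "medium": 0.35, "long": 0.20}  # sums to 1.0

def weighted_avg_fare(fares_cents_per_mile, deflator_to_1998):
    """Average cents per passenger mile across distance categories,
    using the fixed passenger shares and a constant-dollar adjustment."""
    return sum(FIXED_SHARES[cat] * fare * deflator_to_1998
               for cat, fare in fares_cents_per_mile.items())

# Hypothetical fares for one airport (nominal cents per passenger mile).
fares_1990 = {"short": 22.0, "medium": 15.0, "long": 11.0}
fares_1998 = {"short": 19.0, "medium": 13.5, "long": 10.0}

avg_1990 = weighted_avg_fare(fares_1990, deflator_to_1998=1.28)  # 1990 -> 1998 dollars
avg_1998 = weighted_avg_fare(fares_1998, deflator_to_1998=1.00)
print(f"Change in average fare: {100 * (avg_1998 - avg_1990) / avg_1990:.1f} percent")
```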
To discuss the effect that the perimeter rule may have on competition, we met with officials representing the Metropolitan Washington Airports Authority, which oversees both Reagan Washington National Airport and Washington Dulles International Airport. We also analyzed airfare data for these airports by comparing their average fares against those for communities of comparable size but excluding the other constrained airports. Because each airport has a different distribution of flight lengths, we made comparisons within each of the three distance categories. To get an overall comparison for each of the 10 constrained airports, we then took a weighted average of the comparisons within each distance category. The resulting percent differences are therefore adjusted for distance as well as for the particular passenger distributions at each airport.

[Detailed appendix tables omitted: average fares in cents per passenger mile (constant dollars) by airport, including large-community airports; total seats, May 1998; and nonjet departures, May 1998.]
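The distance-adjusted comparison described above could be sketched roughly as follows: within each distance category, the constrained airport's average fare is compared with the average at nonconstrained airports serving communities of comparable size, and the resulting percent differences are combined using the constrained airport's passenger distribution as weights. The fares and shares below are hypothetical and do not reproduce the report's data.

```python
# Illustrative sketch of the distance-adjusted fare comparison described
# above. All fares (cents per passenger mile) and passenger shares are
# hypothetical.
def weighted_fare_premium(constrained_fares, comparable_fares, passenger_shares):
    """Overall percent difference in fares, weighted by the constrained
    airport's distribution of passengers across distance categories."""
    premium = 0.0
    for category, share in passenger_shares.items():
        pct_diff = (100.0 * (constrained_fares[category] - comparable_fares[category])
                    / comparable_fares[category])
        premium += share * pct_diff
    return premium

constrained = {"short": 33.0, "medium": 18.0, "long": 12.0}
comparable = {"short": 20.0, "medium": 15.0, "long": 11.0}
shares = {"short": 0.50, "medium": 0.30, "long": 0.20}

print(f"Overall fare premium: {weighted_fare_premium(constrained, comparable, shares):.0f} percent")
```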
Related GAO Products

Aviation Competition: Effects on Consumers From Domestic Airline Alliances Vary (GAO/RCED-99-37, Jan. 15, 1999).
Aviation Competition: Proposed Domestic Airline Alliances Raise Serious Issues (GAO/T-RCED-98-215, June 4, 1998).
Domestic Aviation: Service Problems and Limited Competition Continue in Some Markets (GAO/T-RCED-98-176, Apr. 23, 1998).
Aviation Competition: International Aviation Alliances and the Influence of Airline Marketing Practices (GAO/T-RCED-98-131, Mar. 19, 1998).
Airline Competition: Barriers to Entry Continue in Some Domestic Markets (GAO/T-RCED-98-112, Mar. 5, 1998).
Domestic Aviation: Barriers Continue to Limit Competition (GAO/T-RCED-98-32, Oct. 28, 1997).
Airline Deregulation: Addressing the Air Service Problems of Some Communities (GAO/T-RCED-97-187, June 25, 1997).
International Aviation: Competition Issues in the U.S.-U.K. Market (GAO/T-RCED-97-103, June 4, 1997).
Domestic Aviation: Barriers to Entry Continue to Limit Benefits of Airline Deregulation (GAO/T-RCED-97-120, May 13, 1997).
Airline Deregulation: Barriers to Entry Continue to Limit Competition in Several Key Domestic Markets (GAO/RCED-97-4, Oct. 18, 1996).
Domestic Aviation: Changes in Airfares, Service, and Safety Since Airline Deregulation (GAO/T-RCED-96-126, Apr. 25, 1996).
Airline Deregulation: Changes in Airfares, Service, and Safety at Small, Medium-Sized, and Large Communities (GAO/RCED-96-79, Apr. 19, 1996).
International Aviation: Airline Alliances Produce Benefits, but Effect on Competition Is Uncertain (GAO/RCED-95-99, Apr. 6, 1995).
Airline Competition: Higher Fares and Less Competition Continue at Concentrated Airports (GAO/RCED-93-171, July 15, 1993).
Computer Reservation Systems: Action Needed to Better Monitor the CRS Industry and Eliminate CRS Biases (GAO/RCED-92-130, Mar. 20, 1992).
Airline Competition: Effects of Airline Market Concentration and Barriers to Entry on Airfares (GAO/RCED-91-101, Apr. 26, 1991).
Airline Deregulation: Trends in Airfares at Airports in Small and Medium-Sized Communities (GAO/RCED-91-13, Nov. 8, 1990).
Airline Competition: Industry Operating and Marketing Practices Limit Market Entry (GAO/RCED-90-147, Aug. 29, 1990).
Airline Competition: Higher Fares and Reduced Competition at Concentrated Airports (GAO/RCED-90-102, July 11, 1990).
Airline Deregulation: Barriers to Competition in the Airline Industry (GAO/T-RCED-89-65, Sept. 20, 1989).
Airline Competition: Fare and Service Changes at St. Louis Since the TWA-Ozark Merger (GAO/RCED-88-217BR, Sept. 21, 1988).
Competition in the Airline Computerized Reservation Systems (GAO/T-RCED-88-62, Sept. 14, 1988).
Airline Competition: Impact of Computerized Reservation Systems (GAO/RCED-86-74, May 9, 1986).
Airline Takeoff and Landing Slots: Department of Transportation’s Slot Allocation Rule (GAO/RCED-86-92, Jan. 31, 1986).
Deregulation: Increased Competition Is Making Airlines More Efficient and Responsive to Consumers (GAO/RCED-86-26, Nov. 6, 1985).
Pursuant to a congressional request, GAO reviewed and updated its previous work on airfares and service and reexamined the effect that certain barriers have had on these measures, focusing on: (1) how airfares have changed since 1990 for travel to and from 171 airports serving various U.S. communities; (2) how the quality of air service has changed since 1978 for travel to and from these airports; and (3) the extent to which certain barriers to entry--restrictive gate-leasing arrangements, controls on the number of allowable takeoffs and landings at some airports, and limits on the distance that flights to or from some airports can cover--influence competition at affected airports. GAO noted that: (1) overall, average airfares declined about 21 percent in constant dollars from 1990 to the second quarter of 1998; (2) not all airports realized similar decreases in airfares; (3) airports serving medium-large communities had the greatest average decrease in fares, and airports serving small communities had the least average decline; (4) average airfares declined at 168 of the 171 airports GAO examined, often with the introduction of competing service from a low-fare carrier; (5) on the other hand, since 1994, average airfares increased for passengers traveling from 39 airports and generally for passengers making short trips to or from airports serving medium-large and large communities; (6) for passengers flying to or from airports in communities of similar size on trips of similar distances in 1998, one passenger traveling from one airport may have paid almost 3 times as much as a passenger traveling from a different airport; (7) while GAO identified such differences in fares, it should be noted that in developing this report, GAO was unable to account for all factors that may have contributed to them, such as the presence of low-cost competition on particular routes or the extent to which travel on routes tended to reflect generally lower-fare leisure travel or more costly business traffic; (8) the overall quality of air service has improved for airports serving large and medium-large communities, but indicators are mixed for airports in small and medium-sized communities; (9) the quantity of the air service available, as measured by the number of departures and available seats, has increased for most of the 171 airports GAO reviewed; (10) airports in large and medium-large communities have experienced a substantial increase in the amount of air service; (11) however, some airports have less air service today than they did in 1978, when the industry was deregulated; (12) other indicators of the quality of air service, including those that measure the number of destinations served by nonstop flights and the type of aircraft used, generally show that quality has improved substantially for airports serving large and medium-large communities; (13) for airports serving small and medium-sized communities the results are mixed; (14) at the 10 airports that, in 1996, GAO reported had restrained competition either because of restrictive gate-leasing arrangements or limits on the number of available takeoff and landing times, competition has changed little; and (15) airfares at these 10 airports continue to be consistently higher than those at nonconstrained airports serving communities of comparable size.
The roles of certain key federal officials initially involved in the advisory board’s review of the dose reconstructions may not have been sufficiently independent and actions were taken to replace these officials. Nonetheless, continued diligence by HHS is required to prevent such problems from recurring as new candidates are considered for these roles. Initially, the project officer assigned responsibility for reviewing the monthly progress reports and monitoring the technical performance of the contractor was also a manager of the NIOSH dose reconstruction program being reviewed. In addition, the designated federal officer for the advisory board, who is responsible for scheduling and attending board meetings, was the director of the dose reconstruction program being reviewed. In response to concerns about the appearance of conflicting roles, the director of NIOSH replaced both of these officials in December 2004 with a senior NIOSH official not involved in the NIOSH program under review. The contractor and members of the board told us that implementation of the contract improved after these replacements were made. With regard to structural independence, we found it appropriate that the contracting officers, who are responsible for managing the contract on behalf of the advisory board, have been federal officials with the Centers for Disease Control and Prevention (CDC), NIOSH’s parent agency. The contracting officers do not have responsibilities for the NIOSH program under review and are not accountable to its managers. Members of the advisory board helped facilitate the independence of the contractor’s work by playing the leading role in developing and approving the initial statement of work for the contractor and the independent cost estimate for the contract. The progress of the contracted review of NIOSH’s site profiles and dose reconstructions has been hindered by the complexity of the work. Specifically, in the first 2 years, the contractor spent almost 90 percent of the $3 million that had been allocated to the contract for a 5-year undertaking. Various adjustments have been made in the review approach in light of the identified complexities, which were not initially understood. However, further improvements could be made in the oversight and planning of the review process. First, the contractor’s expenditure levels were not adequately monitored by the agency in the initial months and the contractor’s monthly progress reports did not provide sufficient details on the level of work completed compared to funds expended. The monthly report for each individual task order was subsequently revised to provide more details but developing more integrated data across the various tasks could further improve the board’s ability to track the progress of the overall review. Second, while the advisory board has made mid-course adjustments to the contractor’s task orders and review procedures, the board has not comprehensively reexamined its long-term plan for the overall project. The board revised the task orders for the contractor several times, in part to reflect adjustments made as the board gained a deeper understanding of the needs of the project. Nonetheless, the board has not reexamined its original plan for the total number of site profile and dose reconstruction reviews needed, and the time frames and funding levels for completing them. Third, there is still a gap with regard to management controls for the resolution of the findings and recommendations that emerge from SC&A’s review. 
The advisory board developed a six-step resolution process to help resolve technical issues between the contractor and NIOSH, and this process uses matrices to track the findings and recommendations of the contractor and advisory board. However, NIOSH currently lacks a system for documenting that changes it agrees to make as part of this resolution process are implemented. With regard to reviewing special exposure cohort petitions, the advisory board has asked for and received the contractor’s assistance, expanded the contractor’s charge, and acknowledged the need for the board to review the petitions in a timely manner. The board has reviewed eight petitions as of October 2005, and the contractor assisted with six of these by reviewing the site profiles associated with the facilities. The contractor will play an expanded role by reviewing some of the other submitted petitions and NIOSH’s evaluation of those petitions and recommending to the advisory board whether the petitioning group should be added to the special exposure cohort. The contractor will also develop procedures for the advisory board to use when reviewing petitions. While NIOSH is generally required by law to complete its review of a petition within 180 days of determining that the petition has met certain initial qualification requirements, the advisory board has no specified deadline for its review of petitions. However, the board has discussed the fact that special exposure cohort petition reviews have required more time and effort than originally estimated and that the advisory board needs to manage its workload in order to reach timely decisions. Credibility is essential to the work of the advisory board and the contractor, and actions were taken in response to initial concerns about the independence of federal officials in certain key roles. Nonetheless, it is important for HHS to continue to be diligent in avoiding actual or perceived conflicts of roles as new candidates are considered for these roles over the life of the advisory board. The advisory board’s review of site profiles and dose reconstructions has presented a steep learning curve for the various parties involved. These experiences have prompted the board to make various adjustments to the contractor’s work that are intended to better meet the needs of the review, such as the establishment of a formal six-step resolution process that increases transparency. Nonetheless, further improvements could be made to the oversight and planning of the contracted review. Even though the advisory board has made numerous midcourse adjustments to the work of the contractor, the board has not comprehensively reexamined its long-term plan for the project to determine whether the plan needs to be modified in light of the knowledge gained over the past few years. In addition, while the contractor’s monthly reports were modified to provide more detailed expenditure data, the lack of integrated and comprehensive data across the various tasks makes it more difficult for the advisory board to track the progress of the overall review or make adjustments to funding or deliverables across tasks. Finally, without a system to track the actions taken by NIOSH in response to the findings and recommendations of the advisory board and contractor, there is no assurance that any needed improvements are being made. We are making three recommendations to the Secretary of HHS.
To assist the advisory board in meeting its statutory responsibilities, we recommend that the Secretary of HHS (1) direct the contracting and project officers to develop and share with the advisory board more integrated and comprehensive data on contractor spending levels compared to work completed and (2) consider the need for providing HHS staff to collect and analyze pertinent information that would help the advisory board comprehensively reexamine its long-term plan for assessing the NIOSH site profiles and dose reconstructions. To ensure that the findings and recommendations of the advisory board and the contractor are promptly resolved, we recommend that the Secretary of HHS direct the Director of NIOSH to establish a system to track the actions taken by the agency in response to these findings and recommendations and update the advisory board periodically on the status of such actions. We provided a draft of this report to HHS, the contractor, and all the members of the advisory board for comment. We received comments from HHS, the contractor, and four individual members of the advisory board. The comments from the four members of the board represent the views of these individuals and not an official position of the advisory board. HHS agreed with GAO’s recommendations to provide more integrated and comprehensive data to the advisory board and said that it will consider the need to provide staff to help the advisory board reexamine its overall plan for assessing NIOSH site profiles and dose reconstructions. With regard to the third recommendation, HHS stated that a system is already in place to track actions taken by the agency in response to advisory board recommendations in letters from the board to the Secretary of HHS. HHS added that matrices used in conjunction with the six-step resolution process outline the contractor’s concerns, NIOSH’s response, and the actions to be taken. However, we believe that these matrices do not provide sufficient closure with regard to tracking the actions NIOSH has actually implemented in response to advisory board and contractor findings and recommendations. For example, in some of the matrices, the advisory board has made numerous recommendations that NIOSH perform certain actions to resolve various issues, but there is no system in place to provide assurance that these actions have in fact been taken. Thus, we continue to see a need for this recommendation. Some individual advisory board members who provided comments expressed concerns about our recommendations, although differing in their reasons. One individual board member expressed concern about the recommendations to provide more integrated and comprehensive data to the advisory board or to provide staff to help in reexamining the overall review plan, suggesting that these changes might not be very helpful. We still believe that these recommendations are necessary to ensure that the advisory board has more complete information to better oversee the review as well as a long-term plan for completing the review; hence we did not revise the recommendations. Another individual board member suggested that a system be established to track the advisory board’s recommendations rather than the contractor’s recommendations, since those should be of greater concern.
While GAO believes it is important to track the resolution of the board’s recommendations, it is also important to track the resolution of the contractor’s recommendations, and we therefore revised the wording of our recommendation to reflect this position. HHS, the contractor, and individual advisory board members took issue with statements in the report about the contractor being over budget and behind schedule. While they agreed with GAO’s assessment that the review process got off to a slow start, they thought that the report did not provide sufficient information about the various factors that complicated or led to an expansion of work for the contractor, the revisions to the contractor’s task orders, and the performance of the contractor with respect to the revised task orders. For example, commenters pointed out that in some instances, the contractor had to review a site profile more than once after NIOSH had revised the site profile to include additional information. Commenters added that the contractor’s work also had to shift to accommodate changing priorities. For instance, NIOSH’s increased reliance on using the site profiles to complete dose reconstructions prompted a shift in contractor priorities to devote more time and resources to site profile reviews than originally anticipated. The commenters added that since the task orders were revised, the contractor has been meeting the time frames and budgets specified in the task orders. We therefore revised the report to incorporate additional information on factors that complicated or led to an expansion in the work of the contractor, the revisions that were made to the task orders, and the contractor’s progress in meeting the terms of the revised task orders. HHS, the contractor, and some of the individual members of the advisory board maintained that the advisory board has taken actions to reexamine and adjust its strategy for reviewing site profiles and dose reconstruction cases. For instance, HHS stated that during the advisory board’s meetings in 2005, the board regularly discussed the future of contract activities and altered the review schedule and scope of work several times. For example, the contractor was asked to perform site profile reviews for sites not originally anticipated in order to facilitate the advisory board’s review of related special exposure cohort petitions. Other commenters pointed out the board’s development of a six-step resolution process for use by NIOSH and the contractor to resolve differences on technical issues. We revised the report to more fully reflect actions taken by the advisory board to reexamine and adjust its strategy for the review. We also changed the report title to reflect changes made in the report in this regard. However, we continue to see a need for the advisory board to build on its actions by comprehensively reexamining whether its original long-term plan for the overall project is still appropriate. Several individual advisory board members commented that they remain concerned about the independence of the board and its contractor. Although acknowledging that replacement of the original officials appointed as the designated federal officer and project officer has helped reduce possible challenges to independence, the members pointed out that NIOSH officials remain involved in managing the contract and could still potentially influence the work of the contractor.
These individual board members also emphasized that the board has no independent budgetary authority and that it relies on NIOSH to obtain funding. Our review suggests that the contractor has been able to demonstrate its independence during the review. For instance, our report notes that the contractor’s reports have criticized numerous aspects of NIOSH site profiles and dose reconstructions. Further, contractor officials told us that they believe relations with NIOSH are thoroughly professional, and board members told us that they are satisfied with the information provided by the contractor. We acknowledge that the potential for impairment of the contractor’s efforts remains. In fact, our draft report concluded that there is a need for continued diligence in avoiding actual or perceived conflicts of roles as new candidates are considered for certain positions over the life of the advisory board. We have further highlighted this point in the final report. HHS’s comments are provided in appendix II, and the contractor’s comments are provided in appendix III. HHS, the contractor, and individual board members also provided technical comments, which we have incorporated as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the issue date. At that time, we will send copies of this report to the Secretary of Health and Human Services, interested congressional committees, and other interested parties. We are also sending copies to the Chair and members of the advisory board. We will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-7215. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. For the last several decades, the Department of Energy and its predecessor agencies and contractors have employed thousands of individuals in secret and dangerous work in the atomic weapons industry. The Energy Employees Occupational Illness Compensation Program Act (EEOICPA) of 2000 compensates individuals who have developed cancer or other specified diseases related to on-the-job exposure to radiation and other hazards at these work sites. Under Subtitle B, determining a claimant’s eligibility for compensation involves developing estimates of the likely radiation levels a worker was exposed to based on information such as exposure records. These estimates are referred to as “dose reconstructions” and are developed by the National Institute for Occupational Safety and Health (NIOSH) under the Department of Health and Human Services (HHS). NIOSH also compiles information in “site profiles” about the radiation protection practices and hazardous materials used at various plants and facilities, which assist NIOSH in completing the dose reconstructions. Employees at certain facilities were designated under the law as members of a “special exposure cohort” because it was believed that exposure records were insufficient and the reasonable likelihood was that the workers’ radiation exposure caused their cancers. Their claims are paid without completing exposure estimates. The law also allows the Secretary of HHS to designate additional groups of employees as members of the special exposure cohort.
For quality control and to raise public confidence in the fairness of the claims process, EEOICPA created a citizen’s advisory board of scientists, physicians, and employee representatives—the President’s Advisory Board on Radiation and Worker Health (advisory board). Members of the board serve part-time, and the board has limited staff support. The advisory board is tasked with reviewing the scientific validity and quality of NIOSH’s dose reconstructions and advising the Secretary of HHS. The board has the flexibility to determine the scope and methodology for this review. In addition, the advisory board is tasked with reviewing NIOSH’s evaluation of petitions for special exposure cohort status and recommending whether such status should be granted. To facilitate the advisory board’s review, HHS awarded a 5-year, $3-million contract to Sanford Cohen & Associates (SC&A) in October 2003 to examine a sample of dose reconstructions and particular site profiles and to perform a variety of other tasks. Separately, NIOSH awarded a contract to Oak Ridge Associated Universities to assist NIOSH in developing site profiles and in performing dose reconstructions. Originally, about $70 million was allocated to that contract, but this figure had increased to over $200 million by 2004. We focused our work on three questions: (1) Are the roles of key federal officials involved in the review of NIOSH’s dose reconstructions sufficiently independent to assure the objectivity of the review? (2) Have the agency’s management controls and the advisory board’s oversight been sufficient to ensure that the contract to review site profiles and dose reconstructions is adequately carried out? (3) Is the advisory board using the contractor’s expertise in reviewing special exposure cohort petitions? To address these questions, we reviewed documentation on the roles of key officials and interviewed these officials to document their roles. We used the broad principles specified in various criteria, including those specified in the Federal Acquisition Regulation and Government Auditing Standards, to assess the independence of key officials’ roles. We analyzed the contract provisions, including the specific task orders and monthly progress reports as well as the actions taken by officials to manage the contract. We assessed whether the management controls were adequate, considering criteria such as the Federal Acquisition Regulation. We also reviewed advisory board meeting minutes, interviewed key officials, and attended advisory board meetings to determine the process the advisory board has used and plans to use to evaluate petitions. The scope of our work did not include examining the contract NIOSH awarded to Oak Ridge Associated Universities. We conducted this review from March 2005 through November 2005 in accordance with generally accepted government auditing standards. The roles of certain key federal officials initially involved in the review of dose reconstructions may not have been sufficiently independent, and these officials were replaced. However, continued diligence by HHS is required to prevent such problems from recurring as new candidates are considered for these roles. The progress of the contracted review of site profiles and dose reconstructions has been hindered, largely by the complexity of the work. Some adjustments have been made, but further improvements could be made to the oversight and planning of the review. The advisory board is using the contractor’s work in reviewing special exposure cohort petitions and has acknowledged the need to review the petitions in a timely manner.
[An organization chart in the original briefing depicts the parties involved in the review: HHS; the Centers for Disease Control and Prevention (CDC) and its Procurement and Grants Office (PGO); NIOSH and its Office of Compensation Analysis and Support (OCAS); the designated federal officer and project officer (positions held by different officials prior to and since December 2004); the advisory board; and the contractor, SC&A.] HHS, under authority delegated by the President, is tasked by executive order with providing administrative services, funds, facilities, staff, and other necessary support services to assist the advisory board in carrying out its responsibilities. CDC, NIOSH’s parent agency, awarded the contract on behalf of the advisory board. A CDC Procurement and Grants Office (PGO) official serves as the contracting officer. The contracting officer is responsible for administering and providing management of the contract on the advisory board’s behalf. This includes reviewing the monthly progress reports and paying the contractor for its approved costs. NIOSH is responsible for preparing the site profiles and completing the dose reconstructions. NIOSH officials serve as the project officer for the contract and the designated federal officer for the advisory board. The project officer is responsible for reviewing the monthly progress reports and monitoring the technical performance of the contractor. The designated federal officer schedules and attends meetings of the advisory board. The advisory board is tasked to (1) review the scientific validity and quality of NIOSH’s dose reconstructions and (2) review NIOSH’s evaluation of special exposure cohort petitions and recommend whether such status should be granted. The board operates under Federal Advisory Committee Act (FACA) requirements, such as conducting its meetings in public. Under contract, SC&A assists the advisory board in meeting its statutory responsibilities by reviewing a sample of dose reconstructions and their associated site profiles and providing assistance with special exposure cohort petitions. The contractor provides monthly progress reviews to the contracting officer, project officer, and advisory board. Its task orders include: Task 1, review selected NIOSH-developed site profiles; Task 2, develop an automated system to track NIOSH dose reconstruction cases; Task 3, review NIOSH dose reconstruction procedures; and Task 4, review a sample of NIOSH dose reconstruction cases. The contractor also reviews NIOSH’s special exposure cohort petition procedures and individual petitions and provides administrative (logistical) support to the advisory board (monthly progress reports, attendance at advisory board meetings, etc.). The contracting officer is a CDC employee whose organization is independent of the NIOSH program under review. In 2003-2004, the project officer also served as a NIOSH program manager of the program under review. In December 2004, a senior NIOSH official, who does not have responsibilities for the program under review, took over this role. In 2002-2004, the designated federal officer also served as the NIOSH director of the program under review. In December 2004, a senior NIOSH official, who does not have responsibilities for the program, took over this role. Other steps, such as approving the initial statement of work for the contractor and the independent government cost estimate for the contract, helped facilitate the independence of the contractor’s work. Data available to monitor contractor expenditures did not adequately capture overall project performance in the initial months. More detailed expenditure data were subsequently provided to facilitate monitoring, but developing more comprehensive data would be useful. While the advisory board has made various adjustments to the contractor’s task orders and work processes after the contractor encountered initial difficulties, the board has not comprehensively reexamined its long-term plan for the project.
Additionally, NIOSH lacks a process for documenting actions it has taken in response to the contractor’s findings that are reported to the advisory board and the advisory board’s recommendations to HHS. The contractor’s expenditure levels were not adequately monitored in the initial months. Although the contractor’s reports indicated that costs were higher than anticipated, the project officer was caught by surprise in October 2004 when the contractor announced a need for work stoppage because expenditures on a specific task order had approached budget ceilings. The contracting officer noted that during this period the contractor’s reports did not reflect the actual percent of work completed, making it very difficult to identify the actual cost of performance. Work was suspended on the site profile review task and a smaller task for several days in November until additional funds were authorized. Separate monthly progress reports are submitted for each task order. However, there is no single comprehensive report on overall contract performance, which could facilitate tracking the progress of the overall review or making strategic adjustments where needed. Initial task orders called for the contractor to complete 12 to 16 site profile reviews by February 2005 for $426,000 and 60 dose reconstruction reviews by August 2004 for $467,000. These tasks cost more or took longer to complete than originally estimated. At the end of January 2005, the contractor had completed 2 site profile reviews and partially completed 2 others while spending $481,000. The contractor completed the first 60 dose reconstruction reviews by September 2005 while spending about $1.0 million. (According to SC&A, the cost increase consisted of costs related to overall contract management, not to increased dose reconstruction review costs.) Overall, in the first 2 years, the contractor spent almost 90 percent of the $3 million allocated for a 5-year undertaking. Both the contractor and NIOSH officials involved in the review reported that reviews of site profiles and dose reconstructions have proven considerably more complex than originally anticipated; thus the original cost estimates for the project (based on very limited information and experience) were not realistic. The contractor also encountered initial delays in obtaining information. The contractor's progress was initially hindered by substantial delays it encountered in obtaining necessary security clearances and access from NIOSH to various technical documents. These early implementation issues have generally been resolved, according to the contractor. The task orders have since been revised, and the contractor has met the revised task order requirements. The contractor’s workload also grew due to NIOSH’s increased reliance on site profiles. Site profiles were originally seen as one of numerous resources to be used in developing dose reconstructions. However, as site profiles became the primary resource used by NIOSH, the advisory board wanted assurance that these site profiles were credible. NIOSH revisions to site profiles required the contractor to complete multiple reviews in some instances. For example, the contractor completed four reviews of the Mallinckrodt site profile as a result of NIOSH’s changes. NIOSH views the site profiles as “living documents” that can be added to as new information is identified or changes need to be made. In addition, as NIOSH worked to complete many of the site profiles within an 18-month time frame, many “loose ends” remained in the site profiles, according to the contractor.
The advisory board also developed a six-step process for NIOSH and the contractor to resolve their differences of views on technical issues. This process expanded the time and resources needed for reviews. Unanticipated site profile reviews (e.g., Iowa Army Ammunition Plant) were needed to facilitate the advisory board’s review of special exposure cohort petitions. These reviews and related products are prepared for use by, or in support of, the advisory board. The advisory board has authorized a new set of contractor reviews for fiscal year 2006: an additional 6 site profile reviews, 60 dose reconstruction case reviews, and 6 special exposure cohort petition reviews. In August 2005, the designated federal officer pointed out that at the current rate of progress, the original plan to review a total of 600 dose reconstructions would require about 10 years to complete. But the advisory board has not comprehensively reexamined its original long-term plan for the project to determine if it needs to be modified, including the total number of site profile reviews needed, the total number of dose reconstruction case reviews needed, and the time frames for completion and funding levels required. The contractor’s reports have criticized numerous aspects of NIOSH’s site profiles and dose reconstructions, such as NIOSH’s failure to consider information provided by site experts in its site profiles and certain assumptions NIOSH used to calculate dose reconstructions. As part of the six-step resolution process, the contractor and NIOSH develop matrices that specify NIOSH’s response and any planned actions for each of the contractor’s findings and recommendations. In some matrices, space is provided for the board to recommend that NIOSH take certain actions to resolve issues. However, there is no system in place to track NIOSH’s implementation of these actions or advisory board recommendations. Procedures for prompt resolution and implementation of audit findings and other reviews should be part of all federal agencies’ internal controls. The advisory board is using the contractor’s work in reviewing special exposure cohort petitions. A recent task order expands the contractor’s role for this facet of the board’s work. A potentially large increase in the board’s petition review workload did not occur because many petitions did not meet initial qualification requirements. The advisory board has acknowledged the need to review the petitions in a timely manner. The board had reviewed eight petitions as of October 2005. For six of these petitions, the contractor reviewed the site profiles (though not the actual petitions associated with the named facilities). For the other two petitions, the advisory board did not request the contractor’s assistance. Under the expanded task order, the contractor will review some of the other submitted petitions and NIOSH’s evaluations of these petitions to recommend to the advisory board whether the petitioning group should be added to the special exposure cohort. The contractor will also develop the procedures for the advisory board to use when reviewing petitions. Many petitions did not meet the initial qualification requirements and thus did not need to be reviewed by the board. As of October 2005, NIOSH had determined that 18 of the submitted petitions did not meet the qualification requirements. Of the other petitions filed as of October 2005, one petition is ready for the advisory board to review, NIOSH is completing its evaluation of four more petitions that will be sent to the board for review, and NIOSH is assessing three other petitions to determine if they meet the qualification requirements. The number of additional petitions that will qualify for evaluation is unknown. While NIOSH is generally required by law to complete its review of a petition within 180 days of the petition’s being qualified, there is no specified time frame for the advisory board’s review of petitions.
Nonetheless, the advisory board has discussed the fact that special exposure cohort petition reviews have required more time and effort to reach a recommended decision than originally estimated and that the advisory board needs to manage its workload in order to reach timely decisions. Because questions arose about the independence of certain federal officials initially performing key roles, actions were taken to replace these officials. Credibility is essential to the work of the advisory board and the contractor. Thus, it is important to continue to be diligent in avoiding actual or perceived conflicts of roles as new candidates are considered for certain positions over the life of the advisory board. With regard to management and oversight of the review of site profiles and dose reconstructions, the advisory board’s review has presented a steep learning curve for the various parties involved. Despite some adjustments, further improvements could be made in reassessing the long-term plan for the project, integrating data on contractor expenditures, and tracking resolution of board and contractor findings and recommendations. Reassessing the long-term plan for the project: The advisory board has made numerous midcourse adjustments to the work of the contractor as operations have matured. It would thus be appropriate for the advisory board to comprehensively reexamine its long-term plan for the overall project to determine whether this plan needs to be modified. Integrating data on contractor expenditures: The contractor’s monthly reports were modified to provide more detailed data for individual tasks on expenditures compared to work completed. However, the lack of integrated and comprehensive data across the various tasks makes it more difficult for the advisory board to track the progress of the overall review or make strategic adjustments to funding or deliverables across tasks. Tracking resolution of findings and recommendations: The advisory board developed a six-step resolution process that uses matrices to track the findings and recommendations of the contractor and board. However, without a system for documenting the actions NIOSH has taken in response, there is no assurance that any needed improvements are being made. Accordingly, we recommend that the Secretary of HHS direct the contracting and project officers to develop and share with the advisory board more integrated and comprehensive data on the contractor’s spending levels compared to work completed and consider the need for providing HHS staff to collect and analyze pertinent information that would help the advisory board comprehensively reexamine its long-term plan for assessing the NIOSH site profiles and dose reconstructions. We also recommend that the Secretary direct the Director of NIOSH to establish a system to track the actions taken by the agency in response to board and contractor findings and recommendations and update the advisory board periodically on the status of such actions. Andy Sherrill, Assistant Director; Margaret Armen, Richard Burkard, Susan Bernstein, Sandra Chefitz, Mary Nugent, and Robert Sampson made significant contributions to this report. Energy Employees Compensation: Many Claims Have Been Processed, but Action Is Needed to Expedite Processing of Claims Requiring Radiation Exposure Estimates. GAO-04-958. Washington, D.C.: Sept. 10, 2004.
Energy Employees Compensation: Even with Needed Improvements in Case Processing, Program Structure May Result in Inconsistent Benefit Outcomes. GAO-04-516. Washington, D.C.: May 28, 2004.
For the last several decades, the Department of Energy and its predecessor agencies and contractors have employed thousands of individuals in secret and dangerous work in the atomic weapons industry. In 2000, Congress enacted the Energy Employees Occupational Illness Compensation Program Act to compensate those individuals who have developed cancer or other specified diseases related to on-the-job exposure to radiation and other hazards at these work sites. Under Subtitle B, determining the eligibility of claimants for compensation is a complex process, involving several federal agencies and a reconstruction of the historical evidence available. The Department of Labor must consider a claimant's case based on records of his or her employment and work activities, which are provided by the Department of Energy. Labor considers the compensability of certain claims by relying on estimates of the likely radiation levels to which particular workers were exposed. These "dose reconstructions" are developed by the National Institute for Occupational Safety and Health (NIOSH) under the Department of Health and Human Services (HHS). NIOSH also compiles information in "site profiles" about the radiation protection practices and hazardous materials used at various plants and facilities, which helps complete the dose reconstructions. Because certain facilities are known to have exposed employees to radiation while keeping few records of individuals' exposure, their employees have been designated under the law as members of a "special exposure cohort," and their claims may be paid without individual dose reconstructions. The law also allows the Secretary of HHS to add additional groups of employees to the special exposure cohort. For quality control and to raise public confidence in the fairness of the claims process, the compensation act also created a citizen's advisory board of scientists, physicians, and employee representatives--the President's Advisory Board on Radiation and Worker Health. Members of the board serve part-time and the board has limited staff support. The advisory board is tasked to review the scientific validity and quality of NIOSH's dose reconstructions and advise the Secretary of HHS. The board has the flexibility to determine the scope and methodology for this review. We assessed how well the advisory board's review and the contracted work with SC&A are proceeding. We focused on three questions: (1) Are the roles of key federal officials involved in the review of NIOSH's dose reconstructions sufficiently independent to assure the objectivity of the review? (2) Have the agency's management controls and the advisory board's oversight been sufficient to ensure that the contract to review site profiles and dose reconstructions is adequately carried out? and (3) Is the advisory board using the contractor's expertise in reviewing special exposure cohort petitions? The roles of certain key federal officials initially involved in the advisory board's review of the dose reconstructions may not have been sufficiently independent and actions were taken to replace these officials. Nonetheless, continued diligence by HHS is required to prevent such problems from recurring as new candidates are considered for these roles. Initially, the project officer assigned responsibility for reviewing the monthly progress reports and monitoring the technical performance of the contractor was also a manager of the NIOSH dose reconstruction program being reviewed. 
In addition, the designated federal officer for the advisory board, who is responsible for scheduling and attending board meetings, was the director of the dose reconstruction program being reviewed. The progress of the contracted review of NIOSH's site profiles and dose reconstructions has been hindered by the complexity of the work. Specifically, in the first 2 years, the contractor spent almost 90 percent of the $3 million that had been allocated to the contract for a 5-year undertaking. Various adjustments have been made in the review approach in light of the identified complexities, which were not initially understood. However, further improvements could be made in the oversight and planning of the review process. With regard to reviewing special exposure cohort petitions, the advisory board has asked for and received the contractor's assistance, expanded its charge, and acknowledged the need for the board to review the petitions in a timely manner. The board has reviewed eight petitions as of October 2005, and the contractor assisted with six of these by reviewing the site profiles associated with the facilities. The contractor will play an expanded role by reviewing some of the other submitted petitions and NIOSH's evaluation of those petitions and recommending to the advisory board whether the petitioning group should be added to the special exposure cohort. The contractor will also develop procedures for the advisory board to use when reviewing petitions. While NIOSH is generally required by law to complete its review of a petition within 180 days of determining that the petition has met certain initial qualification requirements, the advisory board has no specified deadline for its review of petitions. However, the board has discussed the fact that special exposure cohort petition reviews have required more time and effort than originally estimated and that the advisory board needs to manage its workload in order to reach timely decisions.
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business and is especially important for government agencies, where maintaining the public’s trust is essential. While the dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet have enabled agencies such as SEC to better accomplish their missions and provide information to the public, agencies’ reliance on this technology also exposes federal networks and systems to various threats. This can include threats originating from foreign nation states, domestic criminals, hackers, and disgruntled employees. Concerns about these threats are well founded because of the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, and advances in the sophistication and effectiveness of attack technology, among other reasons. Without proper safeguards, systems are vulnerable to individuals and groups with malicious intent who can intrude and use their access to obtain or manipulate sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. We and federal inspectors general have reported on persistent information security weaknesses that place federal agencies at risk of destruction, fraud, or inappropriate disclosure of sensitive information. Accordingly, since 1997, we have designated federal information security as a government-wide high-risk area, and in 2003 expanded this area to include computerized systems supporting the nation’s critical infrastructure. Most recently, in the February 2015 update to our high-risk list, we further expanded this area to include protecting the privacy of personally identifiable information (PII)—that is, personal information that is collected, maintained, and shared by both federal and nonfederal entities. The Federal Information Security Modernization Act (FISMA) of 2014 is intended to provide a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. FISMA requires each agency to develop, document, and implement an agency-wide security program to provide security for the information and systems that support the operations and assets of the agency, including information and information systems provided or managed by another agency, contractor, or other source. Additionally, FISMA assigns responsibility to the National Institute of Standards and Technology (NIST) to provide standards and guidelines to agencies on information security. NIST has issued related standards and guidelines, including Recommended Security Controls for Federal Information Systems and Organizations, NIST Special Publication (NIST SP) 800-53, and Contingency Planning Guide for Federal Information Systems, NIST SP 800-34. To support its financial operations and store the sensitive information it collects, SEC relies extensively on computerized systems interconnected by local and wide-area networks. 
For example, to process and track financial transactions, such as filing fees paid by corporations or disgorgements and penalties from enforcement activities, and for financial reporting, SEC relies on numerous enterprise applications, including the following: Various modules in Delphi-Prism, a federal financial management system provided by the Department of Transportation’s Federal Aviation Administration’s Enterprise Service Center, are used by SEC for financial accounting, analyses, and reporting. Delphi-Prism produces SEC’s financial statements. The Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system performs the automated collection, validation, indexing, acceptance, and forwarding of submissions by companies and others that are required to file certain information with SEC. Its purpose is to accelerate the receipt, acceptance, dissemination, and analysis of time-sensitive corporate information filed with the commission. EDGAR/Fee Momentum, a subsystem of EDGAR, maintains accounting information pertaining to fees received from registrants. End User Computing Spreadsheets and/or User Developed Applications are used by SEC to prepare, analyze, summarize, and report on its financial data. FedInvest invests funds related to disgorgements and penalties. Federal Personnel and Payroll System/Quicktime processes personnel and payroll transactions. The SEC’s general support system provides (1) business application services to internal and external customers and (2) security services necessary to support these applications. Under FISMA, the SEC Chairman has responsibility for, among other things, (1) providing information security protections commensurate with the risk and magnitude of harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the agency’s information systems and information; (2) ensuring that senior agency officials provide information security for the information and information systems that support the operations and assets under their control; and (3) delegating to the agency chief information officer (CIO) the authority to ensure compliance with the requirements imposed on the agency. FISMA further requires the CIO to designate a senior agency information security officer who will carry out the CIO’s information security responsibilities. SEC had implemented and made progress in strengthening information security controls, including implementing access controls, deploying multiple firewalls, establishing monitoring and logging capabilities, and resolving five weaknesses that we had previously identified. However, weaknesses limited the effectiveness of other controls in protecting the confidentiality, integrity, and availability of SEC’s information systems. Specifically, SEC did not consistently control logical and physical access to its network, servers, applications, and databases; manage its configuration settings; segregate duties; or update its contingency plan. These weaknesses existed, in part, because SEC did not effectively implement key elements of its information security program, including keeping up-to-date policies and procedures, completely documenting plans of action and milestones (POA&M) for control weakness remediation, establishing and maintaining configuration settings, and monitoring configuration settings for compliance with standards. Consequently, SEC’s financial information and systems were exposed to increased risk of unauthorized disclosure, modification, and destruction.
A basic management objective for any organization is to protect the resources that support its critical operations and assets from unauthorized access. Organizations accomplish this by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computer resources (e.g., data, programs, equipment, and facilities), thereby protecting them from unauthorized disclosure, modification, and destruction. Specific access controls include boundary protection, identification and authentication of users, authorization restrictions, audit and monitoring capability, configuration management, separation of duties, and physical security. Without adequate access controls, unauthorized individuals, including intruders and former employees, can surreptitiously read and copy sensitive data and make undetected changes or deletions for malicious purposes or for personal gain. In addition, authorized users could intentionally or unintentionally modify or delete data or execute changes that are outside of their span of authority. Although SEC had issued policies and implemented controls based on those policies, it did not consistently protect its network from possible intrusions, identify and authenticate users, authorize access to resources, audit and monitor actions taken on its systems and network, and restrict physical access to sensitive assets. Boundary protection controls (1) logical connectivity into and out of networks and (2) connectivity to and from network-connected devices. Implementing multiple layers of security to protect an information system’s internal and external boundaries provides defense-in-depth. By using a defense-in-depth strategy, entities can reduce the risk of a successful cyber attack. For example, multiple firewalls could be deployed to prevent both outsiders and trusted insiders from gaining unauthorized access to systems. At the host or device level, logical boundaries can be controlled through inbound and outbound filtering provided by access control lists and personal firewalls. At the system level, any connections to the Internet, or to other external and internal networks or information systems, should occur through controlled interfaces. To be effective, remote access controls should be properly implemented in accordance with authorizations that have been granted. SEC deployed multiple firewalls that were intended to prevent unauthorized access to its systems; however, it did not always restrict traffic passing through its firewalls. For example, SEC did not always configure access control lists to restrict potentially insecure traffic or ports on each of the six internal firewalls reviewed, subjecting the hosts to potentially vulnerable services. Also, SEC did not apply host firewall configuration rules on three of four hosts. As a result of these inadequate configurations, SEC introduced vulnerability to potentially unnecessary and undetectable access at multiple points in its network environment. Information systems need to be managed to effectively control user accounts and identify and authenticate users. Users and devices should be appropriately identified and authenticated through the implementation of adequate logical access controls. Users can be authenticated using mechanisms such as a password and smart card combination. SEC policy requires enforcement of minimum password complexity and password expiration. 
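To illustrate how such a requirement can be checked automatically, the following is a minimal Python sketch that flags accounts whose settings do not meet a complexity and expiration baseline. The account records, field names, and threshold values are hypothetical and are not drawn from SEC's actual policy or systems.

    # Hypothetical sketch: flag accounts whose password settings fall short of an
    # assumed minimum-complexity and expiration baseline.
    REQUIRED_MIN_LENGTH = 12      # assumed minimum password length
    MAX_PASSWORD_AGE_DAYS = 90    # assumed maximum allowable password age

    accounts = [
        {"user": "svc_finance", "min_length": 8,  "max_age_days": None},  # never expires
        {"user": "analyst01",   "min_length": 14, "max_age_days": 60},
    ]

    def password_policy_findings(account):
        """Return a list of baseline violations for one exported account record."""
        findings = []
        if account["min_length"] < REQUIRED_MIN_LENGTH:
            findings.append("minimum password length below required baseline")
        if account["max_age_days"] is None or account["max_age_days"] > MAX_PASSWORD_AGE_DAYS:
            findings.append("password expiration disabled or interval too long")
        return findings

    for account in accounts:
        for finding in password_policy_findings(account):
            print(f"{account['user']}: {finding}")

In practice, a check like this would run against settings exported from each platform's account database rather than a hard-coded list.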
In addition, SEC policy requires that multifactor authentication be implemented for network and local access to privileged and non-privileged accounts. However, SEC did not fully implement controls for identifying and authenticating users. For example, it did not always enforce individual accountability, as 20 different users used the same password on multiple servers in the production, development, and testing environments. Also, SEC configured the password for a key financial server to never expire. Additionally, while SEC implemented multifactor authentication for remote access, it did not require multifactor authentication for network or console access managed by the agency’s security group. As a result, SEC is at an increased risk that accounts could be compromised and used by unauthorized individuals to access sensitive financial data. Authorization encompasses access privileges granted to a user, program, or process. It involves allowing or preventing actions by that user based on predefined rules. Authorization includes the principles of legitimate use and least privilege. Access rights and privileges are used to implement security policies that determine what a user can do after being allowed into the system. Maintaining access rights, permissions, and privileges is one of the most important aspects of administering system security. SEC policy states that information system owners shall explicitly authorize access to configuration settings, file permissions, and privileges. SEC policy also states that information systems must prevent non-privileged users from executing privileged functions, including disabling, circumventing, or altering implemented security safeguards or countermeasures. However, SEC did not always ensure that only authorized individuals were granted access to its systems. For example, it did not promptly remove 9 of 66 expired administrator accounts that we reviewed. In addition, SEC did not appropriately set configuration settings, file permissions, and privileged access to sensitive files, such as allowing access by group members who were not explicitly authorized to access these files. As a result, users had excessive levels of access that were not required to perform their jobs. This could allow unauthorized users who had penetrated SEC networks to inadvertently or deliberately modify financial data or other sensitive information. Audit and monitoring involves the regular collection, review, and analysis of auditable events for indications of inappropriate or unusual activity, and the appropriate investigation and reporting of such activity. Automated mechanisms may be used to integrate audit monitoring, analysis, and reporting into an overall process for investigating and responding to suspicious activities. Audit and monitoring controls can help security professionals routinely assess computer security, perform investigations during and after an attack, and recognize an ongoing attack. Audit and monitoring technologies include network- and host-based intrusion detection systems, audit logging, security event correlation tools, and computer forensics. SEC policy states that appropriate audit logs shall be generated at all times for SEC information systems, depending on the security categorization of the system and the level of risk associated with the loss, compromise, or unauthorized disclosure of the data processed or transmitted by the system.
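One way to support such a policy is to compare each host's enabled audit categories against the set the policy expects for the system's categorization. The sketch below is a hypothetical Python illustration; the host names, category names, and required set are assumptions, not SEC's actual configuration.

    # Hypothetical sketch: verify that each host enables a required set of audit
    # categories and report any gaps or host-to-host inconsistencies.
    REQUIRED_CATEGORIES = {"logon_events", "account_management", "policy_change", "privilege_use"}

    host_audit_settings = {
        "app-server-1": {"logon_events", "account_management", "policy_change", "privilege_use"},
        "app-server-2": {"logon_events", "policy_change"},
        "db-server-1":  {"logon_events", "account_management", "privilege_use"},
    }

    for host, enabled in sorted(host_audit_settings.items()):
        missing = REQUIRED_CATEGORIES - enabled
        if missing:
            print(f"{host}: missing audit categories: {', '.join(sorted(missing))}")

    # Flag inconsistency across hosts even when individual hosts differ only in extras.
    distinct_configurations = {frozenset(v) for v in host_audit_settings.values()}
    if len(distinct_configurations) > 1:
        print("audit settings are not consistent across the hosts reviewed")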
However, SEC did not consistently enable audit log configuration settings to capture key security activities on the server hosts we reviewed. For example, audit log policy settings were not configured consistently across the four server hosts reviewed. As a result, SEC was not able to monitor key activities on some of the server hosts and thus may not be able to detect or investigate unauthorized system activity. Physical security controls restrict physical access to computer resources and protect them from intentional or unintentional loss or impairment. Adequate physical security controls over computer facilities and resources should be established that are commensurate with the risks of physical damage or access. Physical security controls over the overall facility and areas housing sensitive information technology components include, among other things, policies and practices for granting and discontinuing access authorizations; controlling badges, ID cards, smartcards, and other entry devices; controlling entry during and after normal business hours; and controlling the entry and removal of computer resources (such as equipment and storage media) from the facility. SEC instituted physical security controls that included badge swipe readers to enter the building or use the elevators, alarm systems that would sound if exterior doors were propped open for extended periods of time, and additional check points to restrict access to areas housing the EDGAR working space. However, the effectiveness of its physical security was reduced by weaknesses identified. For example, SEC’s facilities service provider did not monitor the perimeter of the contingency site on a real-time basis. In addition, SEC did not adequately secure the server storage area at its contingency site. SEC also did not periodically conduct a physical inventory of employee badges. The insufficient physical access controls over the commission’s information systems place sensitive information and assets at greater risk of unauthorized access. Configuration management involves the identification and management of security features for all hardware, software, and firmware components of an information system at a given point and systematically controlling changes to that configuration during the system’s life cycle. FISMA requires each federal agency to have policies and procedures that ensure compliance with minimally acceptable system configuration requirements. Systems with secure configurations have less vulnerability and are better able to thwart network attacks. Also, effective configuration management provides reasonable assurance that systems are configured and operating securely and as intended. SEC policy states that the agency should maintain proper system configuration in compliance with official SEC baselines. SEC did not maintain and monitor official configuration baselines for some of the platforms used to host financially significant systems and the general support system that we reviewed. Consequently, increased risk exists that systems could be exposed to vulnerabilities that could be exploited by attackers seeking to gain unauthorized access. To reduce the risk of error or fraud, duties and responsibilities for authorizing, processing, recording, and reviewing transactions should be separated to ensure that one individual does not control all critical stages of a process. Effective segregation of duties starts with effective entity-wide policies and procedures that are implemented at the system and application levels.
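The kind of system-level check this implies can be sketched simply: given a mapping of users to duties, flag anyone holding more than one of the incompatible duties. The user names and role assignments below are hypothetical, offered only to illustrate the control, not to describe SEC's access model.

    # Hypothetical sketch: flag users assigned more than one incompatible duty
    # (authorizing, processing, recording, reviewing) for the same process.
    INCOMPATIBLE_DUTIES = {"authorize", "process", "record", "review"}

    user_duties = {
        "alice": {"authorize"},
        "bob":   {"process", "record"},          # controls two critical stages
        "carol": {"review"},
    }

    for user, duties in sorted(user_duties.items()):
        held = duties & INCOMPATIBLE_DUTIES
        if len(held) > 1:
            print(f"{user}: incompatible duties held together: {', '.join(sorted(held))}")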
Often, segregation of incompatible duties is achieved by dividing responsibilities among two or more organizational groups, which diminishes the likelihood that errors and wrongful acts will go undetected because the activities of one individual or group will serve as a check on the activities of the other. Inadequate segregation of duties increases the risk that erroneous or fraudulent transactions could be processed, improper program changes implemented, and computer resources damaged or destroyed. SEC policy states that information system owners must separate duties of individuals as necessary to provide appropriate management and security oversight and define information system access authorizations to support the separation of duties. However, SEC did not appropriately separate incompatible access to three computing environments for 20 individuals. SEC assigned these individuals multiple user accounts that gave them access to the production, disaster recovery, and test/development environments. SEC officials stated that they had implemented the principles of separation of duties and accepted the risk for those individuals who required access to multiple environments. However, SEC had not documented management’s acceptance of this risk. Thus, an increased risk exists that unauthorized individuals from the disaster recovery environment and test/development environment could gain access to processes and data in the production environment, potentially impacting the integrity of the financial data. Losing the capability to process, retrieve, and protect electronically maintained information can significantly affect an agency’s ability to accomplish its mission. If contingency and disaster recovery plans are inadequate, even relatively minor interruptions can result in lost or incorrectly processed data, which can cause financial losses, expensive recovery efforts, and inaccurate or incomplete information. Given these severe implications, it is important that an entity have in place (1) up-to-date procedures for protecting information resources and minimizing the risk of unplanned interruption; (2) a plan to recover critical operations should interruptions occur that considers the activities performed at general support facilities, including data processing centers and telecommunication facilities; and (3) redundancy in critical systems. SEC policy states that the agency should provide for the recovery and reconstitution of the information system to a known state after a disruption, compromise, or failure. This includes establishing an alternate processing site that can operate as the network operation center and permits the transfer and resumption of essential business functions within 12 hours when the primary processing capabilities are unavailable. In addition, SEC policy states that the contingency plan should be reviewed at least annually and updated to address (1) changes to the Commission, information system, or environment of operation and (2) problems encountered during contingency plan implementation, execution, or testing. Although SEC had developed contingency and disaster recovery plans and implemented controls for this planning, its plans were not complete or up to date. Specifically, SEC did not maintain a sufficiently prepared alternate network operations center in the event of a disaster. Also, SEC did not consistently review and update contingency planning documents.
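The annual-review requirement lends itself to a simple currency check over plan metadata, as in the hypothetical Python sketch below; the plan names and review dates are illustrative only.

    # Hypothetical sketch: flag contingency planning documents whose last review
    # is more than a year old, per an assumed annual-review requirement.
    from datetime import date, timedelta

    REVIEW_INTERVAL = timedelta(days=365)
    AS_OF = date(2015, 9, 30)   # assumed "as of" date for the check

    plans = [
        {"name": "Network operations center contingency plan", "last_reviewed": date(2013, 6, 1)},
        {"name": "Key financial system disaster recovery plan", "last_reviewed": date(2015, 2, 15)},
    ]

    for plan in plans:
        if AS_OF - plan["last_reviewed"] > REVIEW_INTERVAL:
            print(f"overdue for annual review: {plan['name']} (last reviewed {plan['last_reviewed']})")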
Consequently, SEC had limited ability to monitor the health of its network in the event of a failure at its primary data center. The information security weaknesses existed in the SEC computing environment, in part, because SEC had not fully implemented key elements of its agency-wide information security program. Specifically, it did not always (1) review and update its policies in a timely manner, (2) completely document plans of action and milestones items, (3) document its physical inventory, and (4) fully implement and effectively manage its continuous monitoring program. Security control policies and procedures should be documented and approved by management. According to FISMA, each federal agency information security program must include policies and procedures that are based on risk assessments that cost-effectively reduce information security risks to an acceptable level, and ensure that information security is addressed throughout the life cycle of each agency information system. SEC policy states that the agency should review and update policy and procedures annually. SEC did not always review and update its information technology policies and guidance in a timely manner. Specifically, SEC had not reviewed and updated the 10 information technology policies that we examined for periods ranging from 4 to 8 years. In addition, SEC did not review implementing policies for its User Access Program, and one of the three policies we reviewed was dated 2007. Without appropriate review to ensure up-to-date policies and procedures, increased risk exists that information technology operations would not be in step with current leading security practices or reflect SEC’s current operating environment. When weaknesses are identified, the related risks should be reassessed, appropriate corrective or remediation actions taken, and follow-up monitoring performed to make certain that corrective actions are effective. FISMA specifically requires that agency-wide information security programs include a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in the information security policies, procedures, and practices of the agency. SEC policy states that a plan of action and milestones (POA&M) will be developed to plan, track, and manage the remedial actions required to address identified information security deficiencies. POA&Ms are based on the findings from security control assessments, security impact analyses, continuous monitoring activities, and other reported deficiencies, including but not limited to Office of Inspector General and GAO engagements. Further, SEC policy states that, at a minimum, each POA&M must include the following for each information security deficiency: tasks planned to correct the deficiency and to address any residual risk, resources required to accomplish the planned tasks, responsible organizations for implementing the mitigation, any milestones to meet the tasks, scheduled completion dates for each milestone, and the status of corrective action activity. SEC did not completely document POA&M items. While SEC had made progress in documenting POA&Ms in its repository, the following artifacts supporting closure were not adequately documented in 20 of 20 plans reviewed: tasks planned to correct the weakness and to address any residual risk, milestones in meeting the tasks with the scheduled completion dates, and the status of corrective action activity.
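Because the policy enumerates the elements each POA&M must contain, completeness can be checked mechanically. The following minimal Python sketch validates hypothetical POA&M records against those elements; the field names are assumptions chosen for illustration rather than SEC's repository schema.

    # Hypothetical sketch: report POA&M records that are missing any of the
    # elements the policy requires for each information security deficiency.
    REQUIRED_FIELDS = (
        "planned_tasks",             # tasks planned to correct the deficiency and residual risk
        "required_resources",        # resources required to accomplish the planned tasks
        "responsible_organization",  # organization responsible for implementing the mitigation
        "milestones",                # milestones to meet the tasks
        "completion_dates",          # scheduled completion dates for each milestone
        "status",                    # status of corrective action activity
    )

    poam_records = [
        {"id": "POAM-001", "planned_tasks": "Patch servers", "required_resources": "2 FTEs",
         "responsible_organization": "OIT", "milestones": "Q2 patching",
         "completion_dates": "2016-03-31", "status": "In progress"},
        {"id": "POAM-002", "planned_tasks": "", "responsible_organization": "OIT", "status": "Closed"},
    ]

    for record in poam_records:
        missing = [field for field in REQUIRED_FIELDS if not record.get(field)]
        if missing:
            print(f"{record['id']}: missing or empty fields: {', '.join(missing)}")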
Without adequate documentation to support POA&M progress, it would be difficult to determine whether the weakness is properly remedied. Configuration management involves the identification and management of security features for all hardware, software, and firmware components of an information system at a given point and systematically controlling changes to that configuration during the system’s life cycle. SEC policy states that the agency should develop, document, and maintain a current baseline configuration for information systems and, for moderate risk systems, review and update baseline configurations at least annually due to patches and common vulnerability enumeration announcements, and as an integral part of information system component installations and upgrades. The policy also states that information system owners of the general support system and major applications should be responsible for developing, documenting, and maintaining an inventory of information system components that accurately reflects the current information system, includes all components within the authorization boundary of the information system, maintains a sufficient level of granularity for tracking and reporting, includes information deemed necessary to achieve effective property accountability, and is reviewed and updated as part of the system security plan update. While SEC had a well-documented and up-to-date system security plan for a key financial system that included accurately identified program changes and version numbers, it did not document a comprehensive physical inventory of the systems and applications in its production environments. Specifically, SEC did not document, for each system or application, the purpose, host names, operating system version, database version, and location of the system or application in the inventory. In addition, SEC did not adequately review and update current configuration baseline settings documentation for the operating systems. The baseline documentation, including that for the operating systems, was last reviewed and approved by SEC management in fiscal year 2012. Without maintaining an accurate inventory of systems and applications in production and conducting annual reviews of configuration baselines, SEC may not be able to obtain the current status of its systems and applications, and the agency would not be able to identify unauthorized actions performed against the baseline. An important element of risk management is ensuring that policies and controls intended to reduce risk are effective on an ongoing basis. To do this effectively, top management should understand the agency’s security risks and actively support and monitor the effectiveness of its security policies. SEC policy states that the agency shall develop a continuous monitoring strategy and implement a continuous monitoring program that includes establishment of system-dependent monthly automated scans for monitoring and reviews at least every other month, ongoing security control assessments, and correlation and analysis of security-related information generated by assessments and monitoring. SEC invested in multiple tools with the ability to conduct compliance monitoring for its information systems. However, the agency had not developed a process, including the use of vulnerability scanners, to monitor the configuration of components of a key financial system and evaluate host compliance with SEC policy.
For example: While scans were run to detect vulnerabilities on SEC systems identified in databases of common vulnerabilities, the resulting reports were not sent to database personnel for them to take appropriate actions. Personnel for a key financial system were not granted access to the database scanning tool. SEC had not instituted processes to review the information produced by the vulnerability scanning tools, including necessary personnel and processes for conducting analysis. Without implementing an effective process for monitoring, evaluating, and remedying identified weaknesses, SEC would not be aware of potential weaknesses that could affect the confidentiality, integrity, and availability of its information systems. SEC resolved 5 of the 20 previously reported information security control deficiencies in the areas of access controls, audit and monitoring, and separation of duties that remained unresolved as of September 30, 2014. In particular, SEC resolved 2 weaknesses important to improving its information security by separating the user production network from the internal management network and storing all critical system logs in a centralized location for a key financial system. While SEC had made progress in addressing the remaining 15 of 20 previously reported weaknesses, these weaknesses still existed as of September 30, 2015. These 15 remaining weaknesses encompassed SEC’s financial and general support systems. While SEC had improved its information security by addressing previously identified weaknesses, the information security control weaknesses that continued to exist in its computing environment may jeopardize the confidentiality, integrity, and availability of information residing in and processed by its systems. Specifically, the lack of adequate separation among SEC users in different computing environments increases the risk that users could gain unrestricted access to critical hardware or software and intentionally or inadvertently access, alter, or delete sensitive data or computer programs. Weaknesses in SEC’s controls over access control, configuration management, segregation of duties, physical security, and contingency and disaster recovery planning exist in part because SEC did not fully implement its information security program. In particular, SEC did not always review and update its policies in a timely manner, completely document POA&M items and physical inventory, and fully implement and effectively manage its continuous monitoring program. While SEC had no material weaknesses or significant deficiencies over financial reporting, the weaknesses identified could decrease the reliability of the data processed by key financial systems, which the commission relies on to communicate its financial position to Congress and the public. We recommend that the Chair direct the Chief Information Officer to take six actions to more effectively manage the agency’s information security program: Review and appropriately update information technology policies and guidance consistent with SEC policy. Document artifacts that support recommendation closure consistent with SEC policy. Document a comprehensive physical inventory of the systems and applications in the production environment. Review and update current configuration baseline settings for the operating systems. Provide personnel appropriate access to continuous monitoring reports and tools to monitor, evaluate, and remedy identified weaknesses.
- Institute a process and assign the necessary personnel to review information produced by the vulnerability scanning tools to monitor, evaluate, and remedy identified weaknesses.

In a separate report with limited distribution, we are also making 30 recommendations to address newly identified control weaknesses related to access controls, configuration management, segregation of duties, physical security, and contingency and disaster recovery plans.

We provided a draft of this report to SEC for its review and comment. In written comments signed by the Chief Information Officer (reproduced in app. II), SEC concurred with the six recommendations addressing its information security program. SEC also stated that the commission had taken action to address one recommendation and described actions to address the other five.

This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. § 720 to submit a written statement on the actions taken on the recommendations. The statement must be submitted to the Senate Committee on Homeland Security and Governmental Affairs and the House Committee on Oversight and Government Reform not later than 60 days from the date of this report. A written statement must also be sent to the House and Senate Committees on Appropriations with your agency's first request for appropriations made more than 60 days after the date of this report. Because agency personnel serve as the primary source of information on the status of open recommendations, we request that the commission also provide us with a copy of its statement of action to serve as preliminary information on the status of open recommendations.

We are also sending copies of this report to interested congressional parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. We acknowledge and appreciate the cooperation and assistance provided by SEC management and staff during our audit. If you have any questions about this report or need assistance in addressing these issues, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected] or Nabajyoti Barkakati at (202) 512-4499 or [email protected]. GAO staff who made significant contributions to this report are listed in appendix III.

Our objective was to determine the effectiveness of the Securities and Exchange Commission's (SEC) information security controls for ensuring the confidentiality, integrity, and availability of its key financial systems and information. To assess information systems controls, we identified and reviewed SEC information systems control policies and procedures, conducted tests of controls, and held interviews with key security representatives and management officials concerning whether information security controls were in place, adequately designed, and operating effectively. This work was performed to support our opinion on SEC's internal control over financial reporting as of September 30, 2015. We evaluated controls based on our Federal Information System Controls Audit Manual (FISCAM), which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information; National Institute of Standards and Technology standards and special publications; and SEC's plans, policies, and standards.
We assessed the effectiveness of both general and application controls by performing information system controls walkthroughs surrounding the initiation, authorization, processing, recording, and reporting of financial data (via interviews, inquiries, observations, and inspections); reviewing system security risk assessment and authorization documents; reviewing SEC policies and procedures; observing technical controls implemented on selected systems; testing specific controls; and scanning and manually assessing SEC systems and applications, including financial systems and related general support system network devices (firewalls, switches, and routers), servers, and systems. We also evaluated the Statement on Standards for Attestation Engagements report and performed testing on key information technology controls on the following applications and systems: Delphi-Prism, FedInvest, EDGAR/Fee Momentum, and Federal Personnel and Payroll System/Quicktime. We selected which systems to evaluate based on a consideration of financial systems and service providers integral to SEC's financial statements.

To determine the status of SEC's actions to correct or mitigate previously reported information security weaknesses, we identified and reviewed its information security policies, procedures, practices, and guidance. We reviewed prior GAO reports to identify previously reported weaknesses and examined the commission's corrective action plans to determine which weaknesses it had reported were corrected. For those instances where SEC reported that it had completed corrective actions, we assessed the effectiveness of those actions by reviewing appropriate documents, including SEC-documented corrective actions, and interviewing the appropriate staff.

To assess the reliability of the data we analyzed, such as information system control settings, security assessment and authorization documents, and security policies and procedures, we corroborated them by interviewing SEC officials and programmatic personnel to determine whether the data obtained were consistent with the system configurations in place at the time of our review. In addition, we observed the configuration of these settings on the network. Based on this assessment, we determined the data were reliable for the purposes of this report.

We performed our work in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provided a reasonable basis for our findings and conclusions based on our audit objective.

In addition to the contacts named above, GAO staff who made major contributions to this report are Michael Gilmore, Hal Lewis, and Duc Ngo (Assistant Directors), Angela Bell, Lee McCracken, and Henry Sutanto.
The SEC is responsible for enforcing securities laws, issuing rules and regulations that provide protection for investors, and helping to ensure that the securities markets are fair and honest. In carrying out its mission, the SEC relies on computerized information systems to collect, process, and store sensitive information, including financial data. Having effective information security controls in place is essential to protecting these systems and the information they contain. This report details weaknesses GAO identified in the information security program at SEC during its audit of the commission's fiscal years 2015 and 2014 financial statements.

GAO's objective was to determine the effectiveness of information security controls for protecting the confidentiality, integrity, and availability of SEC's key financial systems and information. To do this, GAO examined information security policies, plans, and procedures; tested controls over key financial applications; interviewed agency officials; and assessed corrective actions taken to address previously reported weaknesses.

The Securities and Exchange Commission (SEC) improved its information security by addressing weaknesses previously identified by GAO, including separating the user production network from the internal management network. However, weaknesses continue to limit the effectiveness of other security controls. In particular:

- While SEC had issued policies and implemented controls based on those policies, it did not consistently protect access to its systems. Organizations should design and implement controls to prevent, limit, and detect unauthorized access to computer resources. The commission did not consistently protect its network from possible intrusions, identify and authenticate users, authorize access to resources, audit and monitor actions taken on its systems and network, and restrict physical access to sensitive assets.
- The commission did not consistently manage the configuration of its systems. Configuration management includes ensuring that hardware and software are configured with appropriate security features and that changes are systematically controlled. However, SEC did not maintain and monitor official configuration baselines for its financial systems and general support system.
- The commission did not always appropriately separate incompatible duties. Separation of duties involves dividing responsibilities so that a single individual does not control all critical stages of a process. However, SEC did not adequately separate duties among its three computing environments.
- While SEC had developed contingency and disaster recovery plans for its information systems, those plans were not fully reviewed, completed, or up-to-date. Contingency and disaster recovery planning are essential to resuming operations in the event of a disruption or disaster.

These weaknesses existed in part because SEC had not fully implemented an organization-wide information security program, as called for by federal law and guidance. In particular, the commission had not (1) consistently reviewed and updated its information security policies in a timely manner, (2) completely documented plans of action to address weaknesses, (3) documented a physical inventory of its systems and applications, and (4) fully implemented a program to continuously monitor the security of its systems and networks.
Finally, of 20 weaknesses previously identified by GAO that remained unresolved as of September 30, 2014, SEC had resolved 5 and made progress in addressing the other 15 as of September 30, 2015. Two of the resolved weaknesses were particularly important to improving SEC's security. Collectively, these weaknesses increase the risk that SEC's systems could be compromised, jeopardizing the confidentiality, integrity, and availability of sensitive financial information. While not constituting material weaknesses or significant deficiencies, they warrant SEC management's attention.

In addition to the 15 prior recommendations that have not been fully implemented, GAO is recommending that SEC take 6 additional actions to more fully implement its information security program. In a separate report with limited distribution, GAO recommended SEC take 30 actions to address newly identified control weaknesses. SEC concurred with GAO's recommendations.
SNAP is the largest of the 15 domestic food and nutrition assistance programs overseen by USDA's Food and Nutrition Service (FNS). FNS jointly administers SNAP with the states. FNS pays the full cost of SNAP benefits and pays approximately half of states' administrative costs. FNS is also responsible for promulgating program regulations and ensuring that state officials administer the program in compliance with program rules. States administer the program by determining whether households meet the program's eligibility requirements, calculating monthly benefits for qualified households, and issuing benefits to participants.

As shown in figures 1 and 2, SNAP participation and costs generally increased between fiscal years 2001 and 2011, though the most significant increases began in fiscal year 2008. According to FNS, the growth in SNAP participation in recent years is likely attributable to the economic recession, outreach efforts, and modifications to program policy. Because households must be low-income to receive SNAP benefits, participation and costs typically increase during economic downturns as more people become eligible and apply. Although the recent recession officially lasted from December 2007 through June 2009, unemployment has since remained above average levels and SNAP participation has continued to grow. Further, because federal law identifies SNAP's main purpose as "raising levels of nutrition among low-income households," one of the key performance measures for the program is the rate of participation among eligible households. As a result, for years, FNS has encouraged states to undertake outreach efforts and adopt various modifications to program policy to increase participation among the eligible population and increase program efficiency. Although the participation rate varies by state, ranging from an estimated 53 percent in California to 100 percent in Maine in fiscal year 2009, the national rate has been about 70 percent in recent years.

Under federal law and regulations, eligibility for SNAP is based primarily on a household's income and assets. A household generally includes everyone who lives together and purchases and prepares meals together. To determine a household's eligibility, a caseworker must first determine the household's gross income, which cannot exceed 130 percent of the federal poverty guidelines, and its net income, which cannot exceed 100 percent of the guidelines (or $18,530 annually for a family of three living in the continental United States in fiscal year 2012). Net income is determined by taking into account certain exclusions and deductions, for example, expenses for dependent care, utilities, and housing. In addition, a caseworker must determine a household's assets under various requirements. For example, a household's liquid assets, such as those in a bank account, currently cannot exceed $2,000 or, for households with an elderly or disabled member, $3,250. However, certain assets are not counted for SNAP, such as a home, the surrounding lot, and most retirement plans and educational savings accounts. While there are also federal SNAP provisions that limit the value of vehicles an applicant can own and still be eligible for the program, all states have opted to modify those rules, and most exclude the value of all household vehicles. (See figure 3 for a general depiction of the eligibility determination process under federal SNAP rules.)
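The federal income and asset tests just described can be expressed as a minimal sketch. The figures are those cited in the text for a three-person household in fiscal year 2012; other household sizes use different poverty-guideline amounts, and this illustration does not model categorical eligibility or any state's actual eligibility system.

# Minimal sketch of the federal SNAP eligibility tests for a three-person
# household in the continental United States, using fiscal year 2012 figures
# cited in the text. Categorical eligibility rules are not modeled here.

POVERTY_GUIDELINE_ANNUAL = 18_530            # 100 percent of guidelines, household of 3
GROSS_INCOME_LIMIT = 1.30 * POVERTY_GUIDELINE_ANNUAL / 12   # monthly, 130 percent
NET_INCOME_LIMIT = 1.00 * POVERTY_GUIDELINE_ANNUAL / 12     # monthly, 100 percent
ASSET_LIMIT = 2_000                          # countable liquid assets
ASSET_LIMIT_ELDERLY_OR_DISABLED = 3_250

def federally_eligible(gross_monthly_income, net_monthly_income,
                       countable_assets, elderly_or_disabled_member=False):
    """Apply the federal gross-income, net-income, and asset tests.
    Net income is assumed to already reflect allowable exclusions and
    deductions; countable assets exclude items such as the home and most
    retirement and educational savings accounts."""
    asset_limit = (ASSET_LIMIT_ELDERLY_OR_DISABLED if elderly_or_disabled_member
                   else ASSET_LIMIT)
    return (gross_monthly_income <= GROSS_INCOME_LIMIT
            and net_monthly_income <= NET_INCOME_LIMIT
            and countable_assets <= asset_limit)

# A household with $1,400 gross and $1,100 net monthly income and $500 in
# countable assets passes all three federal tests.
print(federally_eligible(1_400, 1_100, 500))   # True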
Federal law also allows certain households to be deemed categorically eligible for SNAP. Under statute, households receiving monthly cash assistance from certain programs—including TANF, SSI, and state or local general assistance programs—are categorically eligible for SNAP. According to USDA, categorical eligibility can increase program access and reduce administrative burden, as states assess a household's eligibility once for the cash assistance program rather than twice for both the cash assistance program and SNAP. (See figure 4 for a general depiction of the eligibility determination process under traditional cash assistance-related categorical eligibility.)

In response to welfare reforms under the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, USDA advised states that households authorized to receive non-cash services—such as case management services, transportation subsidies, or child care subsidies—from a program funded with TANF dollars could also be deemed categorically eligible. In order for a state to fund a non-cash service with TANF dollars, the service generally must further one of TANF's purposes, which include the promotion of job preparation, work, and marriage, and the reduction of out-of-wedlock births. As set out in SNAP regulations, households in which members are authorized to receive non-cash services primarily funded with TANF are categorically eligible, and states also have the option of extending categorical eligibility to households receiving services that are less than 50 percent TANF-funded, with FNS approval required in certain cases. SNAP regulations also direct that the TANF-funded non-cash services used to confer categorical eligibility be available only to households with incomes equal to or below 200 percent of the federal poverty guidelines.

As a result of this expansion of categorical eligibility, states have adopted a variety of policies to deem households that receive non-cash services from TANF-funded programs eligible for SNAP. FNS separates these types of policies into two groups—broad-based and narrow. According to FNS, BBCE policies make most, if not all, households that apply for SNAP categorically eligible because they receive a TANF-funded non-cash service, such as an informational brochure or toll-free number. (See figure 5 for a general depiction of the eligibility determination process under BBCE.) In contrast, narrow categorical eligibility policies require households to be enrolled in certain TANF-funded programs, such as employment assistance, or to be receiving child care or transportation assistance, in order to be categorically eligible for SNAP.

Although FNS issued guidance in 1999 and regulations in 2000 explaining how states could adopt BBCE policies, relatively few states implemented them early on. Between fiscal years 2001 and 2006, 7 states adopted these policies. However, when the recent economic downturn began and the number of households applying for SNAP began to increase greatly, FNS encouraged states to adopt these policies to streamline eligibility processes and ease workload (see fig. 6). According to FNS, as of May 1, 2012, 43 states—including Washington, D.C., Guam, and the Virgin Islands—had BBCE policies. These policies differ in terms of the income and asset limits used to determine eligibility, as shown in table 1. For example, 24 states' BBCE policies increase the federal gross income limit for SNAP and remove the asset limit, while 2 states' BBCE policies retain the federal gross income limit and increase the federal asset limit.
After eligibility is established, benefits are determined based on each household's monthly net income, with greater benefits provided to those with less income. SNAP expects each eligible household to spend 30 percent of its own resources on food, and therefore each household's monthly SNAP benefit is determined by subtracting 30 percent of its monthly net income from the maximum SNAP benefit for the relevant household size. All eligible one- and two-person households are guaranteed a minimum benefit, which is $16 for households in the continental United States in fiscal year 2012. Households with three or more members do not receive a minimum benefit. Under federal income eligibility limits, a household with three or more members will typically be determined eligible for a SNAP benefit greater than $0. However, because some states' BBCE policies raise the SNAP income limits, under these policies such households are more likely to be deemed eligible for $0 in benefits.

SNAP households are certified eligible for SNAP for periods ranging from 1 to 24 months, which vary based on state policy choices. Generally, the length of the certification period depends on household circumstances, but only households in which all members are elderly or disabled can be certified for up to 24 months under federal regulations. Once the certification period ends, households reapply for benefits, at which time eligibility and benefit levels are redetermined. Between certification periods, households generally must report changes in their circumstances—such as household composition, income, and expenses—that may affect their eligibility or benefit amounts. Since early 2001, states have had the option of requiring households to report changes only when their incomes rise above 130 percent of the federal poverty guidelines, rather than reporting changes at regular intervals or within 10 days of occurrence, as was required in the past. According to FNS, as of November 2010, all states except California and Wyoming use this simplified reporting for some or all SNAP households.

FNS and the states share responsibility for implementing an extensive quality control system used to measure the accuracy of SNAP eligibility and benefits and from which state and national error rates are determined. Under FNS's quality control system, the states calculate their payment errors annually by drawing a statistical sample to determine whether participating households are eligible and received the correct benefit amount. Because SNAP considers many factors in determining each household's benefit amount, any of these factors can result in a payment error. For example, incorrect calculations of earned income or unearned income and inaccurate accounting of the number of household members may cause payment errors. The state's payment error rate is based on the sample and determined by dividing the dollars paid in error by the total SNAP benefits issued. Once the payment error rates are determined, FNS is required to compare each state's performance with the national payment error rate and impose financial penalties or provide financial incentives according to legal specifications.

For more information about our analysis, see appendix I. As previously noted, household eligibility is determined by local staff administering SNAP, and the accuracy of those determinations is assessed by state and federal reviewers. We did not independently determine households' eligibility.
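As a rough illustration of the benefit rule and the payment error rate described above, the sketch below applies the 30 percent rule and the $16 minimum benefit from the text. The maximum-benefit figure used in the example is an assumed placeholder, not an official Thrifty Food Plan amount.

# Minimal sketch of the fiscal year 2012 benefit rule and the payment error
# rate described above. The $367 maximum benefit in the example is an assumed
# placeholder; the 30 percent rule and $16 minimum benefit come from the text.

MINIMUM_BENEFIT = 16    # one- and two-person households, continental United States

def monthly_benefit(net_monthly_income, household_size, max_benefit):
    """Maximum benefit for the household size minus 30 percent of net monthly
    income, floored at the minimum benefit for one- and two-person households."""
    benefit = max(max_benefit - 0.30 * net_monthly_income, 0)
    if household_size <= 2:
        benefit = max(benefit, MINIMUM_BENEFIT)
    return round(benefit)

def payment_error_rate(dollars_paid_in_error, total_benefits_issued):
    """State payment error rate: dollars paid in error (over- and
    underpayments) divided by total SNAP benefits issued."""
    return dollars_paid_in_error / total_benefits_issued

# A two-person household with $1,200 in net monthly income and an assumed
# $367 maximum benefit would receive the $16 minimum benefit, because
# 367 - 0.30 * 1,200 = 7, which is below the floor.
print(monthly_benefit(1_200, 2, max_benefit=367))   # 16
print(f"{payment_error_rate(3.81, 100.0):.2%}")     # 3.81%, for illustration only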
In April 2012, the Congressional Budget Office (CBO) released a report on SNAP that estimates a 4.3 percent annual reduction in SNAP participants over the 2013-2022 period if federal SNAP income and asset limits were applied to all categorically eligible households. However, CBO's methodology for producing its estimates differs from our methodology in several ways. For example, although CBO indicates its estimates reflect changes to BBCE, the Office's estimates include both participants deemed eligible under BBCE policies and those deemed eligible under narrow non-cash categorical eligibility policies. In addition, CBO's estimates include assumptions about the share of these households that exceed federal asset limits.

On average, households eligible under BBCE with incomes over the federal limits had gross incomes of about 150 percent of the federal poverty guidelines, while the federal income limit for SNAP is 130 percent of the guidelines. These households had characteristics that were generally similar to all other SNAP households; however, they were more likely to be working or receiving unemployment benefits (see table 2). About half of these households included a child, as was the case for all other SNAP households, and a similar proportion of each group of households included a single female head of household. While a generally similar proportion of both groups of households were elderly recipients of Social Security benefits, the average monthly amount of Social Security benefits received by households that would have failed the federal income tests was substantially higher. In two of the local offices we visited, staff noted that BBCE may have increased the number of elderly applicants since the policy change enabled some who were previously ineligible because of their Social Security earnings to become eligible for SNAP. Also, although both groups had the same proportion of households with unearned income, a higher percentage of households with incomes above the federal limits had members who worked or who received Unemployment Insurance benefits. Further, the average monthly amount they received in Unemployment Insurance was considerably higher than that received by all other SNAP households.

Available data suggest few households that qualified for SNAP under BBCE likely had assets that would have exceeded federal asset limits. In fiscal year 2010, 37 states had removed the federal asset limit, which was $2,000 for most households, as part of their BBCE policies. Because asset information was therefore not collected from SNAP applicants in these states, USDA's data on SNAP households cannot be used to estimate the number or share of participating households with assets over the federal limits. However, other national data sources suggest the number is relatively small. For example, a national survey that gathered information on families' assets in 2010 found that an estimated 24 percent of families in the bottom income quintile did not have a checking, savings, or other financial transaction account. Among the estimated 76 percent of this group that had such an account, the median balance was an estimated $700. This survey also found that while a greater proportion of families in the second lowest income quintile had such an account in 2010 (91 percent), their median account balance was an estimated $1,500. For the most part, SNAP households deemed eligible under BBCE—households with incomes under 200 percent of the federal poverty guidelines—fall within the two lowest income quintiles.
A 2007 survey of families with children found that those with incomes between 100 and 199 percent of the federal poverty guidelines held median liquid assets of around $300. For those with incomes below 100 percent of the guidelines, the median amount was estimated to be zero. Available state-level data, as well as information shared by state and local officials during our site visits, also suggest the value of assets held by SNAP households is low. For example, according to state officials in Idaho and Michigan, both states initially removed the federal asset limits as part of their BBCE policies but reinstated an asset limit of $5,000 during 2011. Officials indicated that the new limits had a very small impact on overall caseloads. For example, during the 9 months following Idaho's reinstatement, approximately 850 new applicants and existing recipients seeking recertification were denied benefits because their assets exceeded the asset limit. This represented less than a 1 percent reduction in the total number of SNAP households in that state during that period. Similarly, during the month following Michigan's reinstatement of the asset limit, about 1 percent of the state's existing SNAP cases were closed due to assets. Further, during our site visits, caseworkers in all of the offices we visited said they believe the value of assets held by SNAP households is usually very low or $0. Several caseworkers said that while they may have served SNAP applicants that held assets greater than the federal limits, they believe such instances are rare. Many caseworkers noted it is common to hear from applicants that they have exhausted a significant portion of their available assets before applying for SNAP.

While implementation of BBCE by many states has enabled more households to receive SNAP, the nation's recent economic downturn has likely played a larger role in the increases in participation during the past decade. As shown in figure 8, increases and decreases in SNAP participation often coincide with similar changes in unemployment and poverty. A 2002 USDA study found that during past economic recessions, a 1 percentage-point increase in the national unemployment rate has been associated with an increase in the number of SNAP participants of 1 to 3 million. This relationship also existed during the most recent economic recession of 2007-2009, which was marked by a steep rise in the nation's unemployment rate and an increase in the proportion of families living in poverty. Between fiscal years 2007 and 2010, the number of SNAP participants rose by around 14 million (or approximately 54 percent), while the unemployment rate increased by 5 percentage points. This relationship was also noted by staff administering SNAP in all 18 local offices we visited, who cited the economic downturn and related unemployment as the primary cause of the increases in SNAP participation in their localities.

Federal changes to SNAP, as well as those initiated by individual states, and a shift in public perception of the program have also likely contributed to increases in participation during the past decade. For example, the Recovery Act implemented a 13.6 percent increase in maximum monthly SNAP benefits, which likely made participation in the program more attractive to eligible households.
In addition, the simplified reporting option, which most states have implemented since it became available in 2001, has been linked to increased participation, likely because it reduces the administrative burden for SNAP households and lengthens certification periods. Further, USDA expenditures targeted to state and community outreach efforts, as well as relaxed limits on vehicle ownership, have been linked to increased SNAP participation. During our site visits, officials noted that the Recovery Act's suspension of the 3-month time limit for able-bodied adults without dependents also caused a noticeable increase in SNAP participants. In addition, individual states have implemented program changes that may have increased participation, such as taking steps to make it easier to apply for SNAP. For example, staff in most of the states we visited cited implementation of online applications and phone interviews, instead of in-person interviews, as improving access to SNAP and shifting the public's perception of the program. Some local caseworkers noted that being able to apply without going to a public assistance office lowers the stigma associated with receipt of government assistance. These changes may also be encouraging participation among specific age groups, as local caseworkers across several states we visited described an increasing trend of single people aged 22 applying as their own SNAP households. Several studies have examined the impact of various changes on SNAP participation, though it is difficult to measure the precise impact of any single change.

Although SNAP households that had incomes over the federal limits made up an estimated 2.6 percent of the SNAP caseload in fiscal year 2010, this group received an estimated 0.7 percent of all SNAP benefits. These benefits totaled an estimated $38.3 million a month, or approximately $460 million annually. In the group of states that increased the federal SNAP gross income limit with their BBCE policies, benefits provided to households that had incomes over the federal limits were an estimated 1.5 percent of all SNAP benefits (see fig. 9). Due to data limitations, these estimates represent minimums, as they do not include benefits provided to SNAP households deemed eligible under BBCE with assets over the federal SNAP asset limits.

Because SNAP benefits are calculated based on income and expenses, and provide greater benefits to those with fewer means, households with incomes over the federal limits tend to be eligible for fewer benefits. These households received an estimated $81 average monthly SNAP benefit in fiscal year 2010, compared to an estimated $293 average monthly benefit received by all other SNAP households in that year. These households also disproportionately received the minimum benefit of $16: an estimated 44 percent of these households received the minimum benefit compared to 3 percent of all other households. Households eligible solely because of BBCE had higher average deductions in certain categories—including dependent care and child support expenses—than other households in fiscal year 2010 (see table 3), and deductions increase monthly SNAP benefits. However, in general, the higher incomes of households eligible solely because of BBCE seem to have had a greater impact on their SNAP benefits than their deductions, given the relatively low average benefits they received.
Both the cost of total SNAP benefits and the average benefit per household increased over the last decade while many states were implementing BBCE; however, other factors likely had a greater effect on benefit costs (see fig. 10). The annual adjustment made to the Thrifty Food Plan (the basis for the maximum SNAP benefit amounts), as well as changes in the economy, demographics, and policies affecting deductions, outreach, and eligibility, can all affect total spending on SNAP benefits. In recent years, the recession drove increased benefit costs, both by changing household circumstances and by increasing the benefit cost per household. For example, because household benefits are primarily determined based on each household's monthly income, increases in the poverty and unemployment rates likely correlate with increases in the average benefit provided to households. In addition, as previously noted, the Recovery Act implemented a 13.6 percent increase in the maximum monthly SNAP benefit per household. During our site visits, some officials cited these changes as key factors that impacted household benefits in recent years. Officials we spoke to also noted that the slow economic recovery has led to SNAP households remaining on the program for longer time periods than before the recession, which can lead to increases in total benefit costs.

Because many factors impact SNAP benefit costs, the full extent of BBCE's impact is unclear, though evidence suggests other factors played a more important role in recent years. Although BBCE may impact SNAP benefit costs because the policy both expands who is eligible for the program and streamlines the process for receiving benefits, state and local officials we met with consistently indicated that they did not think BBCE had a significant impact on benefits. Further, our analysis of SNAP household data suggests factors beyond BBCE typically have a greater impact on benefits. For example, in our review of SNAP benefits in the group of 17 states that implemented BBCE during fiscal year 2009, we found that the average monthly benefit per household significantly increased in all of these states between fiscal years 2008 and 2010. However, for most of these states, the increases were likely primarily related to the increase in maximum benefits implemented under the Recovery Act, as we found no significant differences in the two factors used to determine benefit amounts—net income and household size—for those years.

Many factors affect SNAP administrative costs, and state BBCE policies are one factor that may help reduce such costs. Studies have shown that factors ranging from a state's economy and demographic characteristics to its SNAP policies, administrative processes, staff salaries, and the use of technology all impact state administrative spending to varying degrees. As we previously reported, because categorical eligibility policies simplify the eligibility determination process by creating consistency in income and resource limits across programs, these policies can save resources, improve productivity, and help staff focus more time on performing essential program activities. During our site visits, staff in many of the local offices we visited stated that, before BBCE was implemented, verifying assets often took a considerable amount of time, and state officials added that it could be costly, as banks sometimes charge SNAP offices a fee to provide account documentation.
As a result, staff in almost all of the local offices we visited said BBCE's removal of the SNAP asset limit helped streamline case processing, and some noted that streamlining occurred both because SNAP households did not have to provide documentation of assets and because caseworkers did not need to verify asset information.

Consistent with annual increases in SNAP participation and benefit costs between fiscal years 2001 and 2010, SNAP administrative costs generally increased annually during this period, though at a lower rate. Certification costs—a sub-set of SNAP administrative costs that includes the cost of staff determination of household eligibility for benefits—also generally increased over this period (see fig. 11). Cost increases in recent years are likely directly related to the $690.5 million in extra federal funding for SNAP administrative costs provided to states through the Recovery Act and the Department of Defense Appropriations Act, 2010, in response to the national economic recession. However, despite this additional federal funding, because administrative costs increased at a lower rate than SNAP participation, administrative costs per SNAP household declined during this period (see fig. 12). While many states implemented BBCE during this period, the largest decreases in these costs occurred in recent years when the economic recession was also a factor.

Specifically, during the recent recession, states faced budgetary constraints to funding SNAP administrative expenditures. Because states pay for approximately half of these expenditures, when state tax revenues decrease during recessions, state balanced budget requirements and other constraints affect state and local governments' ability to provide services at the same time that demand for services increases. In our site visits to five states, officials frequently noted how overwhelmed local SNAP caseworkers have been with the increased workload during the recent recession. They noted that workload increases have been driven by increases in SNAP participation and the amount of time households remain on the program, as well as budget constraints that hinder their offices' ability to hire additional staff. Across the seven local offices we visited in states that adopted BBCE during the recent recession, staff noted that while BBCE helped streamline the processing of individual cases, these improvements were offset by the increased workload. However, some staff indicated they believe reinstating the federal SNAP asset test that was removed under BBCE would make their workload unmanageable.

In addition to the recession, other changes that states have made to simplify and ease program administration during the last decade make it difficult to determine BBCE's full impact on administrative costs. For example, state and local officials frequently cited the implementation of reduced reporting requirements under the simplified reporting option, the conversion of case files from paper to electronic formats, the implementation of online SNAP applications, and increased use of phone interviews as changes that also helped to ease staff workloads. Officials in one state we visited noted that while these changes may have helped to reduce administrative expenditures over time, some, like BBCE, may have resulted in increased spending in the short term due to the need for training and modifications to computer systems.
Further, while most state SNAP officials we met with during our site visits felt that BBCE likely decreased administrative expenditures to some extent, they did not know the policy's actual impact because of other changes.

In recent years, the SNAP payment error rate declined to a historic low while multiple program changes occurred, including BBCE, but evidence suggests that factors other than BBCE may have played a larger role in the decline. Between fiscal years 2000 and 2010, USDA reported that the national payment error rate—the percentage of SNAP benefits paid in error, including underpayments and overpayments—fell from 8.91 percent to 3.81 percent as the number of states with BBCE policies increased from 0 to 39. Because most states' BBCE policies eliminated the need to confirm that SNAP household assets fall below certain limits, BBCE effectively removed the potential for asset-related errors in these states. However, USDA data indicate that most errors have been caused by factors other than assets in recent years. In fact, fewer than 4 percent of all error cases nationally have been caused by asset errors since 2000. Therefore, it is likely that other factors had a greater impact on error rates during this time. For example, the number of states adopting the simplified reporting option for at least some SNAP households increased during this period. Because this option eliminates substantial paperwork requirements for participants and states, and reduces the number of times income is verified, states experience fewer related errors. In addition, states we visited reported that they had also made other changes during this period to help lower their error rates, such as incorporating the use of technology with new case management models or digital files.

Further, both our analysis of USDA data and our discussions with SNAP staff suggest that BBCE may, in fact, contribute to more payment errors. Although BBCE has been promoted by USDA as a possible means to reduce errors, we found that a greater percentage of SNAP households eligible under BBCE that had incomes over the federal limits had payment errors than other households (17.2 percent compared to 6.7 percent) in fiscal year 2010. This may be related to the fact that these households were significantly more likely to have earned income, and income is a frequent cause of SNAP payment errors. In addition, while most states' BBCE policies removed a potential source of error by eliminating asset limits, SNAP caseworkers we spoke with told us that a reduction in the level of verification they perform may actually increase the potential for errors as well as fraud. For example, staff in two states reported that removing asset verification under BBCE has reduced their ability to investigate other applicant information for possible inconsistencies. Specifically, while asset verification often took considerable time to perform, they noted that previously reviewing bank accounts gave them the ability to identify regular deposits that may be income and to ensure that those deposits were reported by the applicant. Beyond changes due to BBCE, caseworkers in several states we visited suggested there has been a cultural shift towards an overall reduction in the level of verification and investigation they perform, in part because of the increased participation and workload related to the recent recession.
They expressed concern about maintaining a balance between providing assistance to those who need it and ensuring program integrity, noting they worry about losing access to information to help ensure integrity.

While federal rules provide states with considerable flexibility in designing their BBCE policies, gaps in federal oversight may contribute to some unintended consequences for SNAP and related programs' integrity. We found unintended consequences relating to three key areas: provision of a TANF-funded service, direct certification for free school meals, and requirements for categorically eligible households to report changes in household circumstances.

Our visits to states suggest that SNAP applicants are not consistently receiving the TANF-funded information required to confer categorical eligibility and that the extent to which this information is TANF-funded is unclear. According to USDA, BBCE policies make most households that apply for SNAP categorically eligible because they receive a TANF-funded service, such as an informational brochure or toll-free number, as long as the household's income is within the state's specified income limit (see fig. 13). However, in one state we visited, some local caseworkers told us they did not consistently provide the guide to SNAP services brochure to all applicants. In another, staff said that they will provide referrals to services at the applicant's request and/or if caseworkers think there is a need. In a third state we visited, applicants were directed on the SNAP application to call a toll-free number to receive an informational brochure on services; however, we were unsuccessful in obtaining this brochure after repeated (five) attempts to call the number listed. Further, according to USDA, states must use TANF funds to pay for either the document households receive or the services mentioned in the document. If states use TANF funds to cover at least 50 percent of the cost, they do not need to obtain USDA approval of their BBCE policies. While SNAP officials in three of the states we visited confirmed that the documents used to confer categorical eligibility are partially TANF-funded, they did not know the exact percentage of TANF dollars used to fund them.

Gaps in USDA's oversight of states' procedures for implementing BBCE may contribute to the inconsistencies we found in providing qualified applicants with the TANF-funded information or service that confers BBCE. While USDA has issued guidance over the past 3 years in response to various state questions about BBCE, the agency's documentation requirements for states that adopt it are limited. According to agency guidance, while states must document that a household was determined categorically eligible, USDA does not require states to document that the TANF-funded service was received by applicants. As a result, in a state where a document, such as a brochure, is used to confer eligibility, the state does not have to verify that it has provided the document to applicants as part of the eligibility determination process. In addition, headquarters and regional USDA officials told us the agency does not request documentation from states on the extent of TANF funding used, even though that information is necessary to determine whether a BBCE policy would require agency approval. Agency officials added that the burden is on the states to let USDA know if approval is needed.
Agency officials also told us that while they provide technical assistance to states, as needed, on the development of their BBCE policies and collect summary information on states' BBCE provisions, they do not approve state BBCE policies.

Because states have flexibility to decide how to treat SNAP households deemed eligible for $0 in benefits—an outcome more likely under BBCE—some children have been inappropriately certified for free school meals, including in two states we visited. Under SNAP, states have been allowed to decide whether to deny eligibility to households that qualify for $0 in benefits or whether to certify these households SNAP-eligible without benefits. For school meals programs, statute indicates states must certify children in households that receive SNAP benefits eligible for free school meals—a process called direct certification that is designed to ease administrative burden when certifying children for multiple assistance programs with similar eligibility criteria. Many states rely on data matches between their SNAP program and district-level school data to identify children eligible for direct certification, and beginning in school year 2012-2013, all states are expected to do so. However, because a state can certify families receiving $0 in SNAP benefits as eligible in its SNAP data system, it can directly certify children in such families for free school meals, even though they do not receive SNAP benefits. This practice occurred in two states we visited. SNAP officials in one state told us the state adopted BBCE, in part, to potentially enable more children to become eligible for free school meals. Local caseworkers in that state similarly said that they believe parents apply for SNAP specifically because they know their child(ren) are eligible for free school lunch even if they are deemed eligible for $0 in SNAP benefits.

In recognition of this practice, in 2011, USDA issued guidance for states through its regional offices reiterating that children in households receiving $0 in benefits are not categorically eligible for free school meals and therefore should not be directly certified; however, officials in the states we visited were unaware of this guidance. In its October 2011 memorandum, USDA further suggested that state SNAP agencies work with their school meal agency counterparts to ensure that children from $0 benefit SNAP households are excluded from direct certification as soon as possible. According to USDA, school meal agencies were to be in compliance with this guidance by July 1, 2012. USDA's regional offices representing the states we visited told us they routinely transmit guidance and policy changes to states from the national office. This guidance was also made available on USDA's Web site. However, in June 2012, we followed up with the two states we visited that had been directly certifying children from $0 benefit SNAP households, and state officials indicated the practice was still occurring, as they were not aware of this guidance from USDA. The guidance—a letter to program directors in all regions, "National School Lunch Program and Direct Certification with SNAP," signed by the Director of the Program Development Division, FNS, USDA, on October 25, 2011—states that the practical result is that direct certification of students from families eligible for SNAP but entitled to $0 benefit should not continue in the 2012-13 school year.
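The data match that this guidance implies can be illustrated with a minimal sketch that excludes $0-benefit SNAP households from direct certification. The record layouts, names, and field values below are hypothetical and do not represent any state's actual data systems.

# Minimal sketch of a direct-certification data match that excludes SNAP
# households receiving $0 in benefits, consistent with the October 2011
# guidance described above. All records and field names are hypothetical.

snap_households = [
    {"case_id": "A1", "children": ["Ann", "Ben"], "monthly_benefit": 250},
    {"case_id": "B2", "children": ["Cal"],        "monthly_benefit": 0},  # eligible, $0 benefit
]

school_enrollment = {"Ann": "District 5", "Ben": "District 5", "Cal": "District 7"}

def directly_certified_children(households, enrollment):
    """Match enrolled children to SNAP cases, certifying only children in
    households that actually receive benefits (monthly benefit above $0)."""
    certified = []
    for case in households:
        if case["monthly_benefit"] <= 0:
            continue   # $0-benefit households are not certified for free meals
        for child in case["children"]:
            if child in enrollment:
                certified.append((child, enrollment[child], case["case_id"]))
    return certified

print(directly_certified_children(snap_households, school_enrollment))
# [('Ann', 'District 5', 'A1'), ('Ben', 'District 5', 'A1')] -- Cal is excluded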
Direct certification of children in categorically eligible SNAP households creates another unintended consequence—one of effectively increasing the income eligibility limit for free school meals for some children. While the federal gross income-eligibility limit for SNAP aligns with that of the school meals programs—providing free meal benefits to children in households at or below 130 percent of the federal poverty guidelines—the programs no longer align in states with BBCE policies that have raised the SNAP gross income limit. In the 27 states with BBCE gross income limits between 160 and 200 percent of the federal poverty guidelines, children in categorically eligible households may receive free school meals when, under traditional federal rules, they would not qualify for free meal assistance. In short, through their BBCE policies, some states have effectively increased the income eligibility limits for two key federal nutrition assistance programs.

In states that have adopted BBCE, requirements for reporting changes in household income or household size can vary, resulting in unequal treatment of households. Under simplified reporting rules adopted by nearly all states, households are required to report changes in income between scheduled reporting periods only if income exceeds the federal SNAP gross income limit—130 percent of the federal poverty guidelines. Because of BBCE, however, 27 states have, in effect, changed their SNAP gross income limit to levels greater than 130 percent. USDA issued SNAP guidance on change reporting requirements clarifying that, in states with simplified reporting, categorically eligible households with gross incomes over 130 percent of the federal poverty guidelines at the time of certification have no federal SNAP reporting requirements until they recertify or file a periodic report. While the guidance further indicates that states may choose to require these households to report when their gross income exceeds the income limit of the TANF program that confers categorical eligibility, they are not required to do so. Two states we visited do not require households with incomes above 130 percent of the poverty guidelines to report changes in income between reporting periods. This results in lower-income SNAP households having a greater reporting burden than higher-income SNAP households in order to retain their benefits.

While USDA has issued guidance to states in this area, its guidance relies on TANF reporting requirements that do not exist. USDA officials told us that TANF rules require categorically eligible SNAP households to report to TANF when their incomes exceed the income limit of the TANF service used to confer BBCE. However, because BBCE households are often authorized to receive a TANF-funded service through a brochure or toll-free telephone number given to them by a SNAP office, they may not be aware of any related TANF reporting requirements. Further, as we have previously reported, state TANF agencies are not required to collect data on many recipients of TANF-funded services, which include BBCE households. Accordingly, a state TANF agency would not seek information on these households' income changes in order to share that information with the SNAP agency.

In response to the recent economic downturn and prolonged recovery, the Supplemental Nutrition Assistance Program has grown to provide unprecedented numbers of low-income households with benefits for food assistance.
While the substantial increases in SNAP participation led to concerns that the large number of states adopting BBCE policies in recent years may have been a driver of those increases, these policies have had only a modest impact on program participation. Further, SNAP generally continues to serve households with the same types of characteristics it always has and is intended to serve. As federal and state governments face mounting fiscal pressures and confront limited resources, ensuring the integrity of SNAP and other programs spending public dollars is critical.

While USDA touted BBCE as a way to improve program integrity and administrative efficiency, state adoption of BBCE has created unintended consequences that may weaken both SNAP and related programs' integrity and introduce inequities. First, because gaps exist in USDA's review of states' procedures for implementing BBCE, some states are deeming households eligible under BBCE without following the required steps to do so. In addition, it is not known whether states are following the funding requirements associated with these policies. Second, because USDA's guidance clarifying children's eligibility for free school meals when their families receive $0 in SNAP benefits—an outcome likely more common because of BBCE—has not reached all states, school meal programs are vulnerable to overpayments and abuse. Finally, USDA's guidance on SNAP reporting requirements has resulted in lower-income households eligible under federal SNAP rules having to do more to retain their benefits than higher-income SNAP households eligible solely because of states' BBCE policies. While these unintended consequences of BBCE on SNAP program integrity are potentially significant, they may also be easily addressed by those overseeing and administering the program. At a time when the economy has left more in need of assistance, SNAP continues to help low-income households obtain adequate nutrition. As a result, any changes to BBCE should carefully weigh the potential benefits and costs, which at this time include the increased burden on state and local staff who are already stretched thin as a result of decreased budgets and staff resources.

To improve SNAP program integrity and oversight, we are recommending that the Secretary of Agriculture require FNS to take several actions:

- Review state procedures for implementing BBCE, specifically those in place for providing the relevant TANF-funded service to all SNAP applicants deemed eligible under BBCE, as well as for ensuring the relevant service is funded with TANF dollars.
- Disseminate directly to state agencies administering SNAP the agency's October 2011 guidance clarifying that children in households certified as eligible for $0 in SNAP benefits should not be directly certified to receive free school meals.
- Revisit agency guidance on change reporting requirements to ensure that all households, including those deemed eligible under BBCE with incomes above the federal gross income limit, are treated equitably.

We provided a draft of this report to USDA for review and comment. On July 16, 2012, the Associate Administrator for SNAP and other FNS officials provided us with the agency's oral comments. Officials stated that they were in general agreement with the findings and recommendations presented in the report and offered technical comments that we have incorporated as appropriate.
Officials also discussed the positive impacts BBCE has had on SNAP, including state administrative relief and cost savings, and emphasized our finding that BBCE policies have generally not changed the characteristics of SNAP households. As a result, the program continues to serve those it is intended to serve. Officials also noted their agreement with our conclusion that BBCE's benefits should be considered when assessing changes to these policies. Concerning our finding on the percentage of SNAP households with incomes over the federal limits that had payment errors, officials noted that these households may be more likely to have benefit errors than other SNAP households because they have greater earned income and deductions—factors that have been found to increase the likelihood of errors. We agree, and our findings on the characteristics of this sub-group of households support that conclusion. Further, officials suggested that the total amount of benefit dollars provided in error to this sub-group of households is likely relatively small because the average monthly benefit provided to these households is much smaller than the average benefit provided to all other SNAP households. Because of this, officials believe that errors in these households impact the overall SNAP payment error rate to a small extent, which is supported by the fact that the program's error rate has been relatively constant in recent years while the number of states with BBCE has increased. While we agree that it is likely that the total amount of benefit dollars provided in error to this sub-group of households is relatively small, we did not develop such an estimate during our analysis of the SNAP quality control data.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Agriculture, and other interested parties. In addition, this report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

To determine the prevalence and characteristics of households deemed eligible under states' broad-based categorical eligibility (BBCE) policies that had incomes over the federal Supplemental Nutrition Assistance Program (SNAP) eligibility limits in fiscal years 2008 and 2010, we analyzed the Food and Nutrition Service's (FNS) quality control (QC) system data of active SNAP cases. Per federal SNAP requirements, state officials draw monthly random samples of SNAP cases and review them to determine the extent to which households received benefits to which they were entitled. FNS officials in its regional offices and headquarters perform a secondary review of a sub-set of each state's sample of cases. The weighted analyses of the QC data produce nationally representative results. To identify which households were deemed eligible under BBCE, and the sub-set of BBCE households that had incomes over the federal SNAP eligibility limits, we took several steps.
First, we identified which states had BBCE policies in place in fiscal years 2008 and 2010 using an FNS compilation of BBCE policy implementation dates. Based on our discussions with FNS officials, Mathematica Policy Research, Inc. staff, and state and local staff we spoke to during our five site visits, we assumed that once BBCE was enacted by a state, it was used as the default SNAP eligibility policy. Therefore, in states with BBCE policies in the fiscal year analyzed, we considered BBCE households to be those denoted in the QC data as categorically eligible in which all members did not receive cash assistance from another means-tested program. From this group, we determined the sub-set of BBCE households that had incomes over the federal SNAP eligibility limits. We obtained the QC data directly from the QC database, which is made available to the public via Mathematica Policy Research, Inc.’s Web site. FNS contracts with Mathematica Policy Research, Inc. to maintain the SNAP QC data. To analyze the data, we reviewed the technical user’s manual for both the 2008 and 2010 QC public release data sets and evaluated the sampling methodology used to produce the data. We also reviewed the documentation for the internal review and coding process that FNS follows to prepare the QC data. Further, we checked the variables used in our analysis for out-of-range values or outliers. To produce weighted frequencies, weighted percents, and weighted dollar estimates for QC variables at the state and national level, we used the household weight variable provided in the public release QC data set. Because the records in the SNAP QC data are from a random sample, data analysis results are weighted estimates for a population of eligible households and thus are subject to sampling errors associated with samples of this size and type. The QC sample is only one of a large number of samples that states might have drawn. As each sample could have provided different estimates, we expressed our confidence in the precision of our QC data estimates as a 95 percent confidence interval (e.g., plus or minus 10 percentage points). To produce 95 percent confidence intervals around our weighted estimates, we used a statistical software package and an appropriate variance estimation method suitable for the sample design of the QC data. (Appendix II provides the estimates and 95 percent confidence intervals for the data we present in the body of this report.) Through our analysis of the QC data, a review of the technical documentation, and interviews with FNS officials and Mathematica statisticians, we determined that the QC public release data were sufficiently reliable for the purposes of our audit. In addition to conducting our own analysis of the QC data, we reviewed national-level data on SNAP payment error rates—the percentage of SNAP benefits paid in error—available from FNS for fiscal years 2000 to 2010. We also reviewed the primary sources of payment errors from 2000-2010 to help identify the extent to which payment errors were attributable to assets or another source. In addition to the QC data, we reviewed other data on SNAP participation and costs from USDA. Specifically, we analyzed data on average monthly SNAP participation in recent years obtained from USDA reports. 
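To make the weighted-estimate and confidence-interval computation described above more concrete, the following is a minimal sketch of how such figures can be produced from a household-level sample. It is illustrative only: the file and column names (qc_public_release.csv, hhold_weight, bbce_over_limit) are hypothetical rather than the actual QC public-release variable names, and the simple-random-sample variance shown here is only an approximation of the design-appropriate variance estimation method described above.

```python
# Minimal sketch: weighted share of households in a sub-group, with an
# approximate 95 percent confidence interval. File and column names are
# hypothetical; the actual QC analysis uses the public-release variable
# names and a variance estimator suited to the sample design.
import numpy as np
import pandas as pd

qc = pd.read_csv("qc_public_release.csv")

w = qc["hhold_weight"].to_numpy()      # household analysis weight
x = qc["bbce_over_limit"].to_numpy()   # 1 if a BBCE household is over the federal income limit, else 0

# Weighted estimate of the sub-group's share of all SNAP households.
p_hat = np.average(x, weights=w)

# Effective sample size and a normal-approximation standard error,
# treating the data as a weighted simple random sample.
n_eff = w.sum() ** 2 / np.sum(w ** 2)
se = np.sqrt(p_hat * (1 - p_hat) / n_eff)
ci_low, ci_high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"Weighted estimate: {p_hat:.1%} (95% CI: {ci_low:.1%} to {ci_high:.1%})")
```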
In addition, we obtained data on total benefit costs and the average monthly SNAP benefits per household from USDA's Web site and the annual SNAP State Activity Reports for fiscal years 2001-2011, as well as data on the proportion of households receiving the maximum SNAP benefit from the annual Characteristics of SNAP Households reports for fiscal years 2001-2010. To assess administrative costs, we obtained data on federal and state outlays and obligations for fiscal years 2001-2010 from USDA's National Data Bank. These data are annually reported by states on the Standard Form 269 in specific cost categories designated by USDA. In addition, we obtained data on state expenditures of the federal Recovery Act funds provided for state administrative expenses in fiscal years 2009 and 2010, as well as the related funds provided through the Department of Defense Appropriations Act, 2010.

To better understand the effects of state BBCE policies on SNAP, as well as other factors impacting SNAP, we conducted site visits to 5 states and 18 local offices responsible for administering SNAP in those states during January and February 2012. The states and localities we visited were Arizona—Maricopa, Pima, and Pinal counties; Illinois—Cook and Lake counties; North Carolina—Cabarrus, Gaston, Lincoln, and Mecklenburg counties; South Carolina—Greenville, Laurens, and Pickens counties; and Wisconsin—Kenosha, Milwaukee, and Racine counties. We selected these states because they varied in their BBCE adoption dates, in the characteristics of their BBCE policies, and in their geographic locations. States selected also had relatively large SNAP caseloads and generally high proportions of their SNAP households deemed eligible under BBCE policies. In each state, we interviewed state officials responsible for administering SNAP, as well as local SNAP administrators and caseworkers at three or four local offices. The local offices we visited ranged from urban to rural areas. During the interviews we collected information about the state's BBCE policy and its application. In addition, we collected information about recent trends in SNAP participation, benefit amounts, administrative workload, and program errors, as well as BBCE's impact on each. We also collected information on other economic and non-economic factors that have impacted SNAP. Also, at each local office we observed the office's general process for serving SNAP applicants, including the forms, documents, and technological systems used, and we gathered information on how BBCE was applied during the process. Lastly, we conducted interviews with federal officials at the USDA regional office associated with each state in order to discuss their role in the oversight of SNAP. We cannot generalize our findings beyond the states and localities we visited.

Kathy Larin (Assistant Director), Rachel Frisk (Analyst-in-Charge), Avani Locke, and David Perkins made significant contributions to all aspects of this report. Also contributing to this report were Carl Barden, Marquita Campbell, Susannah Compton, Heather Dunahoo, Greg Kutz, Jean McSween, Mimi Nguyen, Susan Offutt, Rhiannon Patterson, Kathy Peyman, Almeta Spencer, Craig Winslow, and Jill Yost.

State and Local Governments: Knowledge of Past Recessions Can Inform Future Federal Fiscal Assistance. GAO-11-401. Washington, D.C.: March 31, 2011.
Temporary Assistance for Needy Families: Implications of Caseload and Program Changes for Families and Program Monitoring. GAO-10-815T. Washington, D.C.: September 21, 2010.
Supplemental Nutrition Assistance Program: Payment Errors and Trafficking Have Declined, but Challenges Remain. GAO-10-956T. Washington, D.C.: July 28, 2010.
Domestic Food Assistance: Complex System Benefits Millions, but Additional Efforts Could Address Potential Inefficiency and Overlap among Smaller Programs. GAO-10-346. Washington, D.C.: April 15, 2010.
Food Stamp Program: FNS Could Improve Guidance and Monitoring to Help Ensure Appropriate Use of Noncash Categorical Eligibility. GAO-07-465. Washington, D.C.: March 28, 2007.
Human Service Programs: Demonstration Projects Could Identify Ways to Simplify Policies and Facilitate Technology Enhancements to Reduce Administrative Costs. GAO-06-942. Washington, D.C.: September 19, 2006.
Food Stamp Program: States Have Made Progress Reducing Payment Errors, and Further Challenges Remain. GAO-05-245. Washington, D.C.: May 5, 2005.
Food Stamp Program: Farm Bill Options Ease Administrative Burden, but Opportunities Exist to Streamline Participant Reporting Rules among Programs. GAO-04-916. Washington, D.C.: September 16, 2004.
Food Stamp Program: Steps Have Been Taken to Increase Participation of Working Families, but Better Tracking of Efforts Is Needed. GAO-04-346. Washington, D.C.: March 5, 2004.
Food Stamp Program: States' Use of Options and Waivers to Improve Program Administration and Promote Access. GAO-02-409. Washington, D.C.: February 22, 2002.
Means-Tested Programs: Determining Financial Eligibility Is Cumbersome and Can Be Simplified. GAO-02-58. Washington, D.C.: November 2, 2001.
Food Stamp Program: States Seek to Reduce Payment Errors and Program Complexity. GAO-01-272. Washington, D.C.: January 19, 2001.
Over the last 10 years, participation in the U.S. Department of Agriculture’s (USDA) SNAP, previously known as the Food Stamp Program, has more than doubled, and costs have quadrupled. Since 1999, USDA has allowed states to expand SNAP eligibility by adopting BBCE policies, which make households that receive services funded by Temporary Assistance for Needy Families, such as a toll-free number or brochure, categorically eligible for SNAP. Under BBCE policies, states are able to increase federal SNAP limits on household income and remove limits on assets. Although USDA has encouraged states to adopt BBCE to improve SNAP access and administration, little is known about the effects of these policies. GAO was asked to assess: (1) To what extent are households that would otherwise be ineligible for SNAP deemed eligible for the program under BBCE? (2) What effect has BBCE had on program costs? (3) What are the program integrity implications of BBCE? GAO analyzed data from USDA, selected states, and other national sources; conducted site visits to 5 states; and interviewed federal, state, and local officials, as well as others with knowledge of SNAP. In fiscal year 2010, GAO estimates that 2.6 percent (473,000) of households that received Supplemental Nutrition Assistance Program (SNAP) benefits would not have been eligible for the program without broad-based categorical eligibility (BBCE) because their incomes were over the federal SNAP eligibility limits. The characteristics of these households were generally similar to other SNAP households, although they were more likely to work or receive unemployment benefits. BBCE removes asset limits in most states, and while reliable data on participants’ assets are not available, other data suggest few likely had assets over these limits. Although BBCE contributed to recent increases in SNAP participation, other factors, notably the recent recession, had a greater effect. GAO estimates that BBCE increased SNAP benefit costs, which are borne by the federal government, by less than 1 percent in fiscal year 2010. In that year, total SNAP benefits provided to households that, without BBCE, would not have been eligible for the program because their incomes were over the federal SNAP eligibility limits were an estimated $38 million monthly or about $460 million for the year. These households received an estimated average monthly SNAP benefit of $81 compared to $293 for other households. BBCE’s effect on SNAP administrative costs, which are shared by the federal and state governments, is unclear, in part because of other recent changes that affect this spending, such as state budget and staffing reductions in the recent recession. BBCE has potentially had a negative effect on SNAP program integrity. In recent years, the SNAP payment error rate declined to an historic low, but evidence suggests the decline is primarily due to changes other than BBCE. While BBCE may improve administrative efficiency, both national data and discussions with local staff suggest BBCE may also be associated with more errors. In addition, BBCE has led to unintended consequences for SNAP and related programs. For example, in implementing BBCE, some states are designating SNAP applicants as categorically eligible without providing them with the service required to make this determination. 
Further, likely because they are unaware of recent USDA guidance, some states certify children for free school meals when their households are determined eligible for SNAP, even though they do not receive SNAP benefits—a result more common in states with BBCE. Finally, because of federal guidance on BBCE, rules for reporting changes in household circumstances now differ by household income level and may leave higher income households without reporting requirements for several months. GAO recommends that USDA review state procedures for implementing BBCE, disseminate guidance to states on certifying SNAP households as eligible for school meals, and revisit its guidance on SNAP reporting requirements to ensure they address all households. USDA generally agreed with GAO’s recommendations.
Basic wireline 911 service provides an easily remembered universal number that connects the caller with an emergency response center, known as a public safety answering point (PSAP) (see fig. 1). The next step after basic wireline 911 service is "enhanced 911" (E911), which automatically routes the emergency call to the appropriate PSAP and transmits to the call taker the telephone number (the "callback number," should the call be disconnected) and street address of the caller. Nationwide implementation of E911 by local wireline telephone companies, known as "local exchange carriers" (LEC), began in the 1970s without a federal mandate or deadlines governing the rollout. By 1987, 50 percent of the United States' population could reach emergency services through wireline 911. Today, 99 percent of the population is covered by wireline 911 service, and 93 percent of that coverage includes the delivery of a callback number and location information.

In the early 1990s, FCC took note of the rising number of mobile telephone subscribers and the resulting increase in 911 calls. In 1994, FCC requested comments on requiring wireless carriers to provide the same level of 911 service that was available from LECs. In 1996, with input from the industry and public safety community, FCC adopted rules for wireless E911 that established an approach consisting of two phases for implementation by the wireless carriers. FCC also set schedules for implementing both basic and enhanced wireless 911 services, determined accuracy requirements and deployment schedules for location technologies, and outlined the role of PSAPs. Specifically, the phases required the following:

Phase I required that by April 1998, or within 6 months of a request from a PSAP, whichever was later, wireless carriers were to be prepared to provide the PSAP with the wireless phone number of the caller and the location of the cell site receiving the 911 call.

Phase II required that by October 2001, or within 6 months of receiving a request from a PSAP, whichever was later, wireless carriers were to be prepared to provide the PSAP with Phase I information plus the latitude and longitude coordinates of the caller within certain standards of accuracy.

In 1996, when these rules were established, the technology to accurately locate a caller on a mobile telephone had not yet been perfected, but a "network-based" solution was anticipated. With this type of solution, a caller is located through a triangulation process using the closest cell towers. However, as location technology was being developed, a "handset-based" solution (i.e., one using the wireless phone itself) was made available. The most common handset solution also relies on triangulation, but uses Global Positioning System (GPS) satellites and a GPS chip inside the handset. In recognition of this second solution, FCC issued rules in October 1999 for carriers that selected handset-based location technologies. In August 2000, FCC adopted modifications to its rules for handset-based solutions and said that even if a PSAP has not made a request for Phase II wireless E911 service, wireless carriers deploying a handset-based solution must ensure that by December 31, 2005, 95 percent of their customers have mobile phones capable of providing automatic location information. A typical wireless 911 call is routed along both wireless and wireline networks before terminating at the PSAP. See figure 2 below.
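As a rough illustration of the network-based triangulation concept described above, the sketch below estimates a caller's position from range estimates to three cell sites using a simple least-squares calculation. It is a simplified, planar example for illustration only: the tower coordinates and range values are invented, and actual network-based E911 solutions use more sophisticated techniques (such as time-difference-of-arrival measurements) and real-world geodetic coordinates.

```python
# Simplified, illustrative trilateration of a wireless caller from range
# estimates to three cell sites. Coordinates are planar (in meters) and
# all values are invented for the example.
import numpy as np

towers = np.array([[0.0, 0.0], [4000.0, 0.0], [0.0, 3000.0]])  # cell site positions
ranges = np.array([2500.0, 2700.0, 2200.0])                     # estimated distances to the caller

# Subtracting the circle equation for the first tower from the others
# linearizes the problem; solve the resulting system for (x, y).
A = 2 * (towers[1:] - towers[0])
b = (ranges[0] ** 2 - ranges[1:] ** 2
     + np.sum(towers[1:] ** 2, axis=1) - np.sum(towers[0] ** 2))
position, *_ = np.linalg.lstsq(A, b, rcond=None)

print(f"Estimated caller position: x = {position[0]:.0f} m, y = {position[1]:.0f} m")
```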
While the voice call is taking place over the wireless and wireline networks, several data queries are simultaneously occurring to determine the caller’s physical location and callback number. With wireless callers, the location information may need to be updated throughout the call to achieve greater accuracy or because the caller is moving during the call. Phase II wireless E911 service is more complex to implement than Phase I because of the need to install equipment to determine the geographic coordinates of the caller, transfer that information through the telephone networks, and have a mapping system in place at the PSAP that can display the latitude and longitude coordinates of the caller as a map location for dispatching assistance. When Phase II location data is unattainable (e.g., the handset does not have line of sight to enough GPS satellites to determine the caller’s location), most wireless systems default to providing Phase I data, including the location of the cell tower and cell sector receiving the call. The increased complexity of Phase II also makes it more costly than Phase I to implement. To date, the federal government has played no role in financing the rollout of wireless E911 services. Wireless carriers must finance the implementation of a caller location solution and test equipment to verify accuracy. LECs are generally responsible for ensuring that all the necessary connections between wireless carriers, PSAPs, and databases have been installed and are operating correctly. PSAPs purchase telephone services from the LECs. Because the typical underlying wireline E911 network is unable to carry the additional wireless E911 information, PSAPs often must purchase a separate data link and connection from the LEC. In order to translate the latitude and longitude location information into a street address, PSAPs usually purchase and install mapping software. PSAPs may also need to acquire new computers to receive and display this information. In short, three parties—the wireless carriers, LECs, and PSAPs—must interconnect and install equipment in order for wireless E911 calls to be completed and the caller location information to be sent with the call. However, no single entity has regulatory authority and oversight over the entire implementation process. FCC has considerable regulatory authority over wireless carriers and has placed location accuracy standards and deployment deadlines on the wireless carriers. State public utility commissions have some authority over wireless carriers’ terms and conditions of service. The state public utility commissions also have a great deal of authority over the LECs, including authority over intrastate service rates, while FCC retains some authority over LEC interconnection agreements with wireless carriers and other issues. PSAP readiness remains a state and local issue because PSAPs serve an emergency response function that has traditionally fallen under state or local jurisdiction. The manner in which the more than 6,000 PSAPs across the country are administered and funded—at a state, county, city, or other political subdivision level—varies from state to state. According to FCC, the Commission has no authority to set deadlines for PSAPs’ deployment of the equipment they need in order to receive caller location information from the wireless carriers. Setting such deadlines on PSAPs would be a matter for states and localities. Another federal agency with an interest in this issue is DOT. 
According to DOT, its involvement stems from the department's mandate to handle issues of traffic safety and from a directive from the Secretary of Transportation to become involved in wireless E911 issues. DOT officials noted that wireless phones have become crucial to reporting highway accidents and getting ambulances or other assistance to the scene. As will be discussed below, DOT is involved in several initiatives to track the progress of E911 deployment and help promote wireless E911 services, especially at the state and local level.

As the original Phase II deadline of October 2001 approached, the six large national wireless carriers (which provide service to approximately 75 percent of wireless telephone subscribers) requested waivers because the location technology was not ready for implementation. In granting the waivers, FCC negotiated different deadlines with each of these carriers, based on the carrier-specific Phase II compliance plans. The FCC also required these carriers to file detailed quarterly reports regarding implementation. In July 2002, FCC also granted temporary relief from the Phase II deadlines to those non-nationwide midsize and small wireless carriers that had requested relief. Currently, all wireless carriers that have chosen to deploy a handset-based location solution remain under a deadline of having handsets containing location technologies in use by 95 percent of subscribers by December 31, 2005. Yet, despite this deadline, Phase II service is not assured in any area by any specific date. This is because all wireless carriers must respond within 6 months to a PSAP request for the delivery of wireless E911 location information. PSAPs, however, are under no federal deadlines to ever request wireless E911 services. Thus, the full rollout of wireless E911 services nationwide depends in great part on the implementation efforts of the more than 6,000 PSAPs.

Based on the best available data, nearly 65 percent of PSAPs across the nation have implemented Phase I and 18 percent have implemented Phase II with at least one wireless carrier providing location information. However, there is still a lack of information regarding how many of the more than 6,000 PSAPs will need to upgrade their equipment, making it difficult to accurately measure the progress of wireless E911 implementation. Looking forward, our survey of state 911 contacts found that fewer than half of them believe that wireless E911 services will be fully in place in their state by 2005. This raises the prospect that E911 implementation will be piecemeal both within states and across the nation for an indefinite number of years to come.

Currently, the single best information source for tracking the progress being made in deploying wireless E911 service at the local level comes from DOT and the National Emergency Number Association (NENA). DOT contracted with NENA to create a database of counties and the PSAPs within the counties to provide information about implementation of wireless E911. This database is updated every quarter using wireless carrier information filed with the FCC, and supplemented by data gathered directly from PSAPs. Prior to the creation of this database, the only national data available about PSAPs comprised information about NENA's membership, and that information did not include all PSAPs or track E911 deployments. Thus, the DOT/NENA initiative has provided a key instrument for measuring wireless E911 implementation.
According to NENA, as of October 2003, nearly 65 percent of PSAPs nationwide had implemented Phase I wireless E911 service, which provides the call taker with the callback number and the location of the cell tower and cell sector receiving the 911 call. Phase II, which locates the caller with more precise geographic coordinates, has been implemented with at least one wireless carrier in 18 percent of PSAPs. As part of our survey of state 911 contacts, we asked respondents about their states' progress on Phase I and Phase II deployments. The responses to our survey were not complete because some state contacts were uncertain about their state's current status. However, for the 33 states and the District of Columbia from which we did receive responses, we found that percentages for Phase I and Phase II implementation were consistent with NENA's data. The percentages of counties that have implemented wireless Phase I and Phase II E911 service are illustrated, by state, in figure 3. The percentages are based on GAO's analysis of NENA data as of October 2003.

Measuring the progress of wireless E911 implementation against the goal of full nationwide Phase II deployment depends on being able to compare the number of PSAPs that are receiving wireless Phase II location data with the universe of PSAPs that need to be upgraded. We found, however, that there is a lack of accurate information on the total number of PSAPs that need to be upgraded. NENA has determined that there are 6,143 PSAPs nationwide. However, this number includes both "primary" and "secondary" PSAPs. A primary PSAP is defined by NENA as a PSAP to which 911 calls are directly routed; a secondary PSAP only receives calls that have been transferred, or passed along, from a primary PSAP. Generally, primary and secondary PSAPs have been included in the total number of PSAPs that need to be capable of receiving wireless E911 information. However, the results of our survey of state 911 contacts, along with our case study interviews, indicate that some states do not plan to upgrade their secondary PSAPs. For example, in North Carolina, state statute only permits primary PSAPs to be funded for wireless E911; in Kentucky, Virginia, and Washington, state funds to help finance wireless E911 upgrades are only available to primary PSAPs; in Maryland, the issue is currently under discussion, although consolidating secondary PSAPs with primary ones has been considered. In addition, some secondary PSAPs are so small that they may never need wireless E911 equipment. Currently, the DOT/NENA database does not differentiate between PSAPs that will need to be upgraded and those that will not, which limits the usefulness of the database in accurately assessing progress toward full wireless E911 implementation.

For its part, FCC requires large and midsize wireless carriers that have filed for relief from deployment deadlines to provide information quarterly on their progress in implementing Phase I and Phase II. Until recently, the data submitted by the carriers and available from FCC were organized by carrier, not by state or county, and were not easily sorted to provide information concerning the status of wireless E911 deployment. However, as of August 1, 2003, FCC also began requiring the large and midsize wireless carriers to submit data in an electronic spreadsheet format regarding deployment of Phase I and Phase II by PSAP.
Because this spreadsheet has several fields, including the state, researchers can search by field and have numerous options for organizing the data. In addition, small wireless carriers that had also requested relief were required to file one interim report with FCC about their E911 progress on August 1, 2003. Based on the August filings, FCC told us that most of the large and midsize carriers appear to be making good progress toward readying their networks to respond to PSAP requests for E911 services.

In our survey of state 911 contacts (which included the District of Columbia), we asked respondents to provide us with an estimate of when they believed their state would have wireless Phase II E911 fully in place for at least one wireless carrier per PSAP. Twenty-four of 51 respondents said they thought Phase II would be fully in place in their state by 2005, the last year for which there is any specific FCC deadline on wireless carriers. Six of those 24 respondents said they would be ready by 2003. Contacts in other states were either unwilling to commit to any specific year, given their current level of implementation, or estimated a date in 2006 or beyond. See figure 4. As the estimates from state contacts indicate, no clear picture is emerging on when Phase II will be fully deployed nationwide, raising the prospect of piecemeal availability of this service across the country for an indefinite number of years to come. As of October 2003, NENA estimates that over the next 5 years the nationwide cost to deploy Phase II will be between $8 billion and $9 billion, including capital and incremental operating expenses.

Funding for PSAP equipment upgrades remains a major issue for many states and localities and continues to hamper nationwide deployment. Not all states have implemented a funding mechanism for wireless E911, and of those that have, some have redirected E911 funds to unrelated uses. In addition, poor coordination among the parties is a factor affecting wireless E911 deployment, although some states and localities have eased this problem with active and knowledgeable state 911 coordinators who help oversee the process and work with all the parties. Technologically, the main hurdle of developing wireless location equipment for mobile phones has been overcome, but the continuing emergence of new wireless devices and services has the potential to overburden the current 911 infrastructure.

It is costly to implement wireless E911 services. PSAPs need money to upgrade their systems and equipment and to purchase new software to receive and display caller location information. Wireless carriers incur costs associated with handset and network upgrades, engineering design, upgrading hardware and software, and maintaining the system. The LECs also incur costs, but generally these are paid for by the PSAPs as they purchase 911 services and upgrades from the LECs. Currently, funding must come from sources other than the federal government, which has not provided funding to PSAPs or wireless carriers for wireless E911 or established guidelines on how wireless E911 should be funded. At present, it is up to state and local governments to determine how to pay for PSAP wireless E911 upgrades. To cover the costs associated with implementing wireless E911, responses to our survey showed that the majority of states (39 states plus the District of Columbia) require wireless carriers to collect funds from their subscribers through a surcharge included on subscribers' monthly wireless phone bills.
The amount of the surcharge is usually determined by the state; responses to our survey showed the surcharges ranged from 5 cents to $1.50 per month. Generally, the wireless carriers submit the funds to the states, and the states have the discretion to determine how the funds will be managed. For example, some states have established E911 boards that oversee the funds, while other states allow the funds to be managed at the county or PSAP level. Methods of disbursement also varied. Some states allocated wireless E911 funds to PSAPs based on their jurisdictional population, while some based it on the number of wireless subscribers in the jurisdiction. Other states evenly divided the funds among counties or PSAPs. Although the majority of states have established some type of funding mechanism, problems with funding PSAP equipment upgrades persist. For example, NENA maintains that many communities are not in a position to implement wireless E911 service because funds collected for E911 deployment are not being allocated for that purpose. Our survey of state E911 contacts found that 13 states and the District of Columbia had used wireless E911 funds for expenditures unrelated to wireless E911 implementation, and 9 other states had attempted to do so. For example, in one state, more than $40 million was taken from the E911 fund for unrelated purposes, and an additional $25 million is expected to be taken in 2004. The state contact said that if the redirection of funds continues, it would bring E911 upgrades to a halt. Another state E911 contact told us that the use of some E911 funds for other purposes had hindered the ability of PSAPs to purchase necessary computer upgrades and mapping software. In another state, funds had not been redirected to other purposes, but the E911 funds were “frozen” by the state’s legislature and could not be used by the PSAPs to implement Phase II. The state E911 coordinator told us that the state’s E911 fund had sufficient monies to implement Phase II statewide, but many PSAPs could not move forward until the state’s legislature allocated funds for E911 initiatives, and it was unclear when or if that would occur. In addition to the redirection of E911 funds, our survey of state contacts found that eight states have never instituted a statewide system for collecting funds for wireless E911 purposes. In one state, for example, any fee or tax proposed to be placed on the public must be approved by the state’s voters, and legislation creating an E911 funding mechanism did not receive voter approval. The state’s E911 contact told us that the proposed legislation would have generated sufficient funds for deploying wireless E911 statewide, but without the funding, most counties in the state will not have Phase II implemented by 2005. Some of the other eight states have experienced opposition to E911 funding because it is perceived as a tax; another state has not addressed the issue of wireless E911 implementation at all. Another funding issue raised by survey respondents and by others we interviewed was that rural PSAPs in particular face funding problems for E911. For example, some states allocate funds to the PSAPs based on their jurisdictional population, which may cause PSAPs serving small or rural communities in those states to receive insufficient funds to implement E911. 
While many of the costs involved in purchasing upgraded equipment and mapping software are similar for PSAPs serving large and small communities, PSAPs that receive fewer E911 funds because of their smaller population base may not have adequate funds to purchase the necessary equipment and software. Two wireless carriers told us that numerous PSAPs they serve had either withdrawn or suspended their requests to wireless carriers for Phase II service because of funding constraints.

Wireless carriers also incur various costs to implement E911. For example, two wireless carriers told us they had spent about $50 million each to date to deploy E911, and three others said their costs would exceed $100 million each. Several of the small wireless carriers we interviewed in our case studies said that funding E911 technologies is particularly difficult for them because of their limited revenues and that raising their rates would risk their competitiveness in the market. While FCC requires wireless carriers to implement E911, the Commission has not mandated as a prerequisite to implementation that the carriers be reimbursed for their E911 expenses. Although responses to our survey showed that 32 states and the District of Columbia allow wireless carriers to recover their E911 costs from the state funding mechanism, state E911 contacts sometimes reported that it might be difficult for the carriers to recoup all of their E911 costs. For example, some states only allow the wireless carriers to be reimbursed if funds were appropriated for that purpose, and other states told us that only certain wireless carrier expenditures could be reimbursed. The wireless carriers we contacted said it was unlikely that all of their costs would be fully recovered, especially since cost recovery mechanisms are not available in all states. One wireless carrier told us that in some states, the E911 surcharges imposed on customers do not generate sufficient revenue to pay for both PSAP and carrier costs incurred in E911 deployment. Another wireless carrier said that some states make it so difficult for the wireless carrier to recover its costs that the carrier will not even attempt to get funds from those states. Since it is unlikely that all E911 implementation costs can be recovered through the states, several of the wireless carriers we contacted have chosen to charge their subscribers an additional monthly fee to help pay for E911 costs.

As noted earlier, the deployment of wireless E911 systems requires wireless carriers, LECs, and PSAPs to work together in distinct yet interdependent roles. However, according to some contacts we interviewed, delays sometimes occur because the various parties have difficulty coordinating their activities or working together. There was no consistency across the interviews as to which party (or parties)—wireless carriers, LECs, or PSAPs—was most hindering wireless E911 deployment. The difficulties in coordination between the parties at times caused frustration, according to some contacts we interviewed. For example, representatives from two of the PSAPs we contacted noted that just determining the number of wireless carriers providing service in their PSAP's jurisdiction can be difficult. One PSAP administrator told us that in order to get a complete list of providers before sending out his request letters for Phase I, a PSAP employee drove around the county to identify the cell tower owners and contacted them to obtain the names of the wireless carriers leasing space on the towers.
The PSAP administrator noted as well that tracking down the right contact person at the wireless carrier was difficult. In another example, representatives from several wireless carriers said that some PSAPs had requested E911 service from the wireless carriers even though the PSAPs’ call centers were not yet ready to receive caller location information because the proper equipment had not yet been installed. This might occur because some PSAPs fail to understand what is required of them technologically and what tasks they need to complete prior to requesting E911 service. Traditionally, PSAP administrators have focused on public safety and emergency response, not telecommunications. The complexity of implementing wireless E911, however, has forced PSAP administrators to become telecommunications project managers and to learn about the technology involved. We also were told that LECs have contributed to implementation delays. One PSAP representative told us that difficulties encountered with the LEC were a major obstacle to implementing wireless E911 and that the LEC delayed installing lines necessary for wireless E911 for 4 months, which greatly slowed the process. Because of continuing problems with the LEC in this location, the PSAP purchased its own call routing equipment. Similarly, another PSAP representative told us the main obstacle they faced in implementing E911 was working with the LEC. The PSAP representative noted that no one contemplated the role the LEC would play in the implementation of E911 and that this has led to problems and delays. A number of stakeholders we interviewed believed that FCC needs to be more involved with the LECs to ensure they are an active player in wireless E911 implementation. For example, an official representing a public safety association stated that FCC should closely monitor the role that the LECs play in wireless E911 implementation and should employ its oversight role to facilitate corrective action to expedite wireless E911 compliance. Several of those we interviewed in our case studies suggested that FCC take on greater enforcement of the LEC role in E911 implementation, and perhaps consider placing deadlines on LECs to respond to PSAP requests for E911 upgrades. According to FCC, the Commission does not have clear jurisdiction over wireline carriers with regard to wireless E911 implementation, and the Commission looks to the state public utility commissions, which have clear and sufficient authority to take the lead. However, FCC has indicated that it is committed to monitoring the LECs’ implementation role to ensure that they are meeting their responsibilities with regard to E911 deployment. In response to these problems with coordination, many industry representatives and affected parties we contacted noted that a strong, knowledgeable state E911 coordinator was the key to helping to coordinate the parties and successfully implement wireless E911 services within the state. Many believed that those states with strong state E911 coordinators had made the most progress with wireless E911 implementation. These state coordinators perform tasks such as educating PSAPs about their wireless E911 responsibilities, providing technical assistance to PSAPs, bringing all parties together early on to discuss implementation issues and providing a single point of contact for all the parties, and lobbying for E911 funding and protecting the funding from being used for purposes unrelated to wireless E911 implementation. 
Besides voicing support for effective state coordinators, those we interviewed provided several illustrations of actions their states were taking to facilitate wireless E911 implementation:

Several parties we spoke with mentioned that they had had a conference call or meeting early on between the wireless carrier, LEC, and PSAP to talk through the process and try to identify problems.

Kentucky requires all PSAPs to go through a certification process with the state board to ensure preparedness for both wireline and wireless E911 implementation. This certification process was created to establish an overall uniformity for the state's PSAPs. By using a checklist for upgrades and an inspection process, Kentucky expects that all of its PSAPs that go through the certification process will be Phase II operational by January 2005.

California purchases equipment at the state level to create advantages in negotiating contracts with vendors and to create economies of scale in equipment purchases.

Indiana has an elected official in charge of funding, which provides for greater visibility of the E911 issue in the state and helps protect against redirection of E911 funds to other uses.

Virginia contracts with several technical consulting firms for wireless E911 implementation. The PSAPs are allowed to use contractors from this pool and can use the wireless E911 funding they receive from the state to pay for contractors' services. This arrangement provides needed technical assistance for PSAPs while allowing greater oversight of the contractors.

During our interviews, we were told that the basic technology for accurately determining the location of a wireless caller and systematically providing that data to PSAPs has now been developed. Some noted that although occasional problems still arise due to a particular wireless carrier/LEC/PSAP equipment configuration, these problems are lessening as the parties gain experience with E911 implementation. A representative of one LEC noted that the "challenging years" of coordinating interconnection between the LEC and the wireless carrier seem to be behind them and that implementation now generally tends to proceed more smoothly. We asked the officials we interviewed what they saw as the remaining technical issues affecting wireless E911 implementation. Several parties mentioned a variety of technical problems that might slow wireless E911 implementation or affect the quality of 911 services in general. Problems that were mentioned include the following:

Because the United States never adopted a single standard for mobile phone transmissions, the different systems used by wireless carriers are not always compatible with one another, which can affect the ability of a particular subscriber to reach 911 in the first place if they do not have a phone that can be used with multiple systems.

While GPS can provide more accurate location data, concerns exist over the time it takes for location data to be calculated and delivered to the PSAP. In the context of an emergency call, even a wait of 10 or 20 seconds for the location data to be processed is considered a loss of valuable time.

For rural wireless carriers that have selected a network-based solution, cell towers often are placed in a straight line and spaced widely apart along highways or other roads. This can make the determination of location difficult because the towers cannot accurately triangulate the location of the caller.
Additionally, the handset-based solution may not be immediately available due to equipment issues. Another problem was raised by some of those we interviewed: the antiquated wireline 911 infrastructure that conveys many E911 calls from the wireless carrier to the PSAP. This issue was also raised by Dale Hatfield, former chief of FCC's Office of Engineering and Technology. In 2001, FCC asked Mr. Hatfield to conduct an inquiry into the technical and operational issues associated with wireless E911 deployment. His October 2002 report to FCC noted that the wireline 911 network is fundamentally unchanged since its inception in the 1970s and that the existing 911 infrastructure "is in no condition to accommodate the pervasive use of wireless technologies, the Internet, or the many other product offerings that invite or demand access to 9-1-1 services." Those offerings include new wireless technologies that could send E911 calls (e.g., automatic crash notification systems on cars that would also be able to send information to the 911 call taker about whether air bags have deployed or whether the car has flipped over), and the 911 services may need to be expanded to encompass such technologies. Many of those with whom we spoke believed that such new technologies should be considered now, rather than later. Some were critical of the LECs' failures to upgrade to modern digital technologies that would facilitate the rollout of wireless E911 technologies and improve 911 services. FCC released a notice of proposed rulemaking to reevaluate the scope of communications services that should provide access to 911 and has received comments and reply comments from interested parties. NENA is also trying to address the issue of new technologies and of a "future path plan" for the 911 network.

FCC and DOT have been involved in the implementation of wireless E911, but federal authority in overseeing the deployment is limited because of the traditional state and local jurisdiction over emergency response services. The primary federal agency involved in wireless E911 deployment is FCC. One of FCC's goals is to ensure the wireless carriers comply with their current implementation schedules. As noted earlier, FCC in the past had granted waivers to many of the wireless carriers in order to give them more time to resolve technical issues associated with developing wireless location technologies. Because many of these hurdles have now been overcome, FCC has stated that it will not hesitate to use its enforcement power when the wireless carriers fail to meet their current deployment timetables. For example, FCC officials noted that three wireless carriers agreed to pay nearly $4 million to the U.S. Treasury for failure to comply with intermediate deadlines in their E911 deployment timetables. Beyond enforcing deadlines on wireless carriers, FCC has taken actions to identify both roadblocks and best practices in wireless E911 implementation. For example, the Hatfield report made a number of findings regarding obstacles to wireless E911 implementation. Those findings involve wireless carrier implementation issues, cost recovery and PSAP funding issues, and the lack of comprehensive stakeholder coordination. Public comment was sought on the report in late 2002 and, according to FCC, the Commission is currently considering both the recommendations contained in the report and the comments received. FCC also conducted its first Enhanced 911 Coordination Initiative meeting in April 2003.
The meeting brought together representatives from the federal government, the public safety community, wireless carriers, LECs, and other interested stakeholders to share experiences and devise strategies for expediting wireless E911 deployment. According to FCC, lessons learned from the initiative include the following:

Strong leadership and vision are essential to ensure swift wireless E911 deployment.

State or regional points of contact are critical for prompt wireless carrier deployment.

Wireless E911 in rural areas may pose additional challenges such as financial hurdles and accuracy concerns.

Additionally, in August 2003, FCC announced the establishment of a wireless E911 technical group to focus on network architecture and technical standards issues. The group will be a subcommittee of the Commission's Network Reliability and Interoperability Council. Also in August 2003, FCC announced a wireless E911 public awareness campaign emphasizing coordination, outreach, and education. One of the first outcomes of the campaign was an FCC advisory published for consumers providing information on what people need to know about calling 911 from a mobile phone. A copy of this consumer advisory is found in appendix II of this report.

DOT also has efforts under way to promote wireless E911 implementation, focusing on implementation issues at the state and local level. DOT partnered with NENA to develop a Wireless Implementation Plan. One major aspect of this plan is the creation of a clearinghouse of wireless E911 planning, implementation, and operations resources. The clearinghouse is an attempt to gather and organize the best examples of information from various states, work groups, and ongoing development efforts. The clearinghouse also includes various forms used by parties across the nation in implementing E911 agreements. As discussed earlier, another major component of DOT's efforts is the sponsorship of a PSAP database (under contract with NENA) that tracks the current status of wireless E911 implementation across the country. DOT also convened a Wireless E911 Steering Council to develop a Priority Action Plan, released in May 2003, that outlines six priorities for wireless E911 implementation:

1. Establish support for statewide coordination of wireless E911 technology, and identify points of contact within each state for each of the stakeholders.
2. Help to convene stakeholders in appropriate 911 regions in order to facilitate more comprehensive, coordinated implementation of wireless location technologies.
3. Examine cost recovery and funding issues at the state level.
4. Initiate a knowledge transfer and outreach program to educate PSAPs, wireless carriers, and the public about wireless location issues.
5. Develop a coordinated deployment strategy encompassing both rural and urban areas.
6. Implement a "model location program" to identify and isolate potential barriers to wireless E911 deployment.

Work on implementing this plan was in its early stages at the time we concluded our review. However, DOT had subdivided each priority into a number of action items, identified lead agencies or associations for each action item, and established a time frame for completion of each action item. FCC and DOT staff told us that the agencies coordinate their wireless E911 activities to avoid duplication of effort.
An FCC representative attends DOT meetings and events on wireless E911 to stay current with the department's activities; similarly, a DOT representative attends FCC meetings and initiatives on wireless E911. DOT officials noted that their efforts have been concentrated on providing assistance at the PSAP level since FCC has authority over the wireless carriers and LECs. While the agencies do not currently jointly staff or fund any wireless E911 projects, FCC officials noted that more formalized coordination is possible in the future.

Without the readiness of all parties—wireless carriers, LECs, and PSAPs—there can be no wireless E911 service. Efforts by FCC to monitor the progress of the wireless carriers in meeting their timetables and take enforcement actions, as warranted, will continue to be an important part of the implementation process. Still, given current E911 funding and coordination problems related to upgrading PSAPs at state and local levels, the pace of wireless E911 deployment could be similar to what happened with wireline E911, which took many years to implement nationwide. If this holds true, consumers and emergency management officials will be faced with a geographic patchwork of wireless E911 areas: Some will have service; some will not. As Americans travel across the country, they will be uncertain as to whether their 911 calls will convey their location. However, successful wireless E911 deployment is possible, as illustrated in some areas of the country. States and localities can benefit from the experiences and best practices of others and adapt them to their own situations. Continued efforts by the FCC, DOT, and the public safety community to identify and publicize these successes will be a valuable means of facilitating the deployment. During this transition period, it is important to accurately measure progress in wireless E911 deployment so that federal, state, and local officials can assess whether problems are arising in parts of the country that may require additional actions. This information would also help build public awareness of where this service is available and may stimulate action at the state and local level. Measuring the progress of wireless E911 implementation against the goal of full nationwide Phase II deployment depends on being able to compare the number of PSAPs that are receiving wireless Phase II location data with the total number of PSAPs that need to be upgraded. We found, however, that there is a lack of information on the total number of PSAPs that need to be upgraded. While FCC and DOT have taken important actions to track wireless E911 deployment, additional work is needed to create reliable data on how many of the more than 6,000 PSAPs will need to be upgraded.

In order to provide the Congress and federal and state officials with an accurate assessment of the progress being made toward the goal of full deployment of wireless E911, we recommend that the Department of Transportation work with state-level E911 officials, the National Emergency Number Association, and other public safety groups to determine which public safety answering points will need to have their equipment upgraded. This information should then be reflected in the PSAP database managed by NENA under contract with DOT. This will provide the baseline needed to measure progress toward the goal of full nationwide deployment of wireless E911 service.

We provided a draft of this report to DOT and FCC for review and comment.
DOT stated that it generally agreed with our recommendation, and FCC offered some technical comments that we incorporated into the report where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days after the date of this letter. At that time, we will send copies to interested congressional committees; the Chairman, FCC; the Secretary, Department of Transportation; and other interested parties. We also will make copies available to others upon request. In addition, this report will be available at no cost on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-6670 or [email protected]. Key contacts and major contributors to this report are listed in appendix III.

To provide information on the progress made in deploying wireless E911 services throughout the country, we conducted a telephone survey of the state E911 contacts. We completed surveys for 50 states and the District of Columbia. We pretested the questions with five state contacts from states we had spoken with earlier in our research. We revised the survey as appropriate based on responses during pretesting. For each state and the District of Columbia, we began by contacting the person named on the FCC's Web site at http://www.fcc.gov/911/stateplans/contacts.html as the point of contact for that state. In 25 states, the person named on FCC's Web site did complete the survey. In the remainder of our surveys, we were directed to another person. The survey contained 17 questions about the state's progress in implementing Phase I and Phase II, problems encountered, funding mechanisms in place, and the role of the state coordinator or any state offices involved in wireless E911 implementation. The questions were open-ended and were read to the respondents. Surveys were completed between June 11 and September 12, 2003. In addition to our survey results, we used data from the National Emergency Number Association (NENA) to illustrate the progress of wireless E911 implementation as of October 2003. To assess the reliability of NENA's data regarding information on total costs to upgrade PSAPs to Phase II readiness and the number of PSAPs receiving Phase II data as of the August 1, 2003, FCC quarterly filings, we interviewed knowledgeable officials from NENA about their data collection methods and reviewed any existing documentation relating to the data sources. We determined that the data were reliable enough for the purposes of this report.

To provide information on the factors affecting wireless E911 rollouts across the country, we selected nine states (California, Idaho, Indiana, Kentucky, Maryland, Missouri, South Carolina, Texas, and Virginia) and the District of Columbia for case studies. We selected states that were spread geographically across the U.S. and that appeared to be having various levels of success with wireless E911 implementation based on early research. In particular, we selected at least one rural state and at least one state known to have redirected funds collected for E911 implementation to other uses. For each case study, we interviewed (in person or by telephone) the state coordinator, a small wireless carrier serving that state, and one urban PSAP and one rural PSAP within the state. In addition to our case studies, we interviewed representatives from four public safety associations and two wireless industry associations.
We interviewed representatives from five large national wireless carriers and received written responses to our questions from a sixth large national wireless carrier. We also interviewed representatives from six local exchange carriers and one manufacturer of mobile phones. To provide information on current federal government actions to promote the deployment of wireless E911 services, we spoke with officials at FCC and DOT about their involvement in wireless E911 implementation. We reviewed relevant orders, filings, and other materials from FCC docket number 94-102 on E911 implementation. We researched relevant materials from both FCC and DOT, such as DOT’s Priority Action Plan. We attended FCC’s daylong Enhanced 911 Coordination Initiative in April 2003. Statistics presented in the first paragraph of the report are from the Cellular Telecommunications & Internet Association, unless otherwise noted. Statistics presented in the first paragraph of the background section are from NENA. All of these statistics are presented for background purposes and were not verified by GAO. We conducted our review from January 2003 through October 2003 in accordance with generally accepted government auditing standards. Among other responsibilities, FCC’s Consumer & Governmental Affairs Bureau educates and informs consumers about telecommunications services. To this end, the Bureau has produced a number of consumer alerts and fact sheets. Among these is a new consumer advisory entitled “What You Need to Know about Calling 911 from Your Wireless Phone.” This consumer advisory is reprinted on the following pages and can be accessed at FCC’s Web site at www.fcc.gov/cgb/consumerfacts/e911.html. In addition to those named above, Michele Fejfar, Deepa Ghosh, Sally Moino, Mindi Weisenbloom, Alwynne Wilbur, and Nancy Zearfoss made key contributions to this report.
When an emergency call is placed to 911, prompt response depends on knowing the location of the caller. Enhanced 911 (E911) service automatically provides this critical information. E911 is in place in most of the country for traditional wireline telephone service, where the telephone number is linked to a street address. Expanding E911 capabilities to mobile phones is inherently more challenging because of the need to determine the caller's geographic location at the moment the call is made. Concerns have been raised about the pace of wireless E911 implementation and whether this service will be available nationwide. GAO reviewed the progress being made in implementing wireless E911 service, the factors affecting this progress, and the role of the federal government in facilitating the nationwide deployment of wireless E911 service. Implementation of wireless E911 is several years away in many states, raising the prospect of piecemeal availability of this service across the country for an indefinite number of years to come. Successful implementation depends on coordinated efforts by wireless carriers, local telephone companies, and more than 6,000 public safety answering points (PSAPs)--the facilities that receive 911 calls and dispatch assistance. According to a database sponsored by the Department of Transportation (DOT), as of October 2003, nearly 65 percent of PSAPs had Phase I wireless E911 service, which provides the approximate location of the caller, while only 18 percent had Phase II, which provides a more precise location and is the ultimate goal of wireless E911 service. Though valuable, the database does not differentiate between PSAPs that will require equipment upgrades and those that will not, thereby limiting its usefulness in accurately assessing progress toward full implementation. Looking forward, 24 state 911 contacts said in response to a GAO survey that their state will have Phase II implemented by 2005 or sooner; however, all other state contacts estimated dates beyond 2005 or were unable to estimate a date. Key factors hindering wireless E911 implementation involve funding and coordination. The wireless carriers, states, and localities must devise the means to fund more than $8 billion in estimated deployment costs over the next 5 years. Some states and localities have established funding mechanisms (such as E911 surcharges on phone bills), but others have not done so or have used their E911 funds for unrelated purposes. In addition, a lack of coordination among the wireless carriers, local telephone companies, and PSAPs can, in some cases, lead to delays in wireless E911 implementation. States with knowledgeable and involved coordinators were best able to work through these coordination issues. The Federal Communications Commission (FCC) and DOT are involved in promoting wireless E911, but their authority in overseeing its deployment is limited because PSAPs traditionally fall under state and local jurisdiction. FCC has set deadlines on the wireless carriers' E911 responsibilities and has taken actions to identify best practices and improve coordination among the parties. DOT is developing an action plan and clearinghouse for wireless E911 planning, implementation, and operations.
USPS is an independent establishment of the executive branch mandated to provide postal services to bind the nation together through the personal, educational, literary, and business correspondence of the people. Established by the Postal Reorganization Act of 1970, USPS is one of the largest organizations in the nation, with annual revenues of about $67 billion in fiscal year 2002 and a workforce of about 850,000 full-time and part-time employees. To fulfill its responsibilities, USPS has a massive infrastructure that, in fiscal year 2002, included about 300,000 collection boxes; 209,000 vehicles that transport and deliver mail; almost 38,000 post offices, post office stations, and post office branches; and about 350 mail processing facilities that sort and route mail across the country and within local areas. USPS delivered mail to the nation’s 139 million addresses, a number that grows by about 1.7 million annually. USPS carried over 40 percent of the world’s mail, and USPS’s total mail volume was nearly 203 billion pieces in fiscal year 2002. A simplified illustration of how USPS handles a single piece of personal correspondence that is mailed cross-country (referred to as “Aunt Minnie” mail) is shown in figure 1. USPS handles a wide variety of mail items ranging from correspondence, bills, and publications to payments and packages. Most mail is generated by businesses, with households generating 11 percent of domestic mail volume—primarily remittance mail and other mail sent to businesses and other organizations. Household-to-household mail, such as personal correspondence, represents only 4 percent of domestic mail volume. Postage rates vary widely, depending on the mail’s content, weight, size, destination, and how it is prepared and presented by mailers to USPS, among other things. Mail is organized into groupings called classes. The four main mail classes include (1) First-Class Mail, which includes items such as business and personal correspondence, bills, payments, and advertisements; (2) Standard Mail, which is primarily advertising mail such as catalogs, coupons, and solicitations; (3) Periodicals, which include publications such as mailed newspapers and magazines; and (4) Package Services, which is primarily packages that include merchandise as well as large quantities of printed material. The Postal Reorganization Act of 1970 shifted postage ratemaking authority from Congress to two presidentially appointed bodies: the USPS Board of Governors and the independent Postal Rate Commission (PRC). The Board of Governors is USPS’s governing body, which, among other things, sets policy, directs and controls expenditures, and participates in establishing postage rates and fees. The Board consists of 11 members: (1) 9 Governors who are appointed by the President, with the advice and consent of the Senate, to 9-year staggered terms; (2) the Postmaster General, who is appointed by the Governors; and (3) the Deputy Postmaster General, who is appointed by the Governors and the Postmaster General. By law, Governors are chosen to represent the public interest and cannot be representatives of special interests. They serve part time and may be removed only for cause. Not more than five of the nine Governors may belong to the same political party. No other qualifications or restrictions are specified in law. 
PRC is an independent establishment of the executive branch that is composed of five full-time Commissioners, who are appointed by the President, with the advice and consent of the Senate, to 6-year staggered terms. Among other things, PRC Commissioners review proposed changes to domestic postage rates and fees and appeals of USPS decisions to close post offices. By law, Commissioners shall be chosen on the basis of their professional qualifications and may be removed only for cause. Not more than three of the five Commissioners may belong to the same political party. No other qualifications or restrictions are specified in law. In addition to the five Commissioners, PRC has a staff of about 40 full-time employees. When USPS wishes to change domestic postage rates and fees, it must submit its proposed changes and supporting material to PRC, which generally must render its recommended decision within 10 months. During that time, interested parties, such as mailer groups, individual mailers, companies that provide mail-related services, USPS competitors, postal labor unions, PRC’s Office of the Consumer Advocate, and members of the public, have the opportunity to provide evidence and comments to PRC reflecting their respective concerns. PRC also generally holds public hearings before issuing its recommended decision to the Governors, who may approve, allow under protest, reject, or modify PRC’s decision. USPS has a break-even mandate. Thus, when USPS proposes changes to domestic postage rates and fees, USPS (1) projects its “revenue requirement” for the “test year” (a fiscal year representative of the period of time when the new rates will go into effect), based on the total estimated costs plus a provision for contingencies, and a provision for the recovery of prior years’ losses; and (2) proposes rates and fees that are estimated to raise sufficient revenues to meet USPS’s revenue requirement. USPS also proposes domestic postage rates and fees that are intended to fulfill the requirement in law that each class of mail or type of service must cover the direct and indirect postal costs that are attributable to that class or type of service plus a portion of its other remaining “institutional” costs, which include all “common” or “overhead” costs. USPS has raised postage rates several times in recent years. Although these rate increases have contributed to the decline in mail volume, USPS credits the rate increases with adding billions of dollars to USPS revenues. USPS now plans to keep postage rates steady until 2006, largely because recently enacted legislation has reduced USPS’s payments for its pension obligations. Although USPS’s short-term financial pressures have been alleviated, fundamental issues remain associated with USPS’s business model, which relies on mail volume growth to help finance rising costs, including the cost of universal postal service provided through an expanding delivery network. USPS has recognized that its business model is outmoded in today’s rapidly changing and increasingly competitive business environment. As growth in mail volume has stagnated or declined, USPS has increasingly relied on rate increases to generate additional revenues. Congress has debated proposals for comprehensive legislation to address postal transformation issues for the past decade, including USPS’s mission, role, business model, and regulation of postage rates. None of these proposals have been enacted to date. 
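As a rough illustration of the break-even and cost-coverage requirements just described, the following sketch works through the arithmetic with entirely hypothetical figures; the variable names and dollar amounts are ours for illustration only, not USPS or PRC data.

```python
# A minimal sketch, with entirely hypothetical figures, of the two ratemaking
# tests described above: the break-even revenue requirement and the rule that
# each class cover its attributable costs plus a share of institutional costs.

# Test-year revenue requirement (hypothetical amounts, in billions of dollars).
estimated_costs = 68.0       # total estimated test-year costs
contingency = 1.0            # provision for contingencies
prior_year_losses = 0.7      # provision for recovery of prior years' losses
revenue_requirement = estimated_costs + contingency + prior_year_losses

def class_covers_costs(revenue, attributable, institutional_share):
    """True if a class's projected revenue covers its attributable costs
    plus the portion of institutional (overhead) costs assigned to it."""
    return revenue >= attributable + institutional_share

print(revenue_requirement)                 # 69.7 under these assumptions
print(class_covers_costs(10.0, 7.5, 2.0))  # True for this hypothetical class
```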
When legislative action was not forthcoming, we and various postal stakeholders proposed that a presidential commission be formed to consider postal transformation issues and develop recommendations. In April 2001, we put USPS’s long-term financial outlook and transformation efforts on our high-risk list and recommended that USPS develop a comprehensive plan to address its financial, operational, and human capital challenges. In the fall of 2001, USPS's financial situation became even more complex and critical due to the events of September 11th and the subsequent use of the mail to transmit anthrax. These events, the economic downturn, electronic diversion of mail, and rate increases, among other things, have led to unprecedented declines in total mail volume and continuing declines in the volume of First-Class Mail. This mail class generates more than half of USPS’s revenues and covers most of its institutional costs. USPS issued its Transformation Plan in April 2002 and has begun to implement it. USPS’s actions are useful but cannot resolve the fundamental and systemic challenges associated with USPS’s current business model. These challenges threaten USPS's ability to carry out its mission of providing affordable, high-quality, universal postal services on a self-financing basis. Given these challenges, on December 11, 2002, President Bush issued an executive order that established the President’s Commission on the United States Postal Service. The executive order stated that the commission’s mission shall be to examine the state of USPS and submit a report to the President by July 31, 2003, that articulates a proposed vision for USPS, along with recommendations for the legislative and administrative reforms needed to ensure the viability of postal services. The commission examined many issues that are critical to postal transformation, including the worksharing of mail. The commission held seven public hearings during which it received testimony and statements for the record from a wide variety of stakeholders, including USPS, PRC, postal labor unions and management associations, mailers, mailer groups, companies that provide mail-related products and services, USPS competitors, subject matter experts, and others. Postal worksharing activities generally involve mailers preparing, barcoding, sorting, or transporting mail to qualify for reduced postage rates, that is, worksharing rates. Worksharing rates are based on what are commonly referred to as worksharing discounts because the rates are reduced based on the costs that USPS is estimated to avoid as a result of mailer worksharing activities. Key worksharing activities include (1) barcoding and preparing mail so it can be sorted by USPS automated equipment, which reduces manual sorting and other USPS handling of the mail; (2) presorting mail, such as by ZIP Code or specific delivery location, to reduce the number of times USPS must sort the mail to route it to the addressee; and (3) entering mail at a USPS facility that is generally closer to the final destination of the mail, which is commonly referred to as entering the mail deeper into USPS’s network used to move the mail. In addition, mailers must perform numerous other worksharing activities, such as updating and properly formatting addresses to improve their quality and accuracy, thus reducing the amount of undeliverable and forwarded mail, as well as improving USPS’s ability to use its automated equipment to sort the mail.
To qualify for worksharing rates, mailers must perform worksharing activities and meet minimum volume requirements for bulk mailings, such as mailings of at least 500 letters sent via First-Class Mail that may include credit card bills, utility bills, advertisements, and bank statements. Aside from First-Class Mail that is workshared, other workshared mail may include bulk mailings of advertisements, magazines, local newsletters, or packages. Three key worksharing activities performed by mailers are applying barcodes to mail and preparing it so that the mail can be sorted by USPS automated equipment; presorting mail, such as by ZIP Code or specific delivery location; and entering mail at a USPS facility that is generally closer to the final destination of the mail. Mailers must also perform numerous other worksharing activities. Specifically, worksharing activities include the following: Applying barcodes: USPS automation equipment relies heavily on barcodes to sort mail. Barcodes provide machine-readable ZIP Code and delivery point information. When mailers apply barcodes (see fig. 2) and prepare the mail so it is compatible with USPS automated equipment, USPS avoids applying barcodes or sorting the mail manually. Mailer-barcoded mail can go directly to USPS automated equipment for processing. Sorting mail: Mailers who sort their mail, such as by groupings of ZIP Codes, five-digit ZIP Codes, or specific delivery locations; place their mail in mail trays; and then take their mail to a USPS facility for processing save USPS money by reducing the number of times USPS has to sort the mail to route it to its final destination. Such mailer sorting is called “presorting” because it occurs before USPS receives the mail. Figure 3 illustrates an example of mail sorted by five-digit ZIP Codes. Destination entry of mail: Mailers can prepare and transport some mail, such as advertisements, periodicals, and packages, from where the mail is generated to USPS facilities that generally are closer to where the mail will be delivered. Destination entry mail also must meet other worksharing requirements, such as being presorted to qualify for a lower “destination entry” rate that is discounted from the rate for mail that is not destination-entered. When destination entry mail meets the worksharing requirements, it is generally expected to (1) bypass the originating USPS mail processing facilities that initially receive and organize mail according to areas where it will be delivered; and (2) be transported by the mailers to USPS’s facilities that generally are closer to the final destination of the mail, including USPS’s mail processing and delivery unit facilities where carriers pick up their mail for delivery (e.g., post offices). When destination entry mail is transported by mailers to USPS mail processing facilities, USPS processes the mail, such as sorting the mail, and then transports the mail to a destination delivery unit for delivery. In addition, mailers can receive even lower destination entry rates when they transport destination entry mail to USPS delivery unit facilities. For this mail, USPS is generally expected to avoid handling it at its mail processing facilities and then transporting it to its delivery unit facilities. Figure 4 provides a simplified illustration of how USPS handles bulk quantities of destination-entered mail sent from Philadelphia to Los Angeles, compared with how USPS handles a single letter sent by an individual (“Aunt Minnie”) via First-Class Mail.
Worksharing rates are based on what are commonly referred to as “worksharing discounts.” For example, for First-Class Mail, the worksharing discounts for workshared mail refer to the difference between the rates for single-piece First-Class Mail weighing up to 1 ounce and the corresponding rates applicable to workshared mail. First-Class Mail discounts vary depending on the worksharing activities that are performed and the degree of presorting, among other things. Mailers can barcode and presort bulk mail in exchange for lower worksharing rates when they meet minimum volume requirements for mail sent to specific areas or locations, which reduces the number of times that USPS sorts the mail to route it to these areas or locations. Consider the example of letters weighing up to 1 ounce sent via First-Class Mail that are workshared so that they will be compatible with USPS automation equipment. The mailer worksharing activities performed for these letters include barcoding and presorting, among other things. In this example, which could apply to credit card bills and utility bills, the workshared mail can qualify for different discounts and postage rates depending on the extent of worksharing activities that are performed. Specifically, depending on the degree of presorting of this barcoded mail, it can qualify for varying worksharing discounts, such as discounts of either 7.8 cents, 9.2 cents, or 9.5 cents per piece from the single-piece rate of 37 cents (see table 1). Mailers must fulfill numerous requirements in addition to barcoding and/or presorting to qualify for worksharing rates that apply to automation-compatible mail. These requirements are intended to reduce USPS’s costs of handling mail and can include (1) updating addresses that are intended to reduce the amount of mail that USPS must forward or return to the sender; (2) limiting the maximum weight of each mail piece so workshared mail can be sorted by USPS automated equipment; (3) printing barcodes according to USPS specifications so the barcodes can be read by USPS automated equipment; and (4) packaging mail, placing mail in trays, labeling trays, and performing other activities to enhance USPS’s ability to efficiently handle the mail. Highlights of worksharing requirements for letters sent via First-Class Mail that qualify for automation-compatible discounts are shown in table 2. Although all mailers of bulk mail can receive worksharing rates when they meet the worksharing requirements, in practice, for-profit businesses generate most workshared mail. For-profit businesses send about three-quarters of domestic mail and frequently send enough large-volume mailings that meet the minimum volume requirements to qualify for worksharing rates. In addition, postage costs can represent a significant cost of doing business, providing an incentive for mailers to qualify for the lowest possible worksharing rates. Businesses typically use worksharing to send bulk mailings, including such mail as bills, statements, periodicals, newsletters, advertisements, and packages. Nonprofit entities such as charitable organizations and associations also generate substantial quantities of workshared mail, such as mailings to raise funds, solicit members, and disseminate information. In fiscal year 2002, nearly three-quarters of domestic mail received worksharing rates (see fig. 5).
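The following sketch illustrates the discount arithmetic cited above for automation-rate First-Class letters. The 37-cent single-piece rate and the 7.8-, 9.2-, and 9.5-cent discounts come from the text; the tier labels and the 500-piece mailing are illustrative shorthand, not official rate-category names.

```python
# A small sketch of the discount arithmetic for automation-rate First-Class
# letters (37-cent single-piece rate; 7.8-, 9.2-, or 9.5-cent discounts).
# The tier labels below are informal shorthand, not official rate names.

SINGLE_PIECE_RATE = 0.37  # dollars per letter, up to 1 ounce

AUTOMATION_DISCOUNTS = {
    "least presorted": 0.078,
    "more finely presorted": 0.092,
    "most finely presorted": 0.095,
}

def bulk_postage(pieces, tier):
    """Total postage for a qualifying automation-rate mailing at a given tier."""
    return pieces * (SINGLE_PIECE_RATE - AUTOMATION_DISCOUNTS[tier])

# A hypothetical 500-piece mailing (the bulk First-Class minimum noted above):
for tier, discount in AUTOMATION_DISCOUNTS.items():
    saved = 500 * discount
    print(f"{tier}: postage ${bulk_postage(500, tier):.2f}, saving ${saved:.2f}")
# Savings range from $39.00 to $47.50 versus single-piece postage on 500 letters.
```

At these rates, the added incentive to presort more finely is the spread between the tiers, about 1.7 cents per piece between the smallest and largest discounts.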
Most domestic workshared mail is either (1) First-Class Mail, primarily business correspondence, bills, advertisements, and financial statements; or (2) Standard Mail, primarily advertisements, such as catalogs, coupons, flyers, and solicitations (see fig. 6). By way of comparison, most non-workshared mail consists of letters weighing up to 1 ounce sent via First-Class Mail with 37-cent stamps. This mail includes such mail as remittance mail (e.g., checks sent through the mail to pay bills); a variety of business mail (e.g., individual invoices and other business correspondence); and personal correspondence. USPS and PRC have said that worksharing benefits USPS, mailers and the mailing industry, and the nation. First, they credit worksharing with benefiting USPS, in part because it enables USPS to improve its operations and thereby helps minimize its workforce and infrastructure. In addition, they said worksharing benefits USPS because it stimulates mail volume growth, which helps USPS achieve economies of scale. Historically, mail volume growth has been critical to USPS’s business model, which depends on mail volume growth to generate more revenues to help cover rising USPS costs. Second, they credit worksharing with benefiting mailers and the mailing industry. With respect to mailers, USPS and PRC credit worksharing with reducing the total mail-related costs for mailers who workshare; helping to keep postage rates affordable for all mailers; and improving the quality of delivery service. Regarding the mailing industry, USPS and PRC credit worksharing with spurring the development of the direct mail industry, as well as that of other mail-related companies that perform worksharing activities, enabling more mailers to participate in worksharing. Third, they credit worksharing with benefiting the nation, in part by lowering business costs, and in part by creating associated benefits that consumers can realize. They said consumers benefit if worksharing helps keep postage rates affordable; if mailers pass along lower prices when their mail-related costs are reduced by worksharing; if their workshared mail is delivered in a more expeditious and reliable manner; and if the mail volume growth caused by worksharing results in more mail that consumers consider useful, such as business correspondence or catalogs. While stakeholders generally support the concept of worksharing, they have raised differing concerns in this area. For example, APWU has asserted that the worksharing discounts are too large and thus worsen USPS’s financial situation. In contrast, some mailers and members of the mailing industry have asserted that the discounts are not large enough and thus improve USPS’s financial situation. Integral to stakeholder differences are divergent views on technical issues relating to the data, assumptions, and analyses used in rate cases to develop the estimates of the costs that USPS is to avoid incurring in the test year as a result of mailer worksharing activities. In this regard, stakeholders have raised issues regarding (1) the quality and accuracy of the estimates of cost avoidance; (2) the extent to which USPS has avoided costs as a result of worksharing activities performed by mailers; and (3) whether data can be generated on what costs USPS has avoided as a result of mailer worksharing activities.
We recognize that stakeholders have raised detailed concerns about worksharing relating to technical and policy issues that are beyond the scope of this report. Among other things, we plan to address other stakeholder views on worksharing in our second report. According to USPS and PRC, worksharing benefits USPS by enabling it to improve its operations and help minimize its workforce and infrastructure, and by stimulating mail volume growth. Historically, mail volume growth has been critical to USPS’s business model, which depends on mail volume growth to generate more revenues, which helps cover rising USPS costs and also helps USPS achieve economies of scale. USPS has noted that worksharing improves its financial situation, in part by stimulating mail volume growth, and in part by enabling USPS to operate more efficiently, thereby helping USPS control its costs. USPS has reported that in response to worksharing discounts, mailers performed worksharing activities that reduced USPS’s costs. In addition, USPS reported that worksharing requirements for automation-compatible mail, such as requirements in the areas of address quality and mail preparation, have enabled USPS to make more effective use of its automated equipment, thereby reducing USPS’s costs and improving service times. Further, USPS reported that well-prepared and easy-to-process workshared mail has furthered the cost-effective deployment of additional automated equipment. Specifically, USPS reported that mailer barcoding and presorting of mail help USPS maximize the use of its automated equipment that sorts up to 34,650 letters per hour, avoiding less efficient manual sorting. Also, some workshared mail is presented in mail trays on pallets that can be moved by forklifts, avoiding the need for USPS employees to separately handle each mail tray on the loading dock. USPS has estimated that the worksharing activities performed by mailers, such as barcoding and presorting, will reduce its costs of handling workshared letters that are compatible with its automation equipment and are sent via First-Class Mail. USPS refers to its estimated cost reduction from worksharing activities as “avoided costs.” These avoided costs (see fig. 7) were estimated to result from the reduction in USPS’s costs associated with: manually sorting mail (38 percent of these avoided costs); USPS’s allied labor activities (22 percent), which are activities performed by USPS employees who prepare mail for processing or dispatch, either on the loading dock or inside the mail processing facility; USPS automated operations (20 percent), such as reduced USPS automated sorting of presorted mail; and applying barcodes and performing associated operations on the mail (15 percent). To put the potential for worksharing-related cost savings into context, USPS has reported that if it can change the processing of letters or flat-sized mail (e.g., large envelopes, catalogs, and magazines) from manual processing to automated processing, “there are tremendous savings opportunities.” According to USPS, “while only about 8 percent of the letter mail we receive each day is processed manually, it accounts for one-half of letter mail processing labor costs.” USPS has also estimated that the “labor processing cost” associated with manually handling letters was about $56 per thousand letters, which was about 11 times more costly than for automated processing.
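A back-of-the-envelope sketch shows how quickly these per-thousand figures add up. The $56 manual cost and the roughly 11-to-1 cost ratio are taken from the text; the annual letter volume used below is a hypothetical illustration, not USPS data.

```python
# A back-of-the-envelope sketch of the manual-versus-automated cost comparison
# cited above. The per-thousand manual cost ($56) and the roughly 11-to-1 cost
# ratio come from the text; the letter volume is a hypothetical illustration.

manual_cost_per_thousand = 56.0
automated_cost_per_thousand = manual_cost_per_thousand / 11  # roughly $5

def savings_from_shift(total_letters, share_shifted):
    """Estimated annual savings when a share of letters moves from manual
    to automated sorting."""
    shifted_thousands = total_letters * share_shifted / 1000
    return shifted_thousands * (manual_cost_per_thousand - automated_cost_per_thousand)

# Hypothetically, shifting 1 percent of 100 billion letters a year from manual
# to automated sorting would save on the order of $50 million annually.
print(round(savings_from_shift(100_000_000_000, 0.01)))  # about 50.9 million dollars
```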
Thus, even a 1 percent reduction in the percentage of mail that USPS processes manually can result in significant savings. According to USPS and PRC, worksharing is credited with stimulating mail volume growth over the past three decades, which has helped USPS cover rising costs and achieve economies of scale. USPS has reported that worksharing has been “a primary source of growth” for mail volume, and a PRC staff analysis concluded that mail volume growth was caused by the successive introduction of worksharing rates for different groupings of mail and for different worksharing activities (e.g., mailers barcoding and presorting mail). Over the past three decades, workshared mail has accounted for all of the growth in domestic mail volume. As we have reported, USPS’s business model relies on growth in mail volume to generate revenues to help cover rising costs. Thus, since the Postal Reorganization Act of 1970 was enacted, USPS’s business model has relied on growth in workshared mail volume. The volume of workshared mail increased 365 percent from fiscal years 1972 through 2002, while the volume of non-workshared mail declined 3 percent over the same period (see fig. 8). However, as figure 8 shows, the volume of workshared mail declined in fiscal years 2001 and 2002, a period when USPS incurred growing financial difficulties that included deficits of $1.7 billion and $676 million, respectively; a freeze on most capital investments in USPS facilities; and rising USPS debt. Looking back over USPS’s history, when mail volume has grown, USPS could realize greater economies of scale, because the additional worksharing mail revenues exceeded the marginal costs of delivering the additional volumes of workshared mail. In fiscal year 2002, USPS employed a letter carrier workforce of about 351,000 full-time and part-time employees who serviced a delivery network of 139 million addresses that operated 6 days each week. USPS’s delivery network has considerable fixed costs. For this reason, USPS can become more efficient when the volume of workshared mail increases and USPS realizes the associated economies of scale. Per-piece delivery costs can go down as USPS letter carriers deliver more mail to each address. For example, USPS can deliver mail less expensively, per piece, if a USPS letter carrier delivers a full bag of mail that includes the additional workshared mail volumes rather than a bag of mail that would be partially full if the additional workshared mail volumes were not included. A key reason that worksharing contributed to mail volume growth is that mail volume has been sensitive to mailing costs. When worksharing reduced mailing costs, mailers expanded their use of the mail, such as by sending more catalogs and other advertisements to potential customers. Thus, worksharing helped mail compete with other communication and delivery alternatives. For example, some advertisements can be delivered either as mail or as newspaper inserts, or they can be delivered via other media. Also, packages can be delivered by USPS or private delivery companies such as United Parcel Service or FedEx. The introduction of worksharing rates for First-Class Mail, Standard Mail, and Parcel Post reportedly stimulated growth in their mail volumes. Statistical studies have shown that worksharing discounts resulted in volume growth for these types of mail, in part because price increases were kept smaller than they otherwise would have been. 
For example, First-Class Mail volume growth increased after the introduction of presorting and barcoding discounts. Further, Standard Mail growth accelerated after the successive introduction of various presorting, barcoding, and destination entry discounts. Standard Mail worksharing rates were the “catalyst for increasing volumes,” according to the PRC staff analysis. Similarly, the introduction of destination entry rates for workshared Parcel Post mail in fiscal year 1991 reinvigorated mail volume growth for Parcel Post. Specifically, Parcel Post volume, which had declined from 498 million pieces in fiscal year 1972 to 128 million pieces in fiscal year 1990, increased to 373 million pieces in fiscal year 2002. Parcel Post became “a much more competitive product,” according to the PRC staff analysis. The growth in Parcel Post volume generated additional USPS revenue, as well as additional contribution to USPS’s institutional costs, even after taking into account USPS’s costs associated with the additional Parcel Post volume. After the introduction of destination entry discounts for Parcel Post, companies called consolidators emerged to collect Parcel Post mail from multiple mailers, sort their mail, and transport it to USPS’s destinating facilities. By combining mail from multiple mailers into larger mailings, these consolidators can qualify the mail for lower worksharing rates. Most Parcel Post items are being entered at destinating mail processing facilities, thus reducing “upstream” USPS handling of the parcels at USPS’s originating mail processing facilities. This has enabled what is often referred to as a partnership between USPS and the private sector to provide the complementary set of activities needed to prepare, barcode, sort, transport, and deliver Parcel Post mail. According to USPS and PRC, the growth in workshared mail volume over the years has generated additional postage revenue to help cover rising USPS costs. These costs include the attributable costs for USPS to process and deliver the mail—that is, the direct and indirect costs that can be attributed to particular groupings of mail—as well as institutional costs, which are costs that are not attributed and are also referred to as common or overhead costs. Institutional costs include fixed costs associated with maintaining a national network of post offices and 6-day delivery of mail and common costs, which are not identified with individual classes of mail. Institutional costs represent more than one-third of all USPS costs and, like attributable costs, have increased over time as the compensation and benefits of USPS employees have increased and other costs have risen, including the costs of financing universally available postal services through an expanding delivery network. As workshared mail volume has grown, it has accounted for a growing share of domestic mail revenues. In fiscal year 2002, workshared mail accounted for 52 percent of USPS domestic mail revenues (see fig. 9). Further, as workshared mail revenues have grown, these revenues have accounted for an increasing proportion of the domestic mail revenues that exceed the attributable costs of domestic mail and thus are applied to help cover USPS institutional costs. In fiscal year 2002, workshared mail accounted for 58 percent of domestic mail revenues that USPS applied to help cover its institutional costs (see fig. 10).
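The contribution accounting described above can be summarized in a short sketch. The per-piece figures below are hypothetical and are used only to show how revenue above a piece's attributable cost becomes contribution to institutional costs.

```python
# A minimal sketch of the "contribution to institutional costs" accounting
# described above: revenue left after a piece's attributable costs helps cover
# USPS's institutional (common or overhead) costs. Figures are hypothetical.

def contribution(revenue, attributable_cost):
    """Revenue remaining, after attributable costs, to apply to institutional costs."""
    return revenue - attributable_cost

# Hypothetical per-piece figures, in cents, for a workshared letter priced at a
# discounted automation rate versus an assumed attributable handling cost.
revenue_per_piece = 29.2
attributable_per_piece = 16.0
print(round(contribution(revenue_per_piece, attributable_per_piece), 1))  # 13.2 cents toward overhead
```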
First-Class Mail is a particularly important category with respect to USPS institutional costs because it has historically covered most of these costs. Workshared First-Class Mail accounts for a growing proportion of all First-Class Mail volume; revenues; and revenues applied to help cover institutional costs, also referred to as the “contribution to institutional costs.” By fiscal year 2002, workshared First-Class Mail represented 50 percent of First-Class Mail volume, 39 percent of First-Class Mail revenues, and 52 percent of First-Class Mail contribution to institutional costs. Workshared First-Class Mail was slightly more profitable, per piece, than non-workshared First-Class Mail. In fiscal year 2002, USPS data compiled according to PRC methodology showed that the average piece of workshared First-Class Mail accounted for slightly more institutional contribution per piece than non-workshared First-Class Mail (see fig. 11). The workshared mail was also less costly per piece for USPS to handle than non-workshared First-Class Mail. For example, some non-workshared mail had handwritten addresses, a portion of which could not be barcoded, necessitating costly manual sorting by USPS employees instead of sorting by USPS automated equipment. Other non-workshared mail had typewritten addresses but could not be sorted by USPS automated equipment for a variety of reasons. For example, typewritten mail cannot be barcoded in some cases if the address is incomplete, such as missing the street directional (e.g., North, South), or a street suffix (e.g., St, Rd, Dr). In total, workshared First-Class Mail accounted for $9.0 billion in contribution to cover institutional costs in fiscal year 2002, compared with $8.4 billion for non-workshared First-Class Mail. Aside from First-Class Mail, Standard Mail—virtually all of which has been workshared—accounted for most domestic mail revenues and most of the contribution to institutional costs. In fiscal year 2002, Standard Mail accounted for $5.1 billion in contribution to institutional costs. According to USPS and PRC, worksharing is also credited with having important effects on USPS’s infrastructure and workforce. USPS and PRC officials have noted that USPS requirements for the preparation of workshared mail furthered USPS investments in automated equipment to handle workshared mail efficiently, which meant that the combination of worksharing and automation helped USPS handle mail in a more efficient manner. For example, increased worksharing incentives were introduced for mailers to barcode letter mail and perform other activities to make it automation compatible when USPS was making major investments in automated equipment that sorts mail by reading barcodes. These worksharing incentives led to a sharp increase in the proportion of automation-compatible letter mail with mailer-applied barcodes, which is considered to have reduced the proportion of mail that USPS employees manually sort. According to USPS, worksharing has significantly reduced USPS compensation costs and the size of the USPS workforce needed to process and handle mail. A 2001 PRC staff study stated that USPS would have required a much larger workforce than it currently has if USPS had to perform all of the worksharing tasks performed by the private sector. The study concluded that worksharing has reduced USPS’s size and likely made USPS more efficient and less difficult to manage.
Looking ahead, USPS plans to expand automated sorting of flat-sized mail, such as large envelopes, catalogs, and magazines, which is intended to reduce the need for USPS employees to sort this mail manually and help USPS reduce the cost of sorting flat-sized mail. If new USPS automated equipment is deployed, USPS would be expected to propose modified worksharing requirements for flat-sized mail so that it will be compatible with the new automation equipment. USPS and PRC credit worksharing with benefiting mailers by reducing their total mail-related costs—that is, the cost to the mailer to generate mail pieces and pay the postage costs. The underlying rationale is as follows: When mailers obtain lower worksharing rates, their postage costs are reduced. Mailers’ postage savings are partly offset by their costs of performing worksharing activities. However, mailers have an economic incentive to perform worksharing activities when they realize a net savings—that is, the difference between their reduced postage costs and their increased costs associated with performing worksharing activities. In addition to economic incentives, worksharing is credited by USPS and PRC with helping keep postage rates affordable for all mailers. By stimulating mail volume growth, worksharing has increased the volume of mail that generates revenues that exceed attributable costs and thus helps cover USPS’s institutional costs. Further, according to USPS, worksharing has improved the implementation of its automation program and thereby improved mail processing and handling generally. Specifically, USPS stated that because worksharing of bulk mail facilitated the use and further installation of automation equipment, it reduced USPS’s costs and kept rate increases to a minimum for all mailers, including individuals mailing single pieces of mail like the proverbial Aunt Minnie. Similarly, USPS has reported that worksharing has improved the speed of delivery by helping facilitate the implementation of USPS’s automation program and handling of mail generally. In some cases, mailers reportedly perform worksharing primarily to improve the speed of delivery, such as performing destination entry for periodicals. Other mailers reportedly perform destination entry of packages to improve their speed of delivery and narrow the window when delivery will occur. According to USPS and PRC, worksharing rates were the catalyst for the development of a $900 billion mailing industry that includes USPS; providers of mailing services that do worksharing tasks for mailers; and companies that depend on the mail for service fulfillment, customer acquisition, or customer retention, such as catalog companies, printers, and magazine publishers. Worksharing has enabled the mailing industry to perform tasks that USPS once performed exclusively, particularly in the areas of mail preparation, presorting, and transportation. The mailing industry, including USPS, employed nearly 9 million workers in 2001. Some of the companies that provide mail services are known as consolidators because they combine letter mail, flat-sized mail, or parcels from many mailers in order to achieve sufficient mail volumes to qualify for the lowest possible worksharing rates. According to USPS and PRC, worksharing benefits the nation, in part by lowering business costs, and in part by creating associated benefits that consumers can realize. 
USPS and PRC have concluded that total mail-related costs to the economy—including costs to mailers and to USPS—are reduced by worksharing. Their rationale is that some postal activities can be performed less expensively by mailers who workshare than by USPS, which lowers the total costs of mail. For example: Many worksharing mailers can organize mail by ZIP Code more inexpensively than USPS. Mailers can prepare workshared mail by using their computers to presort their mailing lists in ZIP Code order and then sequentially printing the addresses on each letter. Many worksharing mailers can use computers to barcode letters and print the barcodes on the letters. In comparison, when USPS processes non-barcoded mail, its automated equipment attempts to read the address and print a barcode. When these attempts are unsuccessful, USPS employees become involved in attempting to read the address and apply a barcode, and if a barcode cannot be applied, the mail is manually sorted. Worksharing rates are designed to create incentives for the lowest-cost provider to perform certain postal activities, which can be either the mailer performing worksharing activities or USPS performing additional activities when mailers do not workshare. The USPS and PRC rationale is as follows: When postage rates are set, estimates are prepared of the costs that USPS is to avoid incurring as a result of the mailers’ worksharing activities. PRC has a guideline for recommending worksharing discounts so that, as a result, the estimated reduction in USPS revenues will equal the estimated reduction in USPS costs. This outcome is often referred to as “100 percent passthrough” of the expected USPS savings to the mailer. That is, the full amount of whatever USPS is expected to save will be passed along to the mailer via the worksharing discount. Worksharing discounts with 100 percent passthrough create an incentive for the lowest-cost provider to do the work. This is because mailers have an incentive to workshare when they save money—which happens in this case when the full amount of whatever USPS is expected to save will be passed along to the mailers, and will be enough to fully offset the mailers’ cost of performing the worksharing activities. Worksharing discounts with less than 100 percent passthrough can still create an incentive for the lowest-cost provider to do the work. This is because some mailers would still have an incentive to save money by worksharing. In this case, the portion of the USPS savings passed along to the mailers would still be enough to fully offset some mailers’ worksharing costs. However, some lowest-cost providers may not have an incentive to workshare because the portion of expected USPS savings passed along to mailers would not be sufficient to fully offset the mailers’ worksharing costs. Worksharing discounts with greater than 100 percent passthrough can create incentives for some highest-cost providers to do the work. In this case, some mailers could be the highest-cost providers that have worksharing costs covered only because USPS passed along more than its expected savings. Moreover, mailers who are lowest-cost providers would also have an incentive to workshare. When the lowest-cost provider performs postal activities, the total cost of mail is reduced. This can reduce the cost of doing business.
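The passthrough cases just described can be restated as a simple decision rule: a mailer workshares when the discount at least covers its own cost of doing the work. The sketch below uses hypothetical per-piece figures to show how 100 percent, lower, and higher passthroughs change who participates.

```python
# A simplified sketch of the passthrough incentives described above.
# All per-piece figures are hypothetical and expressed in cents.

def discount(avoided_cost, passthrough):
    """Worksharing discount set as a share of USPS's estimated avoided cost."""
    return avoided_cost * passthrough

def will_workshare(discount_cents, mailer_cost):
    """A mailer has an incentive to workshare when the discount covers its own cost."""
    return discount_cents >= mailer_cost

avoided_cost = 8.0  # cents USPS is estimated to avoid per workshared piece

for passthrough in (1.00, 0.75, 1.25):
    d = discount(avoided_cost, passthrough)
    low_cost_mailer = will_workshare(d, 6.0)   # mailer who can workshare for 6 cents
    high_cost_mailer = will_workshare(d, 9.0)  # mailer who needs 9 cents to workshare
    print(passthrough, d, low_cost_mailer, high_cost_mailer)
# At 100 percent passthrough (an 8-cent discount), only the mailer whose cost is
# below USPS's avoided cost participates; at 125 percent (a 10-cent discount),
# the 9-cent mailer also participates even though USPS could do that work more cheaply.
```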
The economy benefits when the cost of doing business is reduced, whether that entails the cost of sending out bills for merchandise and services rendered or sending out advertisements to generate business. In other words, according to PRC staff, reducing the cost of doing business “increases the economic welfare of the nation.” Consumers may benefit in several ways from USPS’s worksharing program. First, consumers benefit if, as previously discussed, worksharing helps keep postage rates affordable for all mailers. Second, consumers benefit to the extent that lower mail costs are passed along in the form of lower prices for merchandise and services. Third, consumers benefit if their workshared mail is delivered in a more expeditious and reliable manner, as previously discussed. Fourth, consumers benefit if lower mail costs result in more workshared mail, to the extent that this increased mail volume contains information that is useful to the consumer. For example, additional workshared mail could include business correspondence, periodicals, and newsletters that some consumers find useful; catalogs that some consumers respond to; or workshared packages that USPS delivers to consumers at their request. A broad array of postal stakeholders generally express support for the concept of worksharing—that is, they express support for the concept that mailers should receive worksharing discounts in exchange for performing worksharing activities that lower USPS’s costs. However, stakeholders have raised differing concerns in the worksharing area. APWU has generally criticized the worksharing program, while some members of the mailing industry have made diametrically opposing criticisms. For example, APWU has asserted that worksharing discounts are too large, but some members of the mailing industry have asserted that worksharing discounts are not large enough (see table 3). APWU believes that worksharing has eroded USPS’s financial position, thus threatening its ability to support universal postal service. However, some members of the mailing industry, USPS, and PRC disagree with APWU’s assertions. Integral to stakeholder differences are divergent views regarding technical issues relating to the data, assumptions, and analyses used in rate cases to estimate the costs that USPS is to avoid incurring in the test year as a result of mailer worksharing activities. Such cost avoidance estimates affect the size of worksharing discounts that are established for reasons previously described in this report. Further, stakeholders have raised issues regarding (1) the quality and accuracy of the estimates of cost avoidance; (2) the extent to which USPS has avoided costs as a result of worksharing activities performed by mailers; and (3) whether data can be generated on what costs USPS has avoided as a result of mailer worksharing activities. The primary legal basis for worksharing rates is derived from one of the nine factors cited in the Postal Reorganization Act of 1970 that PRC must consider in recommending changes to domestic postage rates proposed by USPS. 
Specifically, the act requires that, in recommending changes to postage rates, PRC consider nine factors, including “the degree of preparation of mail for delivery into the postal system performed by the mailer and its effect upon reducing costs to USPS.” The nine factors that PRC must consider when recommending domestic postage rates and fees are as follows: the establishment and maintenance of a fair and equitable schedule; the value of mail service actually provided each class of mail or type of mail service to both the sender and the recipient, including, but not limited to, the collection, mode of transportation, and priority of delivery; the requirement that each class of mail or type of mail service bear the direct and indirect postal costs attributable to that class or type plus that portion of all other costs of USPS reasonably assignable to such class or type; the effect of rate increases upon the general public, business mail users, and enterprises in the private sector of the economy engaged in the delivery of mail matter other than letters; the available alternative means of sending and receiving letters and other mail matter at reasonable costs; the degree of preparation of mail for delivery into the postal system performed by the mailer and its effect upon reducing costs to USPS; simplicity of structure for the entire schedule and simple, identifiable relationships between the rates or fees charged the various classes of mail for postal services; the educational, cultural, scientific, and informational value to the recipient of mail matter; and such other factors as PRC deems appropriate. By way of background, presorting of Standard Mail and periodicals by ZIP Code was required before the 1970 act reorganized the U.S. Post Office Department into USPS. In the first rate case under the 1970 act, PRC cited this presorting requirement, and the statutory factor regarding the degree of mail preparation and its effect on reducing USPS’s costs (39 U.S.C. 3622(b)(6)), in its 1972 recommended decision on postage rates, as one of several reasons for recommending lower rates for Standard Mail and Periodicals. In 1976, a unanimous settlement was reached in a reclassification case that recognized the first specific worksharing discount—a 1-cent discount for presorting First-Class Mail by ZIP Code. The implementation of the first worksharing discount for presorted First-Class Mail marked the inception of USPS's worksharing program as it is known today. Subsequent rate cases have expanded worksharing rates to cover most types of mail (see table 4). In addition to the nine factors listed previously, the law specifies that PRC is required to make a recommended decision on domestic postage rates and fees in accordance with the policies of Title 39 of the U.S. Code, which defines policies for USPS. When considering the relevance of Title 39 policies to PRC recommendations on worksharing rates, it is important to keep in mind that these rates represent most of USPS’s entire rate structure and generate 74 percent of its domestic mail volume and 52 percent of its domestic mail revenues. Key Title 39 policies include the following: USPS shall have as its basic function the obligation to provide postal services to bind the nation together through the personal, educational, literary, and business correspondence of the people. To this end, USPS shall provide prompt, reliable, and efficient postal services to patrons in all areas. 
USPS shall plan, develop, promote, and provide adequate and efficient postal services at fair and reasonable rates and fees. To this end, USPS has the responsibility to maintain an efficient national system of collecting, sorting, and delivering the mail, and to provide types of mail service to meet the needs of different groupings of mail and mail users. However, USPS shall not, except as specifically authorized in Title 39, make any undue or unreasonable discrimination among users of the mails, nor shall it grant any undue or unreasonable preferences to any such user. Postage rates and fees shall be reasonable and equitable and sufficient to enable USPS under honest, efficient, and economical management to maintain and continue the development of postal services of the kind and quality adapted to the needs of the United States. To this end, postage rates and fees shall provide sufficient revenues so that the total estimated income and appropriations to USPS will equal as nearly as practicable total estimated costs of USPS. USPS shall promote modern and efficient operations. USPS should refrain from expending any funds, engaging in any practice, or entering into any agreement or contract (other than an employee-management agreement or contract between USPS and a labor union representing postal employees) that restricts the use of new equipment or devices that may reduce the cost or improve the quality of postal services, except where such restriction is necessary to ensure safe and healthful employment conditions. Worksharing rates and classifications are implemented through federal regulations issued and updated by PRC and USPS. After each rate and classification case is completed, PRC updates the Domestic Mail Classification Schedule to be consistent with the outcome of the case. This schedule is incorporated into the Code of Federal Regulations and lists the terms and conditions for domestic mail classes, subclasses, and rate categories as well as for domestic special services, such as post office boxes, registered mail, and certified mail. Also, after each rate and classification case, USPS updates its Domestic Mail Manual, which is also incorporated into the Code of Federal Regulations, to include the worksharing rates for each specific type of workshared mail as well as the corresponding worksharing requirements. Worksharing rates have been considered in successive postal rate cases—proceedings in which PRC considers USPS proposals for changing postage rates—dating back to the 1970s. These proceedings have established precedents that have further clarified the legal basis for worksharing rates. Over the years, the structure of worksharing rates has evolved. For example, in a 1995 reclassification case, USPS proposed and PRC recommended numerous changes to workshared rates that were intended to provide greater incentives for mailers to barcode their workshared mail, among other things. In addition, PRC recommended some changes to the structure of workshared mail classifications, such as adding a new subclass to Standard Mail called the Enhanced Carrier Route subclass. This subclass was distinguished from other types of Standard Mail in that minimum volume requirements apply for each carrier route as well as requirements for mail preparation, barcoding, and presorting, among other things. Enhanced Carrier Route mail receives lower rates in part because of the estimated cost savings to USPS from worksharing.
On a related matter, there has been long-standing and continuing debate over whether certain types of postage rates can be offered within existing law and, if so, under what circumstances. Recent debate has focused on rate arrangements with reduced rates agreed to by USPS and individual mailers that were intended to enable USPS to reduce its costs. In February 2002, PRC reported to Congress that rate and service adjustments agreed upon by USPS and individual mailers would be legally authorized if certain conditions are met, notably that the proposed agreement be submitted to PRC for prior review, be made available to other mailers willing to meet the same terms of service, and work to the mutual benefit of mail users and the postal system as a whole. PRC noted that USPS had proposed and PRC had subsequently recommended some “niche classifications,” which were specialized classifications that included reduced, but cost-justified, rates or fees. Niche classifications make lower rates available to all mailers when they perform the required activities and meet other requirements of the niche classification. However, as a practical matter, these requirements may be tailored in a way that means few mailers would generate mail that would qualify for inclusion in the niche classification.

Recently, USPS proposed and PRC subsequently recommended a negotiated service agreement (NSA) between USPS and Capital One Services, Inc., the nation’s largest-volume mailer of First-Class Mail, on a 3-year contractual basis. According to PRC, “negotiated service agreements are targeted pricing initiatives designed to encourage greater efficiencies and to take advantage of the Postal Service's existing pricing flexibility.” USPS noted that “NSAs, generally, specify mutual agreements between the Postal Service and customers involving the preparation, presentation, acceptance, processing, transportation and delivery of mailings under particular rate, classification and service conditions and restrictions that go beyond those required of other mailers.” This was the first time that USPS proposed and PRC recommended an NSA covering domestic mail. USPS hopes to reinvigorate mail volume growth through this and other yet-to-be-proposed NSAs and also to reduce its costs through NSA requirements applying to qualifying mailers.

The Capital One NSA, which USPS’s Governors approved in June 2003, specifies that Capital One is to receive lower rates for bulk First-Class Mail exceeding 1.225 billion pieces of mail annually in each of the next 3 years, with rate discounts increasing from 3 to 6 cents as volumes increase above the annual threshold. During this period, USPS will electronically provide Capital One with information about its undeliverable First-Class Mail solicitations instead of physically returning the mail to Capital One. USPS has stated that this change will result in USPS cost savings, estimating that it will avoid returning approximately 80 million mail pieces per year to Capital One during the term of the NSA. In addition, under the NSA, Capital One has agreed to practices intended to produce accurate address lists, which relate to minimizing the quantity of undeliverable and forwarded mail that USPS must handle. Another provision of the NSA specifies that the total amount of the discounts is limited to $40.6 million over the NSA’s 3-year term. This limit is intended to reduce the risk that the NSA discounts could reduce USPS revenues more than the costs that USPS avoids as a result of the NSA.
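The threshold, tiered discounts, and cumulative cap described above amount to a simple pricing rule. The sketch below is illustrative only: the tier boundaries and example volumes are hypothetical assumptions (the report specifies only the 1.225 billion-piece annual threshold, the 3-to-6-cent discount range, and the $40.6 million cap), and it is not the actual terms schedule of the agreement.

```python
# Illustrative sketch of the NSA mechanics described above.
# Tier boundaries and example volumes are hypothetical assumptions; only the
# threshold, the 3-6 cent range, and the $40.6 million cap come from the report.

ANNUAL_THRESHOLD = 1_225_000_000      # pieces of bulk First-Class Mail per year
CUMULATIVE_CAP_DOLLARS = 40_600_000   # total discounts allowed over the 3-year term

# Hypothetical tiers: (pieces above the threshold, discount in cents per piece)
HYPOTHETICAL_TIERS = [
    (25_000_000, 3.0),    # first 25 million pieces above the threshold
    (25_000_000, 4.5),    # next 25 million pieces
    (float("inf"), 6.0),  # remaining pieces
]

def annual_discount_dollars(annual_volume):
    """Compute a year's discount on volume above the threshold under the
    hypothetical tier schedule."""
    remaining = max(annual_volume - ANNUAL_THRESHOLD, 0)
    total_cents = 0.0
    for tier_size, cents_per_piece in HYPOTHETICAL_TIERS:
        pieces = min(remaining, tier_size)
        total_cents += pieces * cents_per_piece
        remaining -= pieces
        if remaining <= 0:
            break
    return total_cents / 100.0

# Apply the cumulative cap across the 3-year term (annual volumes assumed).
cap_left = CUMULATIVE_CAP_DOLLARS
for year, volume in enumerate([1_300_000_000, 1_350_000_000, 1_400_000_000], start=1):
    discount = min(annual_discount_dollars(volume), cap_left)
    cap_left -= discount
    print(f"Year {year}: discount ${discount:,.0f}; cap remaining ${cap_left:,.0f}")
```

The cap operates as a stop-loss: once cumulative discounts reach $40.6 million, no further discounts accrue, which limits the risk that forgone revenue exceeds the costs USPS avoids under the agreement.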
We received written comments on a draft of this report from the Chairman of the Postal Rate Commission, dated July 15, 2003, and from the Chief Marketing Officer and Senior Vice President of the Postal Service, dated July 16, 2003. The USPS and PRC comments are summarized below and reprinted in appendixes III and IV, respectively. In addition, PRC and USPS officials provided technical and clarifying comments, which were incorporated where appropriate.

The Chairman commented that worksharing rates “have provided major impetus for improved productive efficiency in postal services and stimulated the mail volume growth that has had the effect of moderating rate increases for all mail classes and services.” He stated that the approach used to develop worksharing rates means that, to the extent practicable, the rates paid by mailers who do not participate in worksharing do not have to increase because worksharing discounts are approved. He also commented that our draft report accurately describes the types of worksharing rates currently available to mailers and fairly characterizes the major policy reasons justifying current workshare programs.

The Chief Marketing Officer and Senior Vice President commented that USPS believes that, overall, worksharing benefits USPS, the mailers, and the entire economy. She stated that worksharing enhances efficient postal operations and stimulates mail growth and revenue for USPS; reduces overall mailer costs and has encouraged development of the presort and direct mail industries; and “benefits the entire economy, because reduced mailing costs increase productivity and efficiency.” She also commented that our draft report appeared to define the scope of worksharing in a slightly different manner than USPS does, and that, within this definition, we seemed to encompass all of the rates within all of the bulk mail classes as well as some NSAs and niche classifications. We clarified our report to address USPS’s comments.

We are sending copies of this report to the Chairmen of the House Committee on Government Reform and its Subcommittee on Civil Service and Agency Reorganization; the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs; the Chairman of its Subcommittee on Financial Management, the Budget, and International Security; the Postmaster General; the Chairman of the Postal Rate Commission; and other interested parties. Copies will be made available to others on request. In addition, this report will also be available on our Web site at http://www.gao.gov. Key contributors to this report are listed in appendix V. If you or your staffs have any questions about this letter or the appendixes, please contact me at (202) 512-2834 or at [email protected].

Our objectives for this report were to provide summary information on the following questions: (1) What are the key activities included in postal worksharing? (2) What is the rationale for worksharing, according to the U.S. Postal Service (USPS) and the Postal Rate Commission (PRC), the independent federal establishment that reviews USPS proposals for changes in domestic postage rates? and (3) What is the legal basis for establishing worksharing rates?
To address these objectives, we reviewed documentation of federal laws and regulations pertinent to worksharing, including USPS and PRC regulations; USPS requirements for mailer worksharing activities, such as USPS publications describing these requirements; and documentation of worksharing matters addressed in rate cases, based on publicly available information filed in postal rate cases. The information documented, among other things, USPS proposals for new worksharing rates, PRC recommended decisions, and USPS responses to these recommended decisions. We reviewed USPS data on workshared mail that had been filed with PRC, such as trend data on the volumes of different types of mail. We compiled the total volumes of workshared mail on the basis of these data. Other data covered workshared mail revenues and the contribution that workshared mail has made to help cover USPS’s institutional costs. We compiled data on the estimated USPS savings from mailer worksharing activities for automation-compatible letters sent via First-Class Mail, using the same methodology used by PRC in the 2000 rate case—the most recent rate case in which PRC recommended a methodology for making such projections. We did not independently assess or verify any of the data to determine their accuracy, nor did we assess or evaluate differences between PRC and USPS costing methodologies.

To obtain a better understanding of how USPS processes workshared mail, we visited USPS mail processing facilities in Orlando and Tampa, Florida, and Baltimore and Gaithersburg, Maryland, and interviewed USPS officials at those facilities. These facilities were judgmentally selected on the basis of their characteristics and their geographic proximity to our headquarters and to mailer facilities that we also visited. In addition, to obtain a better understanding of how mailers prepare workshared mail, we visited mailer facilities in Apopka and Tampa, Florida, where workshared mail is prepared. Specifically, we visited the Apopka facility of Sprint, where it prepares bills and statements; the Tampa facility of Regulus, where it receives remittance mail on behalf of other companies; and the Tampa facility of AOL/Time Warner, where bills and statements are prepared. We observed how these facilities prepare workshared mail and interviewed representatives of these companies. These companies and facilities were judgmentally selected based on their preparation and receipt of different types of workshared mail sent via First-Class Mail, invitations from their officials for GAO to make site visits, and their geographic proximity to each other. First-Class Mail represents a major portion of workshared mail.

We also reviewed documentation of the rationale for worksharing, including rate case materials; published papers and analyses; testimony in 2003 to the President’s Commission on the United States Postal Service; and material provided to us by representatives of USPS, PRC, mailer groups, and the American Postal Workers Union (APWU).
We interviewed representatives of groups that filed material on worksharing issues in the most recent rate case that resulted in increases in most worksharing rates, including representatives of USPS, PRC, PRC’s Office of the Consumer Advocate, the American Bankers Association, APWU, the Association of Postal Commerce, the Direct Marketing Association, the Mail Order Association of America, the Mailing and Fulfillment Services Association, the Major Mailers Association, the National Association of Presort Mailers, and the Saturation Mail Coalition. Some of these groups provided us with analyses and other material pertaining to worksharing rates and issues. In addition, we reviewed published books, articles, and other communications written by these groups and other postal experts on worksharing rates and issues. We did not assess the benefits that USPS and PRC claimed are derived from worksharing. We also did not assess any of the documentation provided by stakeholders or any of the statements made by stakeholders that we interviewed. To obtain information on the legal basis for worksharing, we reviewed pertinent laws, decisions in postal rate cases interpreting legal criteria for worksharing rates, and pertinent USPS and PRC regulations. We conducted our review from June 2002 through June 2003 in Apopka, Tampa, and Orlando, Florida; Baltimore and Gaithersburg, Maryland; and Washington, D.C., in accordance with generally accepted government auditing standards.

Types of USPS worksharing-related savings include the following:

1. Automated barcoding: If letters were not workshared, they would not be barcoded and USPS automated equipment would try to read the address and apply a barcode. When this is unsuccessful, an electronic image of the letter would be sent to a facility where a USPS clerk would try to read the address and key in data so that a barcode could be applied.

2. Manual operations: If the letters were not workshared, USPS would engage in more costly manual processing of the mail. For example, the letters would be sorted manually when USPS automated equipment could not apply a barcode or sort the mail. Worksharing requirements attempt to minimize such problems by limiting the dimensions and thickness of letters, specifying requirements for updating of addresses, and mandating how the address and barcode are printed, among other things.

3. Automated operations: If the letters were not workshared, they would not be presorted and thus would require more sorting by USPS automated equipment. Also, very large workshared mailings are organized on pallets with each pallet containing mail sent to only one area. In such cases, the mail would generally not need to be sorted by USPS automated equipment at the originating mail processing facility to organize it according to the area where it is to be delivered. Instead, the mail can be handled on the loading dock and dispatched to the area where it is to be delivered.

4. Allied operations: If the letters were not workshared, they would generate more USPS “allied labor” costs to prepare mail for processing or dispatch, either on the loading dock or inside the mail processing facility. As in the above example of very large workshared mailings organized on pallets sent to only one area, each pallet can be handled on the loading dock and dispatched to the destinating facility without having allied labor separately handle each mail tray stacked on the pallet.
5. Other: If the letters were not workshared, USPS would incur more costs to distribute them to post office boxes and prepare them for delivery, among other things.

Kenneth E. John, Charles W. Bausell, Jr., Alan N. Belkin, Frederick T. Evans, Eric Fielding, Latesha A. Love, Mark F. Ramage, Jill P. Sayre, and Walter K. Vance made key contributions to this report.
The U.S. Postal Service (USPS) faces major financial, operational, and human capital challenges that call for a transformation if USPS is to remain viable in the 21st century. Given these challenges, the President established a commission to examine the state of USPS and submit a report by July 31, 2003, with a proposed vision for USPS and recommendations to ensure the viability of postal services. The presidential commission has addressed worksharing (activities that mailers perform to obtain lower postage rates) in the course of its work. About three-quarters of domestic mail volume is workshared. Worksharing is fundamental to USPS operations, but is not well understood by a general audience. To help Congress and others better understand worksharing, GAO was asked to provide information on the key activities and the rationale for worksharing and the legal basis for worksharing rates. GAO discusses USPS's and the Postal Rate Commission's rationale for worksharing but did not assess the benefits that they claimed for worksharing. GAO will issue a second report later this year on worksharing issues raised by stakeholders. In commenting on this report, USPS and the Postal Rate Commission reemphasized the benefits of worksharing. Postal worksharing activities generally involve mailers preparing, sorting, or transporting mail to qualify for reduced postage rates, that is, worksharing rates. These rates are based on what are referred to as worksharing discounts because the rates are reduced based on the costs that USPS is estimated to avoid as a result of mailer worksharing activities. Key activities include (1) barcoding and preparing mail to be sorted by USPS automated equipment, which reduces manual sorting; (2) presorting mail by ZIP Code or specific delivery location, which reduces USPS sorting; and (3) entering mail at a USPS facility that generally is closer to the final destination of the mail. Worksharing also requires mailers to perform numerous other activities, such as updating addresses to improve their accuracy. According to USPS and the Postal Rate Commission, the rationale for worksharing is that it benefits USPS, mailers and the mailing industry, and the nation. They said worksharing benefits (1) USPS by enabling it to improve its operations and thereby help minimize its workforce and infrastructure, and by stimulating mail volume growth that generates revenues to cover rising costs; (2) mailers by reducing mail-related costs and improving delivery service, and the mailing industry that performs worksharing activities; and (3) the nation, in part by lowering business costs, and in part by the associated benefits that consumers can realize. While stakeholders generally support the concept of worksharing, they have raised differing concerns in this area. For example, the American Postal Workers Union has asserted that worksharing discounts are too large, but some mailers and members of the mailing industry have asserted that the worksharing discounts are not large enough. The primary legal basis for worksharing rates is the requirement in law that, when recommending postage rates, the Postal Rate Commission consider mail preparation and its effect upon reducing USPS costs. Postal rate cases have established precedents clarifying the basis for worksharing rates.
Financial literacy, which is sometimes also referred to as financial capability, has been defined as the ability to use knowledge and skills to manage financial resources effectively for a lifetime of well-being. Financial literacy encompasses financial education—the process whereby individuals improve their knowledge and understanding of financial products, services, and concepts. However, to make sound financial decisions, individuals need to be equipped not only with a basic level of financial knowledge, but also with the skills to apply that knowledge to financial decision making and behaviors.

The federal government plays a wide-ranging role in promoting financial literacy, and the multiagency Financial Literacy and Education Commission, which was created in 2003 by the Fair and Accurate Credit Transactions Act of 2003, was charged with, among other things, developing a national strategy to promote financial literacy and education, coordinating federal efforts, and identifying—and proposing means of eliminating—areas of overlap and duplication. The commission currently comprises 21 federal entities; its Chair is the Secretary of the Treasury and its Vice Chair, as established in the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), is the Director of CFPB. A wide variety of other organizations also provide financial literacy resources, including nonprofit community-based organizations; consumer advocacy organizations; financial services companies; trade associations; employers; and local, state, and federal government entities.

Some financial literacy initiatives are aimed at the general population, while others target certain audiences, such as low-income individuals, military personnel, high school students, seniors, or homeowners. Similarly, some financial literacy initiatives cover a broad array of concepts and financial topics, while others target specific topics, such as managing credit, investing, purchasing a home, saving for retirement, or avoiding fraudulent or abusive practices. Efforts to improve financial literacy can take many forms. These can include one-on-one counseling; curricula taught in a classroom setting; workshops or information sessions; print materials, such as brochures and pamphlets; and mass media campaigns that can include advertisements in magazines and newspapers or on television, radio, or billboards. Many entities use the Internet to provide financial education, which can include information and training materials, practical tools such as budget worksheets and loan and retirement calculators, and interactive financial games. Youth-focused financial education programs are generally tied to a school curriculum.

In fiscal year 2010, the federal government spent about $68 million on 15 of its 16 significant financial literacy programs and about $137 million on 4 programs providing housing counseling, which can include elements of financial education. We identified 16 significant financial literacy programs or activities within the federal government in fiscal year 2010. As seen in table 1, the estimated cost for 15 of these programs and activities was $68 million. This figure does not include estimated costs for CFPB, which was created during fiscal year 2010, or costs related to housing counseling, which is discussed separately. Two of these federal financial literacy programs or activities were funded through a congressional appropriation for the specified program.
The Excellence in Economic Education Program was appropriated about $1.45 million in fiscal year 2010, and the Department of Education obligated almost all of the amount to fund a single 5-year grant to a national nonprofit education organization. The National Education and Resource Center on Women and Retirement Planning was appropriated $249,000 for fiscal year 2010, and the Department of Health and Human Services obligated about $246,000 of that amount that year. For most of the other programs, financial literacy activities were typically not organized as separate budget line items or cost centers within federal agencies. Instead, financial literacy activities were often one element of a broader effort that itself may or may not have had discrete funding. In these cases, we asked agency staff to estimate the portion of program costs that could be attributed to financial literacy activities. This typically entailed estimating the cost for the portion of staff time devoted to financial literacy, and sometimes also included the cost of contracts, printing, or other resources related to financial literacy activities. Because the methods for estimating costs varied, these costs may not be fully comparable across agencies.

We did not collect comprehensive information for costs subsequent to fiscal year 2010, but spending on many financial literacy programs has been in flux since that time. For example, the Social Security Administration’s Financial Literacy Research Consortium and the Financial Education for College Access and Success Program did not receive new funding in fiscal years 2011 or 2012, and the Excellence in Economic Education Program did not receive funding for fiscal year 2012. In addition, at least two agencies—the Department of the Treasury (Treasury) and the Board of Governors of the Federal Reserve System—told us that their staffing resources devoted to financial literacy have declined since 2010. We did not identify any new federal financial literacy programs created since fiscal year 2010 other than CFPB, which was being formed as an agency that year.

As shown in table 2, two federal agencies and a federally chartered nonprofit corporation spent about $136.6 million on housing counseling efforts during fiscal year 2010. We have separated out the costs of housing counseling from other financial literacy activities because financial education typically is only a limited aspect of most housing counseling, which often largely consists of one-on-one assistance to address individual situations. As seen above, the Department of Housing and Urban Development (HUD) obligated about $65.4 million during fiscal year 2010 for its Housing Counseling Assistance Program, which it used for certifying and overseeing housing counseling providers, training housing counselors, and providing counseling agencies with competitive grants. HUD also has 15 other active programs that have some housing counseling component or allow some portion of their funding to be used for housing counseling. In addition, NeighborWorks America, a federally chartered nonprofit corporation, was appropriated $65 million for the National Foreclosure Mitigation Counseling Program during fiscal year 2010, of which it expended $59.4 million in grants for counseling, $3 million for training counselors, and $2.6 million for administrative purposes, according to agency staff.
The organization also estimated that it spent about $2 million on other housing counseling activities—primarily prepurchase counseling—from funds it received through a separate congressional appropriation. Treasury’s Financial Education and Counseling Pilot Program, created by the Housing and Economic Recovery Act of 2008, provided $4.15 million in grants during fiscal year 2010 to provide counseling to prospective homebuyers. These grants included an award to an eligible organization in Hawaii, and Treasury also selected three additional organizations to receive grants. In general, funding for housing counseling has varied in recent years. For example, HUD received no appropriated funds for its Housing Counseling Assistance Program in fiscal year 2011, but funding was restored to $45 million in fiscal year 2012. The agency has requested $55 million for the program in its fiscal year 2013 budget request, which it said would help support the Office of Housing Counseling, a new office created by the Dodd-Frank Act. The Financial Education and Counseling Pilot Program was appropriated no funds in fiscal years 2011 and 2012.

Federal financial literacy efforts are carried out by multiple federal programs and agencies. As shown in table 3, in fiscal year 2010 there were 16 significant federal financial literacy programs or activities among 14 federal agencies, as well as 4 housing counseling programs among 2 federal agencies and a federally chartered nonprofit corporation. These programs and activities covered a wide range of topics and target audiences and used a variety of delivery mechanisms. In prior work, we cited a 2009 report that had identified 56 federal financial literacy programs among 20 agencies. That report, issued by the RAND Corporation, was based on a survey conducted by Treasury and the Department of Education that had asked federal agencies to self-identify their financial literacy efforts. However, our subsequent analysis of these 56 programs found a high degree of inconsistency in how different agencies defined financial literacy programs or efforts and whether they counted related efforts as one or multiple programs. We believe that our count of 16 significant federal financial literacy programs or activities and 4 housing counseling programs is based on a more consistent set of criteria. (See app. II for a crosswalk between the 56 programs cited in a previous report and the 20 programs highlighted in this report.)

We defined “significant” financial literacy programs or activities as those that were relatively comprehensive in scope or scale—that is, financial literacy was a key element rather than a minimal component or tangential goal. We did not include programs or activities that (1) provided financial information related to the administration of the program itself—such as information on applying for student financial aid or evaluating Medicare choices—rather than information aimed at increasing the beneficiaries’ financial literacy and comprehension more generally; (2) were purely internal to the agency, such as information provided to agency employees on their employment and retirement benefits; or (3) represented individualized services or advice, such as assistance with tax preparation. Apart from the programs cited in the tables above, some additional federal agencies address financial literacy on a smaller scale.
For example, the website of the Federal Deposit Insurance Corporation (FDIC) includes such things as tips on banking and protecting your money, and information on foreclosure prevention, identity theft, and deposit insurance. In addition, the website of the Commodity Futures Trading Commission provides information on fraud awareness and prevention related to trading futures and options.

Fragmentation of financial literacy programs has evolved over a number of years, as a result both of statutory requirements and of efforts undertaken at the initiative of federal agencies in addressing their missions. Congress directed the creation of some programs and initiatives, examples of which include the following:

The Office of Personnel Management’s Retirement Readiness NOW program and the development of a retirement financial literacy strategy for federal employees were required by the Thrift Savings Plan Open Elections Act of 2004.

The Financial Education and Counseling Pilot Program was created by the Housing and Economic Recovery Act of 2008 (Pub. L. No. 110-289, § 1132, 122 Stat. 2654, 2727 (2008), 12 U.S.C. § 1701x note).

The Financial Education for College Access and Success Program was authorized under the Fund for the Improvement of Education Program of the Elementary and Secondary Education Act of 1965.

The National Foreclosure Mitigation Counseling Program was authorized through the Consolidated Appropriations Act, 2008, which sought to address the mortgage foreclosure crisis by providing homeowner counseling and strengthening the nation’s counseling capacity.

CFPB was created by the Dodd-Frank Act, which specified the creation of the bureau’s Office of Financial Education and its role in promoting financial literacy.

Other financial literacy programs were initiated by agencies as part of their mission. For example, in line with the Securities and Exchange Commission’s (SEC) mission as the primary overseer and regulator of the U.S. securities markets, the agency created the Office of Investor Education and Advocacy, which gives investors information to evaluate current and potential investments, make informed decisions, and avoid fraudulent schemes. Similarly, the Federal Trade Commission’s (FTC) financial literacy efforts have stemmed from its responsibilities for enforcing laws and regulations against unfair or deceptive acts or practices and protecting consumers in the marketplace.

Having multiple federal agencies involved in financial literacy efforts can have certain advantages. Some agencies have deep and long-standing expertise and experience addressing a specific issue area. For example, HUD has long been a repository for information on housing issues, SEC on investment issues, and the Department of Labor and Social Security Administration on retirement issues. Some agencies also have deep knowledge of and ties to particular populations and may be the most efficient and natural conduit for providing them with information and services, as with the Department of Defense’s (DOD) role in providing financial information and counseling to servicemembers and their families. In addition, providing information from multiple sources or in multiple formats can increase consumer access and the likelihood of educating more people. We have previously reported that different populations respond to different types of delivery mechanisms, such as one-on-one credit counseling, employer-provided retirement seminars, and classroom-based education.
At the same time, fragmentation increases the risk of inefficiency and duplication of efforts. Our detailed review of financial literacy efforts across the federal government has uncovered no duplication—that is, cases where two or more agencies or programs were engaging in the same efforts and providing the same services to the same beneficiaries. In our analysis of the 20 significant financial literacy and housing counseling programs, we found that programs and efforts had differing focuses in terms of subject matter, target audience, or delivery method. This finding is largely consistent with prior reviews of the federal government’s financial literacy efforts. In 2006, the Financial Literacy and Education Commission reported that it had studied federal financial literacy programs or resources and said it found minimal overlap and duplication among programs, noting that even when different agencies’ programs sometimes appeared similar, closer inspection revealed important differences in things like the target audience, delivery platform, or specific content. In response to a recommendation we made that the commission engage an independent third party to assess these issues, two subsequent studies were conducted. The first study, contracted by Treasury to assess federal programs, reported little evidence of duplication of programs or resources based on comparisons of the intended program goals and targeted audiences of the assessed programs and major resources. The second study resulted in the previously discussed 2009 report by the RAND Corporation, which sought to create a comprehensive catalog of existing federal financial literacy programs. It did not identify clear duplication, but it did note that multiple areas of overlap in subject matter and target audiences warranted more thorough investigation.

Our review did identify cases of overlap—that is, multiple agencies or programs with similar goals and activities. For example, as shown earlier, in fiscal year 2010 there were four discrete housing counseling programs or activities, which were administered by HUD, NeighborWorks America, and Treasury. HUD’s Housing Counseling Assistance Program funded a wide range of housing counseling, including prepurchase and postpurchase counseling and counseling related to foreclosure mitigation and prevention of predatory lending, as well as counseling services for renters and homeless populations. Treasury’s Financial Education and Counseling Pilot Program had goals similar to HUD’s program, although it focused solely on prepurchase counseling and was intended, in part, to establish innovative program models for organizations to carry out effective counseling services. NeighborWorks also provided some prepurchase counseling and administered the foreclosure mitigation counseling program designed to help homeowners work with lenders to cure delinquencies. HUD and NeighborWorks meet regularly and closely coordinate activities to be complementary, according to HUD staff.

Similarly, five different financial literacy programs were directed at youth or young adults in fiscal year 2010. Three of these programs—Money Smart for Young Adults, Money Math, and the National Financial Capability Challenge—delivered information on similar topics, such as saving, budgeting, and borrowing, largely via instructor-led lesson plans.
The Excellence in Economic Education Program and the Financial Education for College Access and Success Program both supported the development of personal finance instructional materials and teacher training on personal finance. In addition, FTC addresses youth financial literacy through an interactive website where youth can play games, design advertisements, and learn about activities related to target marketing, supply and demand, privacy protection, and bogus offers. The Board of Governors of the Federal Reserve System also offers interactive games and classroom activities on its website for youth and young adults. Treasury staff told us that while all of these programs serve youth or young adults, there are significant variations among them in approach and in content. The staff noted, for example, that the goal of the National Financial Capability Challenge is to encourage the teaching of financial topics, rather than to provide content, and that the curricula of the Money Math and Money Smart for Young Adults programs differ substantially from each other.

Another example of overlap can be found in two federal financial literacy programs designed specifically for adult women. The Department of Labor’s Wi$eUp program targeted Generation X and Y women—women generally born between the mid-1960s and the mid-1990s—and the Department of Health and Human Services’ National Education and Resource Center on Women and Retirement Planning targeted traditionally hard-to-reach women, such as low-income women, women of color, and women with limited English proficiency. Both programs cover some of the same topic areas, such as retirement planning, investing, and money basics such as budgeting, saving, and banking. However, staff at the Department of Labor and the Department of Health and Human Services noted that the programs target different users, have different goals, and engage in different activities—for example, Wi$eUp is an online and classroom curriculum, while the National Resource Center uses peer counselors and offers information through model programs, workshops tailored to meet special needs, and print and web-based publications.

Additional overlap is evident with the activities of CFPB, which was created by the Dodd-Frank Act and became a standing organization in July 2011. The act established within CFPB an Office of Financial Education and charged it with developing and implementing initiatives intended to educate and empower consumers to make better informed financial decisions. Specifically, the office was directed to provide opportunities for consumers to access, among other things, financial counseling; information to assist consumers with understanding credit products, histories, and scores; information about savings and borrowing tools; and assistance in developing long-term savings strategies and wealth building. The duties of this office are in some ways similar to those of Treasury’s Office of Financial Access, Financial Education, and Consumer Protection, a small office that also seeks to broadly improve Americans’ financial literacy. Treasury established this office in 2002 and tasked it with developing and implementing financial education policy initiatives and overseeing and coordinating Treasury’s outreach efforts.
Further, the Dodd-Frank Act charged CFPB with developing and implementing a strategy on improving the financial literacy of consumers, even though the Financial Literacy and Education Commission already has its own statutory mandate to develop, and update as necessary, a national strategy for financial literacy. CFPB staff told us that its own national strategy for financial literacy will serve as an operating plan that is distinct from, but broadly aligned with, the commission’s national strategy. Staff involved in financial literacy from Treasury and CFPB told us that they meet regularly and that the two agencies are working closely together to ensure collaboration and avoid duplication. CFPB also has other offices that are charged with financial literacy duties that are in some ways similar to those of other federal agencies. For example, the Dodd-Frank Act created within CFPB an Office of Servicemember Affairs, which is responsible for, among other things, developing and implementing initiatives intended to educate servicemembers and their families and empower them to make better informed decisions regarding consumer financial products and services, monitoring complaints, and coordinating efforts among federal and state agencies regarding consumer protection measures. These activities potentially overlap with those of DOD, whose Personal Financial Managers on military installations provide financial educational programs, partnerships, counseling, legal protections, and other resources designed to help servicemembers and their families. Staff of CFPB’s Office of Servicemember Affairs told us that the office has been actively reaching out to servicemembers where they live in order to assess their needs, and between January 2011 and May 2012, the office held 84 events attended by more than 24,000 people and visited 37 military installations and National Guard units. Staff also told us that they have taken several steps to avoid duplicating DOD’s Financial Readiness Program. For example, they said they will be focusing on reaching servicemembers in the Delayed Entry Program, a period prior to boot camp during which DOD does not yet engage in financial education. In addition, CFPB staff said they have been meeting monthly with DOD’s Deputy Assistant Secretary of Defense for Military Community and Family Policy and his staff to coordinate their activities to avoid duplication across agencies. CFPB and DOD have also developed two Joint Statements of Principles, one on how they are going to handle complaints and the other on educational efforts and small-dollar lending. In addition, CFPB and several other agencies provide financial literacy services that target older Americans. The Dodd-Frank Act created the Office of Financial Protection for Older Americans within CFPB and charged the office to develop goals for programs that provide financial literacy and counseling to help seniors recognize the warning signs of unfair, deceptive, or abusive practices, and protect themselves from such practices. These activities potentially overlap with those of FTC, which also plays a role in helping seniors avoid unfair and deceptive practices. For example, FTC has provided information to seniors on a range of topics, such as obtaining credit over the age of 62, avoiding charity fraud, recognizing and reporting telemarketing fraud, and avoiding scammers who may pose as friends, family, or government agencies. 
In an effort to work together and avoid duplication, CFPB and FTC finalized a memorandum of understanding in January 2012 to help them, among other things, cooperate on consumer education efforts, promote consistency of messages, and maximize the use of educational resources. The Dodd-Frank Act also charged the Office of Financial Protection for Older Americans with developing goals for programs that provide one-on-one financial counseling on long-term savings and later-life economic security. As discussed earlier, the Department of Labor’s Saving Matters Retirement Savings Education Campaign also plays a role in educating consumers on retirement issues, and the Social Security Administration had a special initiative, as part of an earlier strategic plan, to encourage saving and to inform the public about its programs. In January 2012, staff at CFPB told us that the Office of Financial Protection for Older Americans had only recently become fully staffed and that the office had begun working with other federal and state agencies to identify best practices for educating and counseling senior citizens, identifying unfair and deceptive practices targeting this population, and advocating on their behalf.

Other potential areas of overlap include CFPB’s Office of Fair Lending and Equal Opportunity, which plays a role in providing education on fair lending, as do the Office of the Comptroller of the Currency, FDIC, and FTC. CFPB has also created an Office for Students to work with complaints and questions regarding student loans. However, the Department of Education already has a number of web-based tools in place to help students understand financial aid and student loans. CFPB staff told us that one key distinction is that CFPB addresses private student loans, while the Department of Education addresses federally supported student loans. They also noted that they are coordinating with the Department of Education, have developed a memorandum of understanding with the department, and have jointly designed a standard student loan award letter and fact sheet.

The Dodd-Frank Act gave CFPB a primary role in addressing financial literacy, and the agency’s Office of Financial Education—staffed at 10 full-time equivalents as of June 2012—has significant financial literacy resources relative to many other agencies. Yet there are similarities in mission between CFPB’s statutory responsibilities and those of certain other federal entities. As we have noted in the past, federal programs contributing to the same or similar results should collaborate to help ensure that goals are consistent and, as appropriate, program efforts are mutually reinforcing. See GAO, Managing for Results: Using the Results Act to Address Mission Fragmentation and Program Overlap, GAO/AIMD-97-146 (Washington, D.C.: Aug. 29, 1997). Collaborating agencies should work together to define and agree on their respective roles and responsibilities and, in doing so, clarify who will do what and organize their joint and individual efforts. As noted above, during its initial development, CFPB has been meeting with other federal entities to coordinate their efforts. Ensuring clear delineation of the respective roles and responsibilities between CFPB and agencies with overlapping financial literacy responsibilities is essential to help ensure efficient use of resources.
that Congress consider options for such consolidation, the commission is better positioned to do so and this would be consistent with its statutory responsibility to address overlap and duplication.

In addition to CFPB’s efforts cited above, there has been a significant amount of coordination among other federal agencies with regard to their financial literacy efforts, as well as evidence of collaboration among federal agencies and state, local, nonprofit, and private entities. The Financial Literacy and Education Commission has played a key role in fostering this coordination and collaboration. However, its national strategy does not include a discussion of the appropriate allocation of federal resources. Federal agencies involved in addressing financial literacy have a variety of mechanisms for coordinating their efforts, examples of which include the following:

The Government Interagency Group is a working group of program-level staff from federal agencies that address financial literacy. The group meets three times a year to share ideas and best practices. The group is organized by the American Savings Education Council, a nonprofit organization and national coalition of public and private sector institutions focused on savings and retirement planning.

The Department of Education, FDIC, and the National Credit Union Administration signed an agreement in November 2010 designed to encourage partnerships between schools, financial institutions, federal grantees, and other stakeholders to educate students about saving, budgeting, and making wise financial decisions.

SEC has partnered with the Department of Labor to develop guidance to help individuals understand the operations and risks of target-date fund investments, which are often mutual funds that change automatically to become more conservative as the fund’s target date approaches. SEC has also worked with the Internal Revenue Service to include an insert about SEC’s investor education resources, including its Investor.gov education website, in the mailing of tax refund checks.

As part of its Retirement Financial Literacy and Education Strategy for federal employees, the Office of Personnel Management has efforts under way to provide training and tools to the benefits officers of individual federal agencies, and to identify existing resources that federal agencies might use for the financial education of their employees.

The Department of Labor and the Social Security Administration have worked together with AARP—a nonprofit organization focused on people age 50 and over—to host workshops for workers nearing retirement.

In addition to interagency coordination, the federal government has certain mechanisms in place to coordinate or partner with nonfederal entities, including states and localities and nonprofit and private entities. In January 2010, the President’s Advisory Council on Financial Capability was created by executive order. The council was tasked with a number of specific charges, including advising the President and the Secretary of the Treasury on financial education efforts, promoting financial products and services that are beneficial to consumers (especially low- and moderate-income consumers), and promoting understanding of effective use of such products and services. In its January 2012 interim report, the council recommended that Treasury support a newly created private-sector award program recognizing employers that provide outstanding financial education to their employees.
The council meets regularly and has established subcommittees to address issues related to research and evaluation, partnerships between the public and private sectors, expanding financial access to low- and moderate-income households, and youth. To facilitate and advance financial literacy at the state and local levels, the Financial Literacy and Education Commission created the National Financial Education Network for State and Local Governments in April 2007. Network members include state and local agencies and national organizations that share information through activities including periodic conference calls and a web-based database of financial literacy projects and programs. Some federal agencies also partner with nonprofit and private organizations to expand outreach. Many federal agencies are members of Jump$tart Coalition for Personal Financial Literacy, a nonprofit partnership that focuses on financial literacy for young adults. Treasury partnered with Jump$tart, the University of Missouri-St. Louis and Citigroup to develop Money Math: Lessons for Life, a financial literacy curriculum supplement for educators. FDIC has signed collaboration agreements or reached informal agreements with more than 1,200 active “alliance members” that promote or enhance the implementation of its Money Smart curriculum. Alliance members include financial institutions, schools or other educational service providers, military installations, community-based organizations, faith-based groups, employment and training service providers, government agencies, and other organizations. Likewise, the Board of Governors of the Federal Reserve System participates in Bank On USA programs, which are locally led coalitions of government agencies, financial institutions, and community organizations that focus on financial education and access for individuals and families who do not use mainstream financial institutions. Federal agencies also collaborate with nonfederal entities with regard to financial literacy through the process of administering grants. For example, HUD provides training, guidance, and technical assistance to a network of community-based counseling agencies that it funds through its Housing Counseling Assistance Program. HUD also works with NeighborWorks, which is partially funded through HUD, in implementing the National Foreclosure Mitigation Counseling Program. Additionally, the Department of Agriculture collaborates with land-grant universities on financial literacy projects through grants provided by its National Institute of Food and Agriculture. Also, DOD has collaborated with land-grant universities to offer programs and classes for military families and veterans. Federal agencies have also collaborated with academic researchers and organizations on financial literacy research and product development. For example, in October 2008, Treasury and the Department of Agriculture convened a National Research Symposium on Financial Literacy and Education that sought to identify gaps in existing research and develop research priorities. Twenty-nine experts in the fields of behavioral and consumer economics, financial risk assessment, and financial education evaluation joined to summarize existing financial research findings, identify gaps in the literature, and define and prioritize questions for future analysis. 
In addition, through the Financial Literacy Research Consortium funded in fiscal years 2009 and 2010, the Social Security Administration worked with Boston College, RAND Corporation, and the University of Wisconsin to develop financial literacy educational tools and programs focusing on retirement savings and planning. In general, we found that coordination and collaboration among federal agencies with regard to financial literacy has improved in recent years, in large part due to the efforts of the Financial Literacy and Education Commission. As noted earlier, the commission is currently comprised of 21 federal entities and was charged with, among other things, coordinating federal financial literacy efforts and promoting partnerships among federal, state, and local governments; nonprofit organizations; and private enterprises. Before the formation of the commission, agencies had no formal mechanism within the federal government through which to coordinate on financial literacy activities. In a 2006 report, we noted that the commission enhanced communication and collaboration among agencies involved in financial literacy by creating a single focal point for federal agencies to come together on the issue of financial literacy. The commission also developed a national strategy that included calls to action on interagency efforts. Additional activities undertaken by the commission to foster coordination or collaboration include the following: Meetings and working groups. The commission holds formal meetings three times per year and, at the staff level, has several working groups, each represented by several federal agencies, including teams devoted to implementing the national strategy, promoting research and evaluation, and improving financial access. MyMoney.gov website. The commission was charged by statute with developing a financial education website that provides a coordinated point of entry for information about federal financial literacy programs and grants. The commission launched the MyMoney.gov website in October 2004. Clearinghouse of research and resources. The commission is in the process of developing a clearinghouse of federal research and resources on financial literacy. This clearinghouse will aggregate financial literacy research and information across federal agencies in one public website. Reviews of federal activities. As discussed earlier, the commission and Treasury contracted for two reports that cataloged and reviewed financial literacy efforts across the federal government, which helped inform federal agencies of each other’s activities and foster opportunities for coordination and collaboration. In April 2006, the Financial Literacy and Education Commission issued a national strategy, which it was required by law to develop and modify as necessary, and in December 2010, it issued Promoting Financial Success in the United States: National Strategy for Financial Literacy 2011. In our 2006 report, we found that the commission’s first national strategy was a useful first step in focusing attention on financial literacy but was largely descriptive rather than strategic. We noted that the strategy only partially included certain characteristics that we consider to be desirable in any national strategy, including a description of resources required to implement the strategy. Our review of the 2011 national strategy indicates that it still does not fully address this element. 
An effective national strategy should include a discussion of resources, describing what a strategy will cost, the sources and types of resources needed, and where those resources should be targeted. The 2011 national strategy discusses the consumer education resources that the federal government makes available to consumers, and it sets building public awareness of these resources as a goal. However, the 2011 strategy still does not address the level and type of resources needed to implement the strategy, nor does it review the budgetary resources available to federal agencies for financial literacy efforts and how they might best be allocated. We have noted in the past that the governance structure of the commission presents challenges in addressing resource issues: it relies on the consensus of more than 20 federal agencies, has no independent budget, and has no legal authority to compel member agencies to take any action. However, the commission does have the ability to at least identify resource needs and make recommendations or provide guidance on how Congress or federal agencies might allocate scarce federal financial literacy resources for maximum benefit. Without a clear description of resource needs, policymakers lack information to help direct the strategy’s implementation, and without recommendations on resource allocations, policymakers lack information to help ensure the most efficient and effective use of federal funds. Additionally, addressing resource needs and allocations in the commission’s national strategy would facilitate its statutory responsibility, discussed earlier, to propose means of eliminating overlap and duplication among federal financial literacy activities.

Most federal financial literacy activities include an evaluation component, but variation in the types of activities and the methods of evaluation creates challenges in comparing results across programs. As we reported in June 2011, relatively few evidence-based evaluations of financial literacy programs have been conducted, limiting what is known about which specific methods and strategies—and which federal financial literacy activities—are most effective. Several federal agencies have efforts under way seeking to determine the most effective approaches and programs. The wide range of federal financial literacy programs and activities, and of their evaluation metrics and methods, makes it difficult to systematically assess overall effectiveness or compare results across programs. Among the 20 significant federal financial literacy and housing counseling programs that we reviewed, we found that nearly all had assessed or measured their activities in some manner and, where feasible, many had undertaken some method of seeking to measure outcomes. Some of these evaluations sought to assess the effect of the program on the actual behavior of program participants and some assessed the effect of the program on knowledge, attitudes, or anticipated behavior. As we have reported in the past, in general the ultimate goal of financial education is to favorably affect consumer behavior, such as to promote improved saving and spending habits and wise use of credit. As such, financial literacy program evaluations are most reliable and effective when they measure the programs’ impact on consumers’ behavior.
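To illustrate why behavior-focused designs with a comparison group are considered more reliable than output counts or knowledge tests alone, the following sketch shows one way an evaluator might estimate a program’s effect on a behavioral outcome. It is a minimal hypothetical example, not the design or data of any study discussed in this report; the figures and function names are assumptions for illustration.

```python
# Minimal hypothetical sketch of an outcome-based evaluation with a comparison group.
# All figures are assumed for illustration; this is not any study cited in this report.

def share(successes, n):
    """Share of a group exhibiting the target behavior (e.g., opened a bank account)."""
    return successes / n

def estimated_effect(treated_successes, treated_n, comparison_successes, comparison_n):
    """Difference in the behavior rate between program participants and a
    comparison group that did not receive the program. Without the comparison
    group, the participant rate alone could overstate the program's effect."""
    return share(treated_successes, treated_n) - share(comparison_successes, comparison_n)

# Hypothetical follow-up survey results 6 months after the program.
participant_rate = share(130, 200)   # 65 percent of participants show the behavior
comparison_rate = share(110, 200)    # 55 percent of non-participants show the behavior
effect = estimated_effect(130, 200, 110, 200)

print(f"Participants: {participant_rate:.0%}; comparison group: {comparison_rate:.0%}")
print(f"Estimated program effect: {effect:+.0%} (participants minus comparison group)")
```

In this hypothetical case, reporting only the participant rate would credit the program with the full 65 percent, whereas the comparison group suggests the program’s contribution is closer to the 10 percentage-point difference, subject to the usual caveats about selection and other confounding factors.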
While there is fairly extensive literature on financial literacy in general, relatively few evaluations of financial literacy programs have been published that use empirical evidence to measure a program’s impact on the participants’ behavior. In addition, there are many significant challenges to rigorous and definitive evaluations of financial literacy programs. Outcome-based evaluation can be expensive and methodologically challenging, particularly long-term evaluation using a controlled experimental methodology, which can be especially time and labor intensive. As well, measuring a change in participant behavior is much more difficult than measuring a gain in knowledge, which can often be captured through a simple exam at program completion. Some financial literacy programs and activities, such as those using broadcast media to disseminate information, may also simply not be well-suited to outcome-based evaluation because the program has no direct contact with the intended audience. Further, given the many variables that can affect consumer behavior and decision making, ascribing long-term changes to a particular program is difficult. In addition, some program activities, such as posting a webpage, may be too small in scope to warrant conducting an outcome evaluation study, so tracking output measures—such as the number of individuals served or the volume of materials distributed—may be the only feasible option. One academic review of financial literacy evaluations found that the majority of financial literacy programs it reviewed measured only program outputs. Among the federal financial literacy programs and activities we reviewed, we identified a number of cases in which evaluation included at least some assessment of the effect on consumer behavior of activities operated or funded by federal agencies: National Foreclosure Mitigation Counseling Program. NeighborWorks contracted with the Urban Institute for a study resulting in a series of reports, the most recent of which was published in December 2011, which evaluated program outcomes of the federally funded National Foreclosure Mitigation Counseling program. The study found that among homeowners who received loan modifications, those who received counseling under the program were more likely to avoid entering foreclosure, successfully cure existing foreclosures, or obtain favorable loan modifications than those who did not receive the counseling. U.S. Army personal financial management training. In 2009, staff at the Board of Governors of the Federal Reserve System conducted a study of a U.S. Army personal financial management training, which included a 2-day financial education course taught by college staff for young servicemembers enlisted at a Texas army base. Participants were surveyed on their financial behaviors 6 months after completing the course and compared with a control group of soldiers who did not take the course. After controlling for other factors, the study found that the financial education course did not have a significant effect on most of the soldiers’ financial behaviors, such as comparison shopping, saving, and paying bills on time. Money Smart. FDIC collaborated in an independent evaluation of the Money Smart program in 2003 that measured its effectiveness on a sample of adult program participants who did not have accounts at banks or other mainstream financial institutions. 
The study found that 80 percent of those who completed Money Smart said they intended to open a bank account, although it did not collect data on whether they actually did so. A second study conducted by FDIC in 2007 surveyed individuals prior and subsequent to their participation in the program and also followed up by telephone 6 to 12 months after their final class. It found that participants were more likely to open deposit accounts, save money in a mainstream deposit product, use and adhere to a budget, and experience greater confidence in their financial abilities. However, this study did not have the benefit of a control group—that is, it did not measure participants in the program against a comparison group that did not participate in the program. FDIC is currently evaluating the effect of Money Smart for Young Adults on the financial knowledge and behavior of young adults (ages 12 to 20). The agency said it expects the evaluation to be completed by the end of 2013. HUD housing counseling. In 2008, HUD published a report that presented a systematic overview of the housing counseling industry and HUD-approved housing counseling providers. In May 2012, two reports were published resulting from the Housing Counseling Outcome Evaluation. The first report looked at a sample of individuals who received foreclosure mitigation counseling from HUD-funded and HUD-approved agencies between August 2009 and December 2009. The findings indicated that 18 months after initiating foreclosure counseling, 56 percent of homeowners were in the home and current on their payments, 28 percent were in the home and behind on their payments, and 16 percent were out of the home. However, the study did not include a control group to compare this group of homeowners to others who had not received foreclosure counseling. The second report examined prepurchase counseling and found that 35 percent of the study participants had become homeowners 18 months after seeking prepurchase counseling; this study also did not include a control group. HUD is in the process of conducting an additional prepurchase counseling demonstration and impact evaluation that will track up to 6,000 individuals to examine the effectiveness of different housing counseling delivery methods compared to a control group of individuals not receiving counseling. Data collection is expected to begin around September 2012, and an initial report is expected by May 2014. Financial Literacy Research Consortium. In 2009 the Social Security Administration established a Financial Literacy Research Consortium that funded 63 research projects at three academic centers on a range of consumer financial behavior and retirement savings issues. According to agency staff, 11 of these projects included evaluations of the effectiveness of interventions designed to improve consumer financial literacy. For example, one study funded through the consortium at the University of Wisconsin found that a 5-hour online financial education module led to modest increases in knowledge of financial issues, but no changes in self-reported financial behaviors 3 months later. Another project has randomly assigned 600 homebuyers to varying combinations of financial planning interventions to be completed during the first year after home purchase. The project is ongoing, and evaluation of the effectiveness of the interventions will be conducted in subsequent years. U.S. Department of Agriculture Family and Consumer Economics programs. 
The Department of Agriculture encourages land-grant institutions receiving grants for financial literacy activities to conduct some form of evaluation, and some grantees have sought to evaluate program outcomes. For example, researchers at Ohio State University examined the outcomes of a youth curriculum designed to enhance money management skills. Three months after the completion of the program, more than 80 percent of students in 6th-12th grades reported improvements in the extent to which they repaid money on time, set aside money for the future, and compared prices. Financial Education for College Access and Success Program. In 2010, the U.S. Department of Education’s Financial Education for College Access and Success Program provided a grant to the Tennessee Department of Education to measure the program’s effect on student knowledge, attitudes, and behaviors, including rates of financial aid form completion, college enrollment, decisions regarding financial aid, and use of financial products and services. Agency officials said that the study will also measure the effect of the project on the knowledge, attitudes, and instructional skills of participating teachers. Results of the study were not available as of May 2012. Financial Education and Counseling Pilot Program. Treasury requires the homebuyer counseling organizations that receive program grants to periodically report on the results of two output goals (numbers served and hours of service provided) and three outcome goals chosen by the grantees (such as changes in participant savings, credit scores, or debt). As of April 2012, limited information was available about the program’s impact because grantees had provided outcome data no earlier than 2011, while some of the desired outcomes of their programs can take years to realize. Wi$eUp. As of 2010, more than 19,000 individuals had participated in Wi$eUp’s eight-module financial education curriculum. The program tracks the percentage of participants who, as a result of their participation, reduced their debt and increased their savings or investments. Individuals complete pre- and postassessments for each module and are asked to complete a 3-month follow-up assessment to determine the financial changes they have made. Sixty-nine percent of participants in programs conducted in 2009 by Texas A&M’s AgriLife Extension reported reducing their debt by a median of $500 since taking the Wi$eUp course, and 62 percent reported increasing their savings or investments. Some agencies that we reviewed, while not assessing program effect on participant behavior, have reported on the effect on participant knowledge or attitudes or have future plans for evaluating behavior: Excellence in Economic Education Program. In 2009, program subgrantees gave standardized tests to 6,040 middle and high school students and 894 teachers shortly after they had completed the economics and personal finance instructional activities of the Excellence in Economic Education Program. Fifty-eight percent of students participating in projects funded through the program scored proficient on standardized tests of economics, personal finance, or both, compared to their target goal of 70 percent. In addition, 82 percent of teachers participating in the projects showed a significant increase in their scores on a standardized measure of economic content knowledge, as compared to the target goal of 70 percent. Federal Reserve System. 
Staff of the Board of Governors of the Federal Reserve System told us that the board does not conduct assessments of its financial literacy activities. However, some regional Federal Reserve Banks—which are part of the system but are not themselves federal agencies—do assess their own financial literacy activities. For example, the Federal Reserve Bank of Atlanta, in partnership with the Federal Reserve Bank of St. Louis, used third-party experts to conduct a 2-year assessment of the effectiveness of their financial literacy programs and materials, as well as to design and test tools for measuring knowledge gains and behavior changes. DOD Family Support Centers. DOD is in the second phase of the Military Family Life Project, a longitudinal department-wide survey of 40,000 married active-duty servicemembers and 100,000 military spouses designed to capture the long-term impact of deployment on families and to improve the support provided to them. According to DOD staff, one purpose of the study is to assess the financial readiness of servicemembers. In addition, DOD staff told us that as part of a larger evaluation effort of its family support programs, DOD is collaborating with a team of researchers from Pennsylvania State University to develop outcome measures for the department’s financial readiness campaign and the services of its personal finance counselors. While the outcomes to be measured are still being determined, they may include changes in servicemembers’ financial knowledge and behaviors, the staff said. Consumer Financial Protection Bureau. CFPB’s financial literacy efforts have not been in place long enough for evaluation, but staff told us that evaluation will be a key component of its financial literacy activities and, as discussed later in this report, the bureau’s Office of Financial Education contracted with a third party with specialized expertise to help assess the effectiveness of financial literacy programs. Outcome-based evaluation is not always well suited for some financial literacy efforts, such as those that use mass media or distribute informational materials broadly. As such, several federal financial literacy programs that we reviewed collect information largely on output measures, such as number of individuals served or the volume of materials distributed. In some instances, the programs also measure the degree to which customers are satisfied with the service provided. Federal Trade Commission. FTC’s Division of Consumer and Business Education tracks its financial literacy activities based on materials distributed and webpages accessed by consumers and businesses. It reported that in 2010 it distributed more than 17 million publications and its consumer and business education websites were accessed more than 26 million times. Office of the Comptroller of the Currency. The agency collects data on the number of website hits, media placements, audience reach, and the dollar value of donated air time for its public service announcements. In fiscal year 2011, it ran four media campaigns related to financial literacy, which included print and radio features in English and Spanish that appeared 14,079 times in 44 states. Its Consumer Education websites received 699,904 visits. SEC Office of Investor Education and Advocacy. SEC measures the number of investors its education efforts reach, which was 17.8 million in fiscal year 2010.
SEC staff told us they are planning a future evaluation that will include, among other things, customer satisfaction with usefulness of investor education programs and materials. In addition, the Dodd-Frank Act directed SEC to submit by July 21, 2012, a study of retail investors’ financial literacy, which must identify “the most effective existing private and public efforts to educate investors.” National Education and Resource Center on Women and Retirement Planning. Staff at the Department of Health and Human Services told us they had not evaluated the program, but that the nonprofit administering the program had distributed more than 3,000 copies of publications and training materials available at conferences and workshops directed to the financial services industry, women’s groups, advocacy groups, and senior centers. Treasury’s Office of Financial Access, Financial Education, and Consumer Protection. The office collects participation statistics for its National Financial Capability Challenge, which provides teaching resources and encouragement and tests high school students on personal finance topics, and reported 84,372 students and 2,517 educators participating in 2011. The program also collects and publicly reports on average scores (by state and nationally), perfect scores, and students in the top 20 percent of scores nationally and by state. Saving Matters Retirement Savings Education Campaign. The Department of Labor conducts surveys at the program’s seminars and webcasts as part of an in-house evaluation process. The evaluations, conducted with the assistance of the Gallup Organization, assess the number of participants reached by the program, usefulness of the program, and satisfaction of participants, with a goal of an 85 percent satisfaction rate on its seminars, workshops, and webcasts. The department also tracks attendance at these events, the distribution of its publications, and the use of interactive online tools, videos, and webcast archives. As discussed previously, several federal financial literacy programs— such as Money Smart for Young Adults, HUD’s Housing Counseling Assistance Program, and the DOD Financial Readiness Campaign—are in the early stages of significant evaluations that may provide information about program effectiveness in the future. In addition to those evaluations of individual agency efforts, certain other federal efforts are under way that apply across agencies and seek more broadly to understand the most effective methods and strategies for improving financial literacy. Financial Literacy and Education Commission. The 2011 national strategy and its implementation plan set as one of its four goals identifying, enhancing, and sharing effective practices. As previously discussed, Treasury staff told us that the commission has begun to establish a clearinghouse of evidence-based research and evaluation studies, current financial topics and trends of interest to consumers, innovative approaches, and best practices. According to Treasury staff, the clearinghouse will be available through the MyMoney.gov website and will have links to research and data sets from federally supported financial literacy projects. The clearinghouse is being developed by a contractor but will be maintained by Treasury and is expected to be available around September 2012. In addition, the commission’s Research and Evaluation Working Group is charged with encouraging multidisciplinary research and sharing effective practices among federal agencies. 
In May 2012, the working group released a report on research questions and priorities that is intended to inform federal agencies, researchers, and others on the most important questions facing the field of financial literacy and to help make best use of limited research dollars. CFPB’s Office of Financial Education. CFPB’s Office of Financial Education recently contracted with The Urban Institute for a financial education program evaluation project, which seeks to increase understanding of which interventions can improve financial decision-making skills in consumers. The effectiveness of selected financial education programs will be evaluated using a data collection tool and tested against a control group. Staff told us they intend to use the insights from the study to provide direction to practitioners about how to design and support effective financial capability and money confidence programs. A report is expected to be issued in the spring of 2014. In addition, CFPB’s Office of Financial Education has collaborated with its Office of Research to develop metrics for financial education, according to agency staff. Office of Personnel Management. As part of its Retirement Readiness NOW program, the Office of Personnel Management is developing a rating system to determine which federal agencies are most effective in providing financial literacy and retirement education to the civilian labor force. According to agency staff, the ranking system is intended to help hold federal agencies accountable for their retirement education plans and strategies. Treasury’s Office of Financial Access, Financial Education, and Consumer Protection. This office has contracted out a research project assessing financial capability outcomes for both youth and adults, with results expected by the end of 2012. The office will test the hypothesis that increased financial capability—including financial information and education and access to simple, low-cost transaction and savings products—will enhance the financial stability and well-being of low- and moderate-income individuals and households. Federal financial literacy and housing counseling resources are spread across many federal agencies, the result of both legislation and programs evolving to address a variety of populations or topics. While we uncovered no duplication, some agencies or programs do have overlapping goals and activities, which raises the risk of inefficiency and underscores the importance of coordination. The creation of CFPB adds a new player to the mix. The agency will play a primary federal role in addressing financial literacy, yet some of its responsibilities overlap with those of other federal agencies. Effective collaboration among agencies entails defining and agreeing on respective roles and responsibilities and organizing collective efforts. CFPB appears to be making progress thus far in coordinating with federal agencies that have overlapping financial literacy responsibilities, but ensuring clear delineation of respective roles and responsibilities will also be essential to ensure efficiency. Moreover, the creation of CFPB may signal an opportunity for reconsidering how the federal government’s financial literacy efforts are organized. In particular, some consolidation of these efforts could help ensure the most efficient and effective use of federal financial literacy resources.
While our February 2012 report stated that we expected to suggest that Congress consider options for such consolidation, the Financial Literacy and Education Commission is better positioned to identify possible options and this would be consistent with the commission’s statutory responsibility to propose means of eliminating overlap and duplication among federal financial literacy activities. Overall, coordination among federal agencies with regard to financial literacy has improved in recent years, and the commission has played a critical role in this. The commission’s 2011 national strategy includes some elements that may be useful in guiding federal financial literacy efforts, but it could do more to identify the resources needed to implement the strategy and how federal resources might best be allocated among programs and agencies, characteristics we have found to be desirable for any national strategy. The commission faces the constraints of lacking its own budget or legal authority over member agencies to take any action, but, even so, it has the ability to provide recommendations or guidance to Congress or federal agencies. Without a clear discussion of resource needs and where resources should be targeted, policymakers lack information to help direct the strategy’s implementation and help ensure efficient use of funds. We found that nearly all significant federal financial literacy programs that we reviewed had assessed or measured their activities in some manner and many had undertaken some method of seeking to measure outcomes. While some measured the effect on participant behavior, often they assessed changes in participant knowledge or tracked output measures, such as the number of consumers reached. There is only limited knowledge about which federal financial literacy programs are most effective in achieving the key goal of improving consumer behavior, in large part because of the cost and difficulty of measuring these outcomes. Rigorous outcome-based evaluation is not necessarily practical or appropriate for every program, but its promotion and use, where feasible, is important to help Congress and federal agencies focus financial literacy resources on the most effective approaches and activities. In our February 2012 report, we stated that we expected to recommend that Congress consider requiring federal agencies to evaluate the effectiveness of their financial literacy efforts. However, we have found that the new initiatives that CFPB, Treasury, and the Financial Literacy and Education Commission have under way to assess effectiveness and identify best practices are positive steps in this direction. As a result, based on these ongoing efforts, we no longer believe that this recommendation is necessary at this time. We recommend that as part of its ongoing coordination efforts, the Consumer Financial Protection Bureau take steps to help ensure clear delineation of the respective roles and responsibilities between itself and other federal agencies that have overlapping financial literacy responsibilities. 
To help ensure effective and efficient use of federal financial literacy resources, we also recommend that the Secretary of the Treasury and the Director of the Consumer Financial Protection Bureau, in their capacity as Chair and Vice Chair of the Financial Literacy and Education Commission, and in concert with other agency representatives of the commission: identify for federal agencies and Congress options for consolidating federal financial literacy efforts into the activities and agencies that are best suited or most effective, and revise the commission’s national strategy to incorporate clear recommendations on the allocation of federal financial literacy resources across programs and agencies. We provided a draft of this report to the Departments of Agriculture, Defense, Education, Health and Human Services, Housing and Urban Development, Labor, and the Treasury, as well as to the Board of Governors of the Federal Reserve System, Consumer Financial Protection Bureau, Federal Deposit Insurance Corporation, Federal Trade Commission, Office of the Comptroller of the Currency, Office of Personnel Management, Securities and Exchange Commission, and the Social Security Administration. We incorporated technical comments provided by these agencies as appropriate. In addition, CFPB, the Department of Health and Human Services, and Treasury provided written responses that are reproduced in appendices III, IV, and V, respectively. In its response, CFPB neither agreed nor disagreed with the recommendations addressed to it, but it highlighted steps that its Offices of Financial Education, Servicemember Affairs, and Financial Protection for Older Americans are taking to delineate roles and responsibilities, improve coordination, and avoid duplication with other federal agencies. CFPB also noted that it is committed to ensuring that its activities are informed by data and analytics. For example, it cited a project it has launched that uses rigorous quantitative methodologies to assess the effectiveness of several existing financial education programs and provide direction to practitioners about how to design and support effective programs on improving consumers’ financial capability and confidence about money. Treasury said that it agreed with our recommendations to the Financial Literacy and Education Commission related to identifying options for consolidation and making recommendations on the allocation of federal financial literacy resources. Treasury noted that the department has already begun work with other members of the commission to define specific and measurable objectives that will help agencies assess the impact of their financial capability activities, which will provide a framework for any resource allocation recommendations the commission may have. The Department of Health and Human Services said in its response that it disagreed that its National Education and Resource Center on Women and Retirement Planning overlapped with the Department of Labor’s Wi$eUp program because the two programs have differing methodologies, approaches, and target populations. We acknowledge the differences between the two programs in our report. However, the definition for overlap presented in this report is “multiple agencies or programs with similar goals and activities,” and we believe that this accurately applies to these two programs, both of which are financial literacy programs designed for adult women. 
We are sending copies of this report to the appropriate congressional committees and to the heads of agencies that comprise the Financial Literacy and Education Commission. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions are listed in appendix VI. Our objectives were to address (1) what is known about the cost of federal financial literacy activities; (2) the extent and consequences of overlap and fragmentation among financial literacy activities; (3) what the federal government is doing to coordinate its financial literacy activities; and (4) what is known about the effectiveness of federal financial literacy activities. For the purposes of our analysis, we considered duplication to occur when two or more agencies or programs are engaged in the same activities and provide the same services to the same beneficiaries. Overlap refers to when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar users. Fragmentation refers to circumstances in which more than one federal agency is involved in the same broad area of national need. Our report focuses largely on federal programs or activities that were relatively comprehensive in scope or scale and included financial literacy as a key component rather than a tangential goal. We generally excluded from our review programs or activities for which financial literacy was only a minimal component; that provided financial information related to the administration of the program itself rather than information aimed at increasing the beneficiaries’ financial literacy and comprehension more generally; that were purely internal to the agency; or that provided individualized financial services or advice rather than education. Using these criteria, we identified 16 significant financial literacy programs and 4 significant housing counseling programs in operation in fiscal year 2010. To address our first objective, we collected and reviewed the President’s Budget for fiscal years 2010, 2012, and 2013; budget justifications, as needed; congressional appropriations; and other sources that included cost information. For many federal agencies, financial literacy activities were not organized as separate budget line items or cost centers. In these cases, we asked agency staff to estimate the portion of program costs that could be attributed to financial literacy activities for fiscal year 2010, which is the year for which we reported costs. This typically entailed estimating the cost of that portion of staff time devoted to financial literacy, as well as the cost of contracts, printing, or other resources related to financial literacy activities. Because the methods for estimating costs varied, these costs may not be fully comparable across agencies. To assess the reliability of these estimates, we interviewed agency staff about their cost estimation methodology, what their estimate included, and what assumptions they used in making the estimate. 
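Where financial literacy was not a separate budget line item, the attribution described above reduces to simple arithmetic: an estimated share of staff compensation plus directly related contracts, printing, and other resources. The sketch below illustrates that arithmetic with entirely hypothetical salaries, time fractions, and contract amounts; none of the figures come from the agencies we reviewed.

```python
# Minimal sketch (hypothetical figures, not from the report) of the cost attribution
# the methodology describes: when financial literacy is not a separate budget line
# item, the attributable cost is estimated as the financial-literacy share of staff
# time plus directly related contracts, printing, and other resources.
staff = [
    # (annual salary plus benefits, estimated fraction of time on financial literacy)
    (150_000, 0.50),
    (110_000, 0.25),
    ( 95_000, 0.10),
]
staff_cost = sum(comp * share for comp, share in staff)

direct_costs = {
    "evaluation contract": 75_000,
    "printing and distribution": 20_000,
    "web hosting and tools": 5_000,
}

estimate = staff_cost + sum(direct_costs.values())
print(f"Estimated financial literacy cost, FY 2010: ${estimate:,.0f}")
# Because each agency chooses its own fractions and cost categories, estimates built
# this way are not fully comparable across agencies, as noted above.
```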
Although costs may not be comparable across the agencies because agencies used differing methodologies, we determined that the data are reliable for the purposes of generally estimating federal dollars spent on financial literacy activities. To address our second and third objectives, we reviewed a 2009 report by the RAND Corporation that cataloged federal financial literacy efforts; reports from the President’s Advisory Council on Financial Capability; the national strategies and supporting documents of the Financial Literacy and Education Commission; and other reports as appropriate. We also reviewed the commission’s MyMoney.gov website and the websites of individual federal agencies related to financial literacy. In addition, we reviewed federal agency strategic plans; performance and accountability reports; budget justifications; memorandums of understanding between agencies or with nonfederal entities; and laws related to financial literacy activities or programs. Further, to assess the extent of overlap or duplication, we collected and analyzed characteristics of federal financial literacy programs and identified similarities and differences among programs’ purposes, subject matter content, targeted populations, and delivery methods. We assessed the commission’s 2011 National Strategy for Financial Literacy, in part, by benchmarking it against our prior work that identified the general characteristics of an effective national strategy. Those recommended characteristics for national strategies had been developed by reviewing several sources of information, which included the Government Performance and Results Act of 1993; legislative and executive branch guidance for national strategies; general literature on strategic planning and performance; and our prior work on issues related to planning, integration, implementation, and other related subjects. To determine what is known about the effectiveness of federal financial literacy activities, we collected evaluations, as well as any available information on the outputs or outcomes of these activities. As applicable, we reviewed output data such as information on numbers of program participants or consumers reached, website visits, and copies of publications or other materials distributed that were available through a variety of sources. For example, as available, we reviewed results of surveys of customer satisfaction, attitudes, or intention to change behavior, and tests that measured changes in program participants’ knowledge. In addition, we reviewed information on program effect that appeared in agencies’ strategic plans and performance and accountability reports. We also reviewed the 2009 RAND report, which included self-reported information from federal agencies on methods they have used to evaluate their financial literacy programs, and we updated this information as necessary through interviews with agency staff. In addition, we collected any available studies and evaluations that had been conducted on the outcomes of federal financial literacy activities, which included evaluations conducted by the agencies themselves or by external researchers. Each of the studies and evaluations cited in our report was reviewed for methodological reliability and determined to be sufficiently reliable for our purposes.
Finally, to address all four of our objectives, we interviewed staff who address financial literacy issues at 17 federal agencies that we had identified in prior work as potentially having significant involvement in financial literacy—the Board of Governors of the Federal Reserve System; Consumer Financial Protection Bureau; Departments of Agriculture, Defense, Education, Health and Human Services, Housing and Urban Development, Labor, and Treasury; Federal Deposit Insurance Corporation; Federal Trade Commission; Internal Revenue Service; Office of the Comptroller of the Currency; Office of Personnel Management; Securities and Exchange Commission; Social Security Administration; and the U.S. Mint. We also interviewed staff at NeighborWorks America (a federally chartered nonprofit corporation) and representatives of the National Financial Education Network of State and Local Governments, the President’s Advisory Council on Financial Capability, and two nonprofit organizations, the American Savings Education Council and the National Endowment for Financial Education. We conducted this performance audit from May 2011 to July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In 2009, the Departments of the Treasury and Education asked federal agencies to self-identify their financial literacy efforts, which resulted in a 2009 report by the RAND Corporation that identified 56 federal financial literacy programs among 20 agencies. We reported these results in a 2011 report, but our subsequent analysis of these 56 programs found a high degree of inconsistency in how different agencies defined financial literacy programs or efforts and whether they counted related efforts as one or multiple programs. For the purposes of our current report, we developed criteria for identifying significant federal financial literacy and housing counseling activities and programs. We defined such activities or programs as those that were relatively comprehensive in scope or scale and for which financial literacy or housing counseling was a key objective rather than a tangential goal. As appropriate, we defined a related set of activities (such as a series of webpages from one agency) as a single program. In addition, we excluded programs or activities (1) for which financial literacy was only a minimal component; (2) that provided financial information related to the administration of the program itself rather than information aimed at increasing the beneficiaries’ financial literacy and comprehension more generally; (3) that were purely internal to the agency, such as information provided to agency employees on their employment and retirement benefits; and (4) that represented individualized services or advice. We included as federal programs those of NeighborWorks America, a government-chartered, nonprofit corporation that receives federal funding for housing counseling, including through an annual appropriation from Congress. Finally, the RAND report was based on programs and activities in place in 2009, while our list reflects programs and activities in place during fiscal year 2010.
In addition to the contact named above, Jason Bromberg (Assistant Director), Kimberly Cutright, Mary Coyle, Jonathan Kucskar, Roberto Piñero, Rhonda Rose, Jennifer Schwartz, and Andrew Stavisky made key contributions to this report.
Financial literacy—the ability to use knowledge and skills to manage financial resources effectively—plays an important role in helping to ensure the financial health and stability of individuals and families. Federal agencies promote financial literacy through activities including print and online materials, broadcast media, individual counseling, and classroom instruction. In response to a mandate requiring GAO to identify duplicative government programs and activities, this report addresses (1) the cost of federal financial literacy activities; (2) the extent of their overlap and fragmentation; (3) the federal government’s coordination of these activities; and (4) what is known about their effectiveness. GAO reviewed agency budget documents, strategic plans, performance reports, websites, and other materials, and interviewed representatives of federal agencies and other organizations. The federal government spent about $68 million on 15 of the 16 financial literacy programs that were comprehensive in scope or scale in fiscal year 2010; cost data were not available for the Consumer Financial Protection Bureau (CFPB), which was created that year. In addition, about $137 million in federal funding in four other major programs was directed to housing counseling, which can include elements of financial education. Since fiscal year 2010, at least four of these programs have been defunded and CFPB has received resources to fund its financial literacy activities. Federal financial literacy and housing counseling activities are spread across multiple agencies and programs. GAO has not identified duplication—programs providing the same activities and services to the same beneficiaries—but has found overlap—multiple programs with similar goals and activities—in areas such as housing counseling and the financial education of youth. Further, CFPB was charged with some financial education duties that overlap with those of other federal agencies, making it essential that their respective roles and responsibilities be clearly delineated to ensure efficient use of resources. Moreover, CFPB’s creation may signal an opportunity for consolidating some federal financial literacy efforts, which would be consistent with federal goals of reorganizing and consolidating federal agencies to reduce the number of overlapping government programs. Federal agencies have made progress in recent years in coordinating their financial literacy activities and collaborating with nonfederal entities, in large part due to the efforts of the federal multiagency Financial Literacy and Education Commission. The commission’s 2011 national strategy includes some useful elements—such as plans to coordinate interagency communication, improve strategic partnerships, and promote evaluation. However, it does not recommend or provide guidance on the appropriate allocation of federal resources among programs and agencies, which GAO has found to be desirable in a national strategy. While the commission’s governance structure presents challenges in addressing resource issues, without a clear discussion of resource needs and where resources should be targeted, policymakers lack information to help direct the strategy’s implementation and help ensure efficient use of funds. The wide range of federal financial literacy activities and evaluation methods makes it difficult to systematically assess overall effectiveness or compare results across programs. 
Among the federal financial literacy programs that we reviewed, most included some evaluation component. Some measured the effect on participant behavior and others assessed changes in participant knowledge or tracked output measures, such as the number of consumers reached. Rigorous evaluation measuring behavior change is costly and methodologically challenging and may not be practical for all types of activities. However, CFPB and other federal entities have new efforts under way that seek to determine the most effective approaches and programs, which GAO believes to be positive steps toward helping ensure the best and most efficient use of federal financial literacy resources. GAO recommends that CFPB clearly delineate with other agencies respective roles and responsibilities, and that the Financial Literacy and Education Commission identify options for consolidating federal financial literacy efforts and address the allocation of federal resources in its national strategy. CFPB neither agreed nor disagreed with these recommendations and the Department of the Treasury agreed with the recommendations directed to the commission.
As we have reported in the past, the impact of invasive species in the United States is widespread, and their consequences for the economy and the environment are profound. They affect people’s livelihoods and pose a significant risk to industries such as agriculture, ranching, and fisheries. The cost to control invasive species and the cost of damages they inflict, or could inflict, on property or natural resources are estimated to total billions of dollars annually. For example, according to the U.S. Department of Agriculture (USDA), the Formosan termite causes at least $1 billion annually in damages and control costs in 11 states (in 2001 dollars). USDA also estimates that, if not managed, fruit flies could cause more than $1.8 billion in damage each year (in 2001 dollars). According to the National Invasive Species Council, hundreds, and perhaps thousands, of nonnative species have established populations in the United States. Invasive species continue to be introduced in new locations, with recent examples including the northern snakehead fish in Maryland and the emerald ash borer in Michigan. Many scientists believe that invasive species are a significant threat to biodiversity and are major or contributing causes of population declines for almost half the endangered species in the United States. Invasive species can alter entire ecosystems by disrupting food chains, preying on critical native species such as pollinators, increasing the frequency of fires, or—as in the case of some plants—simply overshadowing and smothering native plants. Invasive species may arrive unintentionally as contaminants of bulk commodities such as food, in packing materials and shipping containers, or in ships’ ballast water. Others may be introduced intentionally; kudzu, for example—a rapidly growing invasive vine that thrives in the southeastern United States—was intentionally introduced from Japan as an ornamental plant and was used by USDA in the 1930s to control soil erosion. Other invasive species are imported as crops, livestock, aquaculture species, or pets, and later escape or are released into the environment. (See fig. 1 for details on the mute swan, intentionally introduced to adorn parks and private bird collections.) Not all nonnative species, however, cause harm. Many nonnative species, such as cattle, wheat, soybeans, many fruits, and ornamental plants (such as tulips and chrysanthemums), have been largely beneficial and their propagation controllable. Various terms have been applied to invasive species, including “alien,” “exotic,” “nonindigenous,” and “nonnative.” In this report, we use the definition provided by Executive Order 13112, which states that an invasive species is an alien species whose introduction does or is likely to cause economic or environmental harm or harm to human health. An alien species is one that is not native to a particular ecosystem. (We used this definition, as well as other factors, in selecting species to profile in this report.) More than 20 federal agencies in 10 departments—including USDA, Commerce, Defense, and the Interior—have responsibility for some aspect of invasive species management. (See fig. 2.) States also have a significant management role, but the extent of their involvement varies considerably. 
USDA has the largest federal role because of its responsibility to (1) conduct port-of-entry inspections and quarantine goods coming into the country, (2) manage more than 190 million acres of national forests and grasslands, (3) conduct research, and (4) provide technical assistance to the private sector and in large agricultural pest control projects. We reported that in fiscal year 2000, seven of the departments obligated more than $624 million for activities related to invasive species management. According to the council, appropriations to those departments for such activities increased in fiscal year 2001 to approximately $1.05 billion, of which USDA received almost $975 million. In February 1999, invasive species prevention and control efforts received heightened attention with the issuance of Executive Order 13112. The executive order established the National Invasive Species Council, which is now made up of the secretaries and administrators of 10 federal departments and agencies. The executive order required the Secretary of the Interior to establish an advisory committee to provide information and advice to the council. Accordingly, in November 1999, the secretary established the Invasive Species Advisory Committee, composed of 32 nonfederal members representing a range of interests relevant to invasive species, including academia, environmental organizations, industry, trade associations, Native American tribes, and state government. The executive order also required that the council develop a national invasive species management plan using a public process and revise it biennially. Among other things, the executive order called for the plan to (1) recommend performance-oriented goals and objectives and specific measures of success, (2) recommend measures to minimize the risk of new introductions of invasive species, and (3) review existing and prospective approaches and authorities for preventing the introduction and spread of invasive species. The council and its staff worked with members of the advisory committee and other interested parties to produce draft management plans for public comment. In January 2001, the council issued the final plan, which identifies nine categories of planned actions to aid in the prevention, control, and management of invasive species in an effort to minimize their economic, environmental, and human health impacts. (See fig. 3.) The council’s plan calls for member departments to implement a total of 86 discrete actions, each with an associated due date or start date. Examples of the actions include establishing and coordinating long- and short-term capacities for basic and applied research on invasive species and gathering and disseminating information on the council’s Web site. The United States and Canada have a mutual interest in limiting the introduction or spread of invasive species across their borders. The two countries share more than 5,500 miles of terrestrial and aquatic border that provide potential pathways for invasive species. Each country is the other’s largest trading partner, sending and receiving a variety of goods, such as crops, livestock, wood, and horticultural products, that can harbor invasive species. Therefore, species that enter one of the two countries have opportunities to spread into the other. The Great Lakes—a shared U.S. and Canadian resource—have been subject to invasion by nonnative species since the settlement of the region. 
At least 160 nonnative aquatic organisms, most of them from Europe, Asia, and the Atlantic coast, have become established in the lakes since the 1800s. More than one-third of the organisms have been introduced in the past 30 years, a trend coinciding with the opening of the St. Lawrence Seaway in 1959 and other changes in ship operations. Ballast water in ships is considered a major pathway for the transfer of invasive aquatic organisms to the Great Lakes. Ballast is essential to the safe operation of ships because it enables them to maintain their stability and control how high or low they ride in the water. Ships take on or discharge ballast water over the course of a voyage to counteract the loading or unloading of cargo, and in response to sea conditions. The ballast that ships pump aboard in ports and harbors may be fresh, brackish, or salt water. These waters may contain various organisms that could then be carried to other ports around the world where they might be discharged and survive. Canada adopted voluntary ballast water management guidelines in 1989 in response to the 1988 discovery of nonnative zebra mussels in Lake St. Clair. The Canadian guidelines were superseded by new guidelines in 2000, which encourage ships’ masters entering the Great Lakes and other waters under Canadian jurisdiction to employ management practices—such as exchanging ballast water in the open ocean—to minimize the probability of future introductions of harmful aquatic organisms. They also direct ships’ masters to provide ballast water details to Canadian authorities. The United States followed the Canadian lead and passed the Nonindigenous Aquatic Nuisance Prevention and Control Act of 1990. This legislation directed the Secretary of Transportation to issue voluntary ballast water guidelines and regulations for the Great Lakes. Joint United States and Canadian voluntary guidelines, which closely tracked the 1989 Canadian guidelines, went into effect in March 1991. The U.S. Coast Guard issued the first set of mandatory ballast water regulations for the Great Lakes in April 1993. The National Invasive Species Act of 1996 amended the 1990 act and required the Secretary of Transportation to issue voluntary ballast water guidelines for the rest of the United States. The scope of existing analyses of the economic impact of invasive species in the United States ranges from narrow to comprehensive. Narrowly focused analyses include estimates of past damages that are limited to commercial activities such as agricultural crop production and simple accountings of the money spent to combat a particular invasive species. These estimates typically do not include the economic impact of these species on natural ecosystems, the expected costs and benefits of alternative measures for preventing their entry or controlling their spread, or the impacts of possible invasions by other species in the future. On the other hand, more comprehensive—and rare—analyses are those that examine the past and prospective economic impact of invasive species on both commercial activities and natural ecosystems and the potential costs of preventing or controlling them. Few analyses have been done that examine the likelihood that new species will invade new locations and that estimate their costs. Although the estimates we reviewed may have served the purpose for which they were intended, the narrow scope of many of them may limit their usefulness to decision makers formulating federal policies on prevention and control.
In general, the more comprehensive the approach used to assess the economic impacts of invasive species, the more useful it is likely to be to decision makers for identifying potential invasive species, prioritizing their economic threats, and allocating resources to minimize overall damages. Federal agencies recognize the value of this type of analysis and have recently taken steps to use it more often. According to officials from several agencies, however, efforts to improve economic impact analyses are hampered by a lack of data on invasive species and a lack of economists assigned to assessing their economic impacts on commercial activities and natural ecosystems. The narrow scope of many analyses of the economic impacts of invasive species may limit their usefulness to decision makers developing policies and allocating resources to address the problem. First, many of the analyses we reviewed do not address the economic impact of invasive species on natural area ecosystems. Instead, they reflect the impacts of invasive species on commercial activities such as agricultural and timber production and fisheries. This reflects the fact that most of the management and control of invasive species in the United States has focused on those species that damage agricultural crops and livestock. For example, the Federal Interagency Committee for the Management of Noxious and Exotic Weeds (FICMNEW) studied the economic impact of weeds on the U.S. economy and found the estimated value of losses from invasive weed species to be about $15 billion per year (Federal Interagency Committee for the Management of Noxious and Exotic Weeds, Invasive Plants: Changing the Landscape of America, Fact Book, Washington, D.C.: FICMNEW, 1998). However, the committee reported that the economic impact on most nonagricultural sites was not available. Focusing solely on the impact of invasive species on commercially valuable activities ignores the potential impact of invasive species on ecosystems as a whole, possibly understating the impact of these species. Consistent with that point, according to the Environmental Protection Agency, the true cost of invasive species is underestimated if estimates of damages do not include lost ecosystem services, such as water purification and aesthetic values. Second, many of the existing analyses do not fully account for the expected costs and benefits that are associated with different control methods for invasive species. Two frequently cited summations of the aggregate impacts of invasive species in the United States were based on estimates of this type. The first, by the U.S. Office of Technology Assessment (OTA), estimated that by 1991 at least 4,500 nonnative species had become established in the United States, of which about 600 had caused severe harm. The OTA was able to obtain data showing that the economic impact of 79 of these species totaled about $118 billion between 1906 and 1991 and that this impact included damage to agricultural crops, industrial activities, and human health. The second effort was by researchers at Cornell University, who estimated in 1999 that approximately 50,000 nonnative plant and animal species are known to have entered the United States—although not all have established harmful populations—and that the overall cost of the harmful species is about $137 billion annually.
However, the estimates that these aggregate studies relied on typically did not include an analysis of whether control measures are desirable given their costs or what the most cost-effective methods for preventing or controlling particular invasive species would be. (Many of the estimates included in these aggregate studies also lack information on the impact of invasive species on natural area ecosystems.) It is not unusual for analyses to lack information for the assessment of the cost-effectiveness of prevention and control measures. The most complete data on invasive species damages, and prevention and control costs and effectiveness are available for known pests that the USDA has identified as serious threats to agriculture on the basis of past invasions. These include diseases and pests such as the virus that causes foot-and-mouth disease, citrus canker, and the Mediterranean fruit fly. Yet, even for these pests, relatively little is known about the likely success of alternative methods for preventing their entry or controlling their spread. For example, an official in charge of risk analysis for USDA’s Animal and Plant Health Inspection Service (APHIS) told us that there is a general lack of information on the likely success of different measures—short of outright bans on the importation of some products—that could be used to prevent the importation of invasive species into the country. He said that even for a pest such as the one that causes foot-and-mouth disease, for which the potential costs of an outbreak have been studied, data are not available on the cost-effectiveness of many prevention methods. Prevention methods could range from a ban of all products that might carry the disease from all countries known to harbor it to less stringent restrictions that allow more trade but that might provide less protection. For invasive species that have previously entered the United States and caused damages, there is also little information available on the likelihood that they will do so again at particular times and by particular pathways. Even less information of this nature is available for non-agricultural pests. More comprehensive analyses that include such information may help decision makers allocate limited resources among different prevention and control efforts. A third way in which the narrow scope of many estimates may limit their usefulness is that they focus on the impact of species that are known to cause problems but do not provide decision makers with information on the likelihood that new species will invade and cause damage. The typical estimate includes data on the damages already caused by species or the money spent to control them. The OTA and Cornell estimates mentioned above are largely based on these types of estimates. Other examples include USDA’s report that it cost about $26 million between 1996 and 2000 to remove trees infested with Asian long-horned beetle in New York and Illinois and the estimate by North Dakota State University researchers in 1996 that three species of knapweed cause about $48 million per year in damage to Montana’s economy. Data such as these can be used to estimate the continued effects of a species in the same location or the potential effects in a new location. For example, researchers used data on the effects that the European green crab had had on East Coast fisheries to estimate that this invasive species could damage native oyster, clam and crab fisheries on the West Coast by as much as $54 million per year. (See fig. 
4 for more information on the European green crab.) However, experts in biological invasions caution that it is difficult to extrapolate from a past invasion event to introductions of new species that have not occurred. According to an official with the Department of the Interior, decision makers need guidance on which potential invasive species pose the greatest threat to the United States and how to best design policies for combating them. Some researchers suggest that the best way to protect the United States from invasive species is to try to predict new arrivals of potentially invasive species, study the basic biology of probable new arrivals, and work on biological controls for them as part of a program for early detection and rapid response. One environmental scientist has suggested that one of the best ways to predict the introduction of and damage from species new to the United States is to study recent introductions of species into other countries that have ecosystems similar to those in this country. While USDA and others have done some studies of this type, particularly for agricultural pests, the preponderance of economic analysis has focused on species that have already invaded the United States rather than on new species that could invade in the future. While most of the analyses that we reviewed have limitations in their scope that lessen their usefulness to decision makers, some used a more comprehensive approach. Some analyses accounted more fully for the expected costs and benefits to producers and consumers of different control measures. For example, to further improve analysis of the expected costs and benefits of control measures, the Risk Assessment and Management Committee under the Aquatic Nuisance Species Task Force expanded the scope of existing federal risk assessment processes and methodologies to include the socioeconomic impacts of invasive species. In a case study covering, in part, the effects of importing the Asian black carp, U.S. Geological Survey (USGS) researchers balanced the potential for economic gains from intentionally introducing this species—it eats snails that may harbor parasites in fishponds and zebra mussels in the wild—against the potential for economic and environmental damage if it became established in the wild. Risks were estimated by expert judgment. Based on the outcome of the assessment, the Aquatic Nuisance Species Task Force decided that establishment of this species would create an unacceptable level of potential harm. The U.S. Fish and Wildlife Service has proposed amending its regulations to add the Asian black carp to a list of injurious fish, crustacean, and mollusk species that are not allowed to be imported into the United States. Researchers also recently discussed how integrating risk assessment and benefit-cost analysis into the regulatory process can provide decision makers with more information than is available when only a single dimension of information is considered. These two dimensions of information give decision makers an opportunity to evaluate the tradeoffs that they face when they choose among alternative regulatory measures. The researchers addressed the question of the tradeoff between banning the imports of commodities that may harbor invasive species and enjoying the benefits of those commodities. As an example, they analyzed a partial ban on imports of Mexican avocados and found that, based on the assessment of invasion risk alone, the ban seemed to have greater benefits than costs.
However, when they incorporated into their analysis the costs to U.S. consumers that the ban would impose in terms of reduced availability of low-cost avocados, they found that less stringent regulations would likely be more desirable than the ban. As another example, the same researchers demonstrated the benefit of simultaneously integrating benefit-cost analysis and risk assessment into the evaluation of risk management options for the invasive fungus that causes Karnal bunt disease in wheat. In this case, they illustrated that analyses that estimate invasion risks and the costs and benefits of control programs for this species, but do not adjust the estimated benefits of each control program component for risk, may not help decision makers choose the control policies with the greatest overall benefit. USDA had estimated that the Karnal bunt fungus could cause more than $500 million per year in damages to the U.S. wheat industry by reducing the amount of wheat suitable for export and had adopted a program to control the spread of the fungus. However, researchers found that the USDA’s estimate was incomplete, in part because it focused on reducing the probability of an outbreak of the disease by adopting multiple quarantine options but did not examine whether each option was an economically efficient quarantine policy. When the researchers examined these options individually, they were able to identify the most efficient options, that is, those imposing the least cost on producers. According to these researchers, because the agency did not limit itself to the most efficient options, the costs of its program for controlling the spread of the fungus exceeded the program’s benefits. The researchers suggest that failure to look at the expected marginal benefits and costs of various quarantine options may have led to the adoption of an unnecessarily costly quarantine policy. Another way in which some estimates have been more comprehensive is by including an examination of the impact of invasive species on more than just commercial commodities. For example, in estimating the effect of gypsy moth caterpillars on forest trees, researchers estimated that programs to slow the moths’ spread would yield between $1 billion and $4.8 billion in present-value benefits over 25 years—in increased timber production, recreational opportunities, residential and scenic land values, water quality, and other amenities—depending on the rate of spread and the control programs adopted. In another example, researchers used an economic model based on property values to estimate damages to lakefront properties in New Hampshire from milfoil, an invasive aquatic weed that causes serious economic, recreational, and ecological damage. Their estimates showed that between 1990 and 1995, property values on milfoil-infested lakes were about 16 percent lower than those of similar properties on uninfested lakes. According to an official with the Department of Commerce, the state of New Hampshire adopted a program to control this invasive weed on the basis of this study. Finally, some analysts are taking more comprehensive approaches by analyzing the likelihood that species will be introduced, become established, and cause harm in particular geographic areas or via particular pathways. For example, a researcher has built upon earlier USDA work on pest risk assessment to evaluate the likelihood of establishment of Eurasian poplar leaf rust.
The researcher combined information on the incidence of the disease and the location of susceptible plant hosts in the United States with data on past invasions of this species in similar ecosystems abroad to assess the likely danger to geographic areas in the United States. In another example, USDA examined the likelihood that the Eurasian pine shoot beetle would enter and spread via various pathways and which pathways would pose the greatest risk of harm. This beetle emerged as a new and potentially serious pest of timber in the upper midwestern United States in 1992. Potential losses from the beetle were large, and the state of Michigan proposed 25 mitigation measures that would have included large expenditures on pesticide sprays. USDA’s analysis, which included a risk assessment of the likely pathways by which the beetle might spread, showed that 99.8 percent of the risk of spread occurred via a single pathway during a 2-week period in the timber’s processing. Using this information, the timber industry took appropriate control measures during this 2-week period to effectively manage the risk at low cost and without the need for regulation. Recent federal actions may help to prompt further improvements in the economic impact analysis available to decision makers. Among other things, Executive Order 13112 calls on federal agencies to prevent the introduction of invasive species and to detect, respond rapidly to, and control them in a cost-effective and environmentally sound manner. The executive order also directs agencies to determine that the benefits of any actions they take that are likely to cause or promote the introduction or spread of invasive species clearly outweigh the potential harm caused by the species and to take measures to minimize the risk of harm in conjunction with these actions. Implementing the order will thus require agencies to undertake more comprehensive studies of risks, costs, and benefits. In addition, the federal Aquatic Nuisance Species Task Force has developed a process to evaluate the risk of introducing nonnative organisms into a new environment and, if needed, determine the correct management steps to mitigate that risk. The task force has also developed guidelines to assist states in the development of their own management plans for aquatic nuisance species. The guidance, formally adopted by the task force in 2000, emphasizes a need for feasible, cost-effective, comprehensive plans that can be developed quickly and used to focus on the most pressing species problems that can be effectively managed. As an example of how these efforts have been used, the U.S. Fish and Wildlife Service, USDA’s Animal and Plant Health Inspection Service, and the National Oceanic and Atmospheric Administration, in conjunction with state authorities, have prevented the spread of the aquatic weed caulerpa in U.S. coastal waters. USDA has also taken recent steps to refine its risk assessment practices. Over the years, in making decisions about allowing the importation of certain agricultural commodities from countries known to harbor potentially serious plant pests, USDA occasionally used analysis that led to partial rather than outright bans of those commodities in recognition of both the risks of invasion and the benefits that consumers would obtain from access to those commodities.
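To illustrate the kind of comparison these researchers and agencies describe, the following sketch (in Python) shows how an assessment of invasion risk and a benefit-cost analysis can be combined to rank regulatory options. The figures are purely hypothetical and are not drawn from the avocado, Karnal bunt, or other studies discussed above; the point is only that an option's expected cost reflects the probability that a pest becomes established, the damages if it does, the cost of the control program, and the consumer benefits forgone when imports are restricted.

# Illustrative sketch only: hypothetical numbers, not estimates from this report.
# Combines a risk assessment (probability of establishment) with benefit-cost
# analysis (expected damages, program costs, and forgone consumer benefits).

def expected_net_cost(p_establish, damages, program_cost, consumer_loss):
    """Expected annual cost to society of a regulatory option.

    p_establish   -- probability the pest becomes established under this option
    damages       -- annual damages if the pest becomes established
    program_cost  -- annual cost of inspections, quarantines, or treatment
    consumer_loss -- annual consumer benefits forgone because imports are restricted
    """
    return p_establish * damages + program_cost + consumer_loss

# Hypothetical options for a commodity that may harbor an invasive pest.
options = {
    "no action":           expected_net_cost(0.20, 500e6, 0,    0),
    "outright ban":        expected_net_cost(0.01, 500e6, 5e6,  80e6),
    "partial restriction": expected_net_cost(0.03, 500e6, 15e6, 20e6),
}

for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name:20s} expected annual cost: ${cost / 1e6:,.0f} million")

Under these illustrative numbers, the partial restriction has the lowest expected cost, which parallels the researchers' avocado finding; with different assumed probabilities or damages, an outright ban or no action could rank first, which is precisely the tradeoff information that decision makers need.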
An impetus for doing more of this type of analysis was international trade agreements that call for the United States and others to use the least restrictive measures to protect against invasive pests. In other words, the trade agreements prohibit countries from imposing outright bans of certain agricultural commodities if biological and economic data show that partial bans would be just as effective. Partly in response to these agreements, USDA’s Animal and Plant Health Inspection Service issued for the first time in August 2001 guidelines for the agency to use when assessing the risks posed by diseases and pests. These guidelines state that risk assessments should consider the probable biological and economic consequences of the entry and establishment of invasive species, as well as the likelihood that those species will enter. However, according to the chief of APHIS Risk Assessment Systems, agency assessments done in the past frequently focused on the likelihood that species will enter and become established and, because of a lack of credible data, were less focused on their biological and economic consequences. Moreover, USDA recently established a task team to improve the ways in which risk assessment is incorporated into the department’s analyses of the economic impacts of invasive species. Agency officials said that this effort would better enable federal decision makers to adhere to Executive Order 13112’s emphasis on a risk-based approach to dealing with invasive species. In addition, the officials said that the information generated by the task team would also help the National Invasive Species Council implement the national management plan, which calls for a risk-based approach to preventing potential invasive species from becoming established. Officials from the National Invasive Species Council staff and departments within the council agreed that improved economic analysis would help the federal government develop an overall budget for invasive species programs. However, they cautioned that the capacity of the federal government to do this work is limited. Specifically, there are limits to the data available on the biology of invasive species and the impacts they have—particularly on natural ecosystems—and the effectiveness of control methods. The officials also stated that there are not enough resources devoted to analyzing the impacts of invasive species. While the National Invasive Species Council’s 2001 management plan, Meeting the Invasive Species Challenge, calls for actions that are likely to help control invasive species, it lacks a clear long-term outcome and quantifiable performance criteria against which to evaluate the overall success of the plan. Federal officials recognize that there are deficiencies in the plan and are working toward improving it. At present, however, the only available performance measure that can be used to assess overall progress is the percentage of planned actions that have been completed by the due dates set in the plan. By this measure, implementation has been slow. Specifically, the council departments have completed less than 20 percent of the planned actions that were called for by September 2002, although they have begun work on others. A large majority of the members of the invasive species advisory committee who responded to our survey believe that the pace of implementation is inadequate. 
In addition, some of the actions that agencies have reported to the council are not clearly linked to coordinated implementation of the management plan. Our survey and other evidence indicate numerous reasons for the slow progress, including delays in establishing implementation teams that will be responsible for carrying out the planned actions, the low priority given to implementation by the council, and the lack of funding and shortage of staff responsible for doing the work. Another factor contributing to slow progress was the need to transition to a new administration. However, even if the actions in the plan, many of which would likely contribute to preventing and controlling invasive species, were more fully implemented, their effect would be uncertain because they typically do not call for quantifiable improvements in invasive species management or control. The national management plan does not clearly define a long-term outcome or measures of success, as called for by sound management principles. The executive order states that the management plan shall “detail and recommend performance-oriented goals and objectives and specific measures of success for federal agency efforts concerning invasive species.” Consistent with that requirement, the council and its advisory committee adopted as one of their guiding principles that efforts to manage invasive species are most effective when they have goals and objectives that are clearly defined and prioritized. Both the executive order and this guiding principle are also consistent with the Government Performance and Results Act of 1993, which emphasizes setting measurable goals and holding agencies accountable by evaluating performance against those goals. However, the council did not articulate in the plan a long-term outcome or condition toward which the federal government should strive. For example, the plan does not contain overall performance-oriented goals and objectives, such as reducing the introduction of new species by a certain percentage or halting the spread of established species on public lands. Instead, the plan contains an extensive list of actions that, while likely to contribute to preventing and controlling invasive species, are not clearly part of a comprehensive strategy. Similarly, many of the actions in the plan call for the federal departments to take certain steps rather than achieve specific results and do not have measurable outcomes. For example, the plan calls for the council, starting in January 2001, to work with relevant organizations to “expand opportunities to share information, technologies, and technical capacity on the control and management of invasive species with other countries.” Another action item calls for the council to have outlined by June 2001 a plan for a campaign to encourage U.S. travelers to voluntarily reduce the risk of spreading invasive species overseas. Other actions call for the council to support international conferences and seminars. We believe that these types of actions are more process-oriented than outcome-oriented. Taken individually, the actions may be useful, but it will be difficult to judge whether they are successful and have contributed to an overall goal. Respondents to our survey also raised concerns about the lack of measurability in the plan.
While the majority of respondents (17 of 23) said that the plan is focused on the most important issues, 9 criticized it for a lack of specificity or a clear mechanism for measuring effectiveness or holding departments accountable for implementing it. Of these, several commented that it is unclear how one will know when actions have been implemented and completed. Others noted that there are no consequences for the council, staff, or agencies if they miss deadlines. Other stakeholders made similar comments to us. For example, one person who was involved in the development of the management plan told us that it represents a “fundamentally misguided approach” and that it contains no coherent goal or measures of success. He said that the plan should have measures of success such as a reduction in the rate of introduction or spread of species. Another stakeholder said that the plan is unclear with regard to what actions would be enough to help solve the problem and echoed concerns about the difficulty of measuring success. Eight respondents to our survey, however, made more positive comments about the degree of specificity in the plan, stating that the plan was clear, measurable, and achievable and that it had very specific actions with deadlines for agencies to implement. The council acknowledged in the plan itself that many of the details of the actions called for would require further development in the implementation phase. The Department of the Interior’s Deputy Assistant Secretary for Performance and Management told us that the plan was developed with little input from people trained in performance management processes. In addition, the Executive Director of the council staff told us that, in her opinion, given the scope of this first-time effort, it would have been unrealistic and difficult to also agree on specific measurable goals. She also said that in many areas, the federal government does not have the data on invasive species conditions needed to set long-term goals and develop better performance measures. She said that many of the actions called for in the management plan are designed to help develop needed data. In their comments on our draft report, EPA and the Department of the Interior also noted that it would be difficult to apply performance measures to invasive species management activities. The executive order calls for the council to revise the plan by January 2003. However, the Executive Director of the council told us that the council and the advisory committee had agreed not to begin revising the management plan until after the council prepares a progress report on the plan. That report is also due to the Office of Management and Budget (OMB) in January 2003. The council is in the process of working with OMB on implementing one of the planned actions that should help to establish a desired outcome and relevant performance measures. The plan called for a crosscut budget proposal for federal agency expenditures concerning invasive species beginning in fiscal year 2003. The council and OMB are hoping to have a proposal ready for the fiscal year 2004 budget cycle. According to the Department of the Interior official responsible for this project on behalf of the council, the proposal will represent the beginnings of a strategic plan for the federal government’s invasive species activities. It will be performance-oriented with common long-term goals, intermediate goals, and definitions for the relevant departments.
OMB will identify performance measures with help from a task team of federal stakeholders and will initially focus on early detection and rapid response, control, and prevention. According to the council, the proposal for fiscal year 2004 will not represent the totality of invasive species expenditures or efforts but will primarily focus on the activities of the Departments of the Interior, Agriculture, and Commerce. While the council has not reported on implementation of the plan, we estimate that, as of September 2002, council departments had completed less than 20 percent of the actions that the plan had called for by that date. The departments have started work on other planned actions, including some that have a deadline after September 2002 and that the council believes are a high priority. When asked to assess implementation of the plan, 18 of the 21 advisory committee members who responded to that question said that the council was making inadequate or very inadequate progress. Survey comments and other evidence indicate various reasons for the lack of progress. Delays in implementing the plan will hamper agency efforts to prevent and control invasive species as intended by the executive order. It has been difficult to quantitatively measure the council’s progress in implementing the management plan because only 6 of the 10 member departments had submitted reports summarizing the steps they had taken to implement the plan. The plan calls for departments to submit such reports annually beginning in October 2001. Council staff aggregated the reports that were submitted into one summary of activities. These annual reports would be used to carry out yet another requirement of the executive order and management plan that calls for the council to revise the plan by January 2003. Several survey respondents commented that it was difficult for them to evaluate the council’s progress in implementing the plan because information from the council had been inadequate. For example, some respondents wrote that the level of interaction between them and the council was not sufficient, and that feedback to the advisory committee from the council on implementation progress has been poor. The management plan also calls for the council to establish a “transparent oversight mechanism” that engages public involvement. The purpose of the oversight mechanism would be for use by federal agencies in complying with the executive order and reporting on its implementation, which includes the management plan. The plan called for the mechanism to be in place by April 2001, but according to the council staff, work has not yet begun. Our review of the council’s summary of department actions, which focused on the 65 planned actions with due dates through September 2002 (an additional 21 planned actions have due dates after September 2002, for a total of 86), revealed that less than 20 percent of the actions due by September 2002 were complete. Several actions completed on time related to the development of the council’s Web site, which is found at www.invasivespecies.gov. Another completed action concerned a series of regional workshops on invasive species for policymakers that the council, led by the Department of State, cohosted with countries such as Brazil, Costa Rica, Denmark, Thailand, and Zambia. Also in accord with the plan, the National Oceanic and Atmospheric Administration, the Coast Guard, the Department of the Interior, and EPA have sponsored research related to ballast water management. 
Departments and the council staff have also started work on over 60 percent of the other planned actions, including some that have a due date beyond September 2002. For example, departmental representatives and the council staff are working with the President’s Council on Environmental Quality on guidance to federal agencies on how to consider the issue of invasive species as they prepare analyses required by the National Environmental Policy Act. However, the guidance is not expected to be ready until early 2003, past its August 2001 target date. USDA has begun work on additional regulations to further reduce the risk of species introductions via solid wood packing materials, but the department did not meet the management plan’s January 2002 deadline. (See fig. 5 for information on the Asian long-horned beetle, an invasive species that entered the United States in solid wood packing material.) Council departments have begun work on a national public awareness campaign—cataloging existing public awareness programs and conducting a survey of public attitudes toward invasive species—and are seeking budget approval for starting the campaign in fiscal year 2004. They missed the June 2002 completion date called for in the plan. Among those actions that the council is working on that are not due until after September 2002 is a risk-based comprehensive screening system for evaluating first-time intentionally introduced nonnative species, which is due by December 2003. According to council staff, the complexities of implementing a screening system dictate that the departments work on this now. Council staff also said that work is underway on a coordinated rapid response program due by July 2003. There are also actions in the plan that the council has not started to work on. For example, the council has not acted on the item in the plan that called for draft legislation by January 2002 to authorize tax incentives and otherwise encourage participation of private landowners in restoration programs. Nor has the council put in place the clearly defined process and procedures, called for by July 2001, to help resolve jurisdictional and other disputes regarding invasive species issues. Two respondents to our survey commented on the lack of council progress toward a resolution process, citing the need for it in cases such as one where federal agencies are taking contradictory actions with respect to an invasive rangeland grass (see fig. 6 for more on buffelgrass). In its comments on our draft report, EPA emphasized the significance of this deficiency and noted that there are other situations where a resolution process is needed, such as fish stocking to enhance recreational fisheries and using genetically modified organisms in aquaculture and agriculture. The majority of the advisory committee members responding to our survey noted the lack of progress made by the council agencies. Eighteen of the 21 members who responded to a question about implementation said that the council was making inadequate or very inadequate progress. One noted that the only clear achievement to date is the council’s Web site. In our view, while it is apparent that the agencies are taking various actions to address invasive species issues, the actions the agencies have reported to the council often do not represent coordinated progress toward implementation of the plan or management of the problem. The executive order and the management plan both emphasized the need for coordination among agencies.
As evidence of that emphasis, a majority of the actions in the management plan are to be carried out by multiple agencies. However, the actions that the agencies reported to the council often did not appear to be directly linked to each other or be directly responsive to the specific actions called for by the management plan. In our survey, several advisory committee members also commented that coordination has been inadequate. For example, the management plan called for the council to implement by January 2002 a process for identifying high-priority invasive species that are likely to be introduced unintentionally and for which effective mitigation tools are necessary. One agency noted to the council that it had contracted with professional societies to provide a list of the most harmful insect, weed, and disease plant pests that are not yet present in the country or present but not widely distributed. It also noted that it has a risk assessment procedure for identifying pests that may be introduced with commodities such as agricultural products. A second agency noted that it had held a workshop to identify potentially invasive species that might enter the nation’s waters from Eastern Europe. A third agency indicated that it is providing training for firefighters to reduce the spread of weeds from one fire site to another. While these activities are related to the planned action, they do not indicate that the agencies are working together through the council to implement a process for identifying high priority species as called for by the plan. The Executive Director of the council acknowledged that some of the actions reported by agencies did not seem to directly link to the management plan, although such information was useful for overall coordination purposes. She said that in the future implementation teams would help the agencies focus on those actions that are directly linked to the management plan. The Executive Director and one of the Assistant Directors of the council told us that they believe that increased coordination has been an important accomplishment and that agency officials are now routinely talking with each other about invasive species management issues. In comments on our draft report, the Department of the Interior also noted that coordination and communication among the agencies has increased. Our survey and other evidence indicate that the slow progress in implementing the management plan has been caused by a combination of factors, including delays in forming teams responsible for developing specific implementation plans, the lack of priority given to the plan by the council as a whole and by the departments individually, and insufficient funding specifically targeted to support the plan. Progress was also slowed by the need to transition from the previous administration to the current administration. In October 2000, before issuing the management plan, the advisory committee and council staff agreed that implementation teams made up of federal and nonfederal stakeholders were needed to put the management plan into action. The advisory committee members and council staff agreed that the teams should be under the auspices of the advisory committee and be closely aligned to the major sections of the management plan. Specifically, the teams would be responsible for “delivery” of the planned actions. For example, a prevention team would be responsible for guiding implementation of the actions relevant to prevention. 
However, for various reasons, most implementation teams were not formed until June 2002. Specifically, the Executive Director of the council told us that she did not believe it would have been appropriate to form the implementation teams until after the management plan was issued in January 2001. The change in administration then delayed action on implementing the plan by about 6 months because it took time for cabinet secretaries—the members of the council—and other political appointees to be nominated and confirmed; departments were ready to move forward with forming the implementation teams in the fall of 2001. By that time, the first term of all of the advisory committee members was approaching its end in November 2001, and because the advisory committee members were to be an integral part of the implementation teams, the Executive Director told us it did not make sense to form the teams until the next advisory committee was convened. Appointment of the second set of advisory committee members was delayed until April 2002 for a number of reasons, including the temporary loss of e-mail and regular mail delivery at the Department of the Interior. The second advisory committee held its first meeting in May 2002, and committee members and council staff decided that the implementation teams should not meet until after the advisory committee members had a chance to review the teams’ responsibilities and membership and discuss them at greater length at their next scheduled meeting in June 2002. In June 2002, nine implementation teams were created that largely mirror sections of the management plan (all but two of the teams will comprise federal and nonfederal members). The Executive Director of the council told us the decision to create implementation teams of federal and nonfederal members under the auspices of the advisory committee was in part in recognition of the importance of getting consensus from key stakeholders early in the implementation process. She told us that she recognizes that there are potential problems with the teams comprising a disparate group of federal and nonfederal stakeholders. Specifically, logistical problems in getting the teams together and disputes within the teams could delay the federal departments in taking action to implement the plan. She said that the council would have to monitor the teams closely to determine whether or not they are effective. The delay in establishing the implementation teams has hindered the agencies in achieving an important objective of both the executive order and the management plan—coordinated action. Several respondents to our survey commented that they had not seen adequate increases in the amount of coordination, and some pointed to the delays in forming the teams as a cause. One respondent thought that federal departments and agencies were continuing to pursue their own mandates and programs with only a cursory regard for the framework and coordination that the council attempts to provide. The Executive Director of the council told us that she expected coordination to improve as the implementation teams become established. In our view, the relationship of the advisory committee to the implementation teams has slowed progress on the plan and could continue to do so. While we understand why the council decided to form the implementation teams under the auspices of the advisory committee—to foster consensus among key stakeholders early in the implementation process—we believe that this decision may slow federal action.
Specifically, it may be difficult for teams of federal and nonfederal stakeholders to put forth the concerted effort needed to implement the management plan. We are also concerned that it will be difficult to hold the departments accountable for carrying out the plan if they are relying upon the actions of teams with federal and nonfederal members. About one-half of the respondents to our survey criticized the council and the departments for not giving the plan a higher priority. For example, several noted that it did not appear that the council had positioned itself to take a leadership role in implementing the plan or that the plan was not a high priority on the agendas of the leaders of the council’s member departments. In addition, numerous survey respondents said that the individual departments needed to give the plan higher priority by providing better support in staff and resources. Our review of agencies’ performance plans (prepared pursuant to the Government Performance and Results Act) also indicates that implementing the management plan is not a high priority for individual agencies. We reviewed the performance plans of the three cochair departments on the council (the departments of Agriculture, Commerce, and the Interior), as well as those of the Department of Transportation, the Environmental Protection Agency, and agencies within the Department of the Interior (National Park Service, Bureau of Land Management, Fish and Wildlife Service, and Geological Survey). While most of the agencies’ performance plans describe activities intended to control or manage invasive species—and are therefore consistent with the national management plan—none of the plans specifically identified as a measure of performance implementing actions called for by the council’s plan. As one official from the Environmental Protection Agency told us, activities that are not in the agency’s performance plan do not receive a high priority. Nevertheless, the Department of the Interior official responsible for pulling together the crosscut budget for invasive species programs told us that he believes that process—because of its emphasis on performance measures—will help departments link the management plan to their performance plans. With regard to the notion that the council was not giving the plan a high priority, three of the 23 advisory committee members who responded to our survey commented on the absence of specific legislative authority establishing the council. One stated “the council needs to be approved legislatively so that they are their own entity with better options to act.” Another said “Congress or the President needs to make this a priority through legislation or funding. . . . Agencies need to be told this is a priority and given funding to accomplish their goals.” Because executive orders such as the one that established the council do not provide any additional authority to agencies, the Executive Director of the council noted that legislative authority for the council, depending on how it was structured, could be useful in implementing the management plan. Officials from USDA, the Department of Defense, and EPA who are departmental liaisons to the council also told us that legislative authority, if properly written, would make it easier for council departments to implement the management plan. 
The Congress has recently considered legislation that would give the council certain responsibilities; namely to provide input into decisions about allocating funds to local governments and other organizations for controlling invasive plants. However, the Executive Director of the council told us that such a requirement would be unworkable if the legislation did not also formally establish the council and a future administration decided to discontinue the executive order that created the council. The management plan calls for the council to conduct an evaluation by January 2002 of the current legal authorities relevant to invasive species. The council has not completed the evaluation. According to the plan, the evaluation is to include an analysis of whether and how existing authorities may be better utilized and could be used to develop recommendations for changes in legal authority. However, it does not state that the analysis should address whether the council itself is hampered in its mission by not having specific legislative authority that would allow it to direct its members to implement the national management plan. In the management plan, the council stated that many of the actions could be completed or at least initiated with current resources but that without significant additional resources for existing and new programs it would not be possible to accomplish the goals of the plan within the specified timeframes. The council also noted in its comments on the draft report that it believes the timeframes in the plan are optimistic given current resources. Two of the actions in the plan called for federal agencies to request additional funding for separate management functions through the annual appropriations process beginning in fiscal year 2003. According to a summary prepared by the council, the President’s budget request for invasive species activities in fiscal year 2003 was at least 23 percent more than was requested in fiscal year 2002 (although slightly less than Congress appropriated in fiscal year 2002). The council went on to say in the plan that estimates of the additional support required would depend on the details of implementation schedules developed by federal agencies and stakeholders. As we described above, however, the council and the advisory committee have only recently created the teams that will be responsible for working out the detailed plans for implementation. Therefore, it is unclear what additional resources are needed and whether the requested appropriations will be adequate to implement the plan. In response to several of the questions in our survey, advisory committee members cited the lack of funding as a key reason for poor implementation of the council’s management plan. (We did not independently assess the adequacy of funding.) Of the 18 who said that the council was inadequately implementing the plan, 9 said that funding was insufficient. A typical comment was that the council members need to make a better case to get Congress to support funding for an invasive species line item. Over 70 percent of the respondents to another question said that they knew of instances where federal agencies do not have the resources to carry out actions in the national management plan. While several respondents gave details on specific examples of where they believe federal agencies have underfunded invasive species programs, four others said that none of the agencies have the resources to implement the management plan in its entirety. 
In addition, 19 of the 21 respondents to one question said that the council had inadequate staff resources to serve the needs of the council. (The council has had a staff of five to seven people in the last 2 years.) One respondent said that the “level of funding now is token only and serves to support the most minimal staffing one can imagine for a national effort of this scale. It’s embarrassing.” Many of the respondents said that the council’s staff are working hard and doing the best that they can. However, respondents also commented that the staff is overwhelmed, faced with substantial obstacles, and is not sufficient to support both the council and advisory committee. Several respondents emphasized that the council staff should be larger to more effectively push for implementation of the management plan. Finally, the Executive Director of the council staff told us that, in her opinion, progress on the management plan was slowed by the transition to a new administration. High-level political appointments are often vacant for months during the transition from one administration to another. A senior official from the Department of the Interior pointed out in July 2002 that many key managers relevant to the crosscutting budget proposal had been in office only a few months because of the time required to nominate and approve political appointees. According to experts and agency officials we consulted, current efforts by the United States and Canada are not adequate to prevent the introduction of nonnative aquatic organisms into the Great Lakes via ballast water of ships, and they need to be improved. Compliance with regulations is high but nonnative aquatic organisms are still entering and establishing themselves in the Great Lakes ecosystem. U.S. and Canadian agency officials believe that they should do more to protect the Great Lakes from ballast water discharges. However, several time-intensive steps must be taken before the world’s commercial fleet is equipped with effective treatment technologies. In the meantime, the continued introduction of nonnative aquatic organisms could have a major economic and ecological impact on the Great Lakes. Since 1993, U.S. regulations have governed how vessels entering the Great Lakes from outside the Exclusive Economic Zone, a zone extending 200 nautical miles from the shore, must manage their ballast water. To be allowed to discharge ballast water into the Great Lakes, ships must exchange their ballast water before entering the zone and in water deeper than 2,000 meters. Exchanging ballast water before arriving in the Great Lakes is intended to serve two purposes: to flush aquatic organisms taken on in foreign ports from the ballast tanks and to kill with salt water any remaining organisms that happen to require fresh or brackish water. If a ship bound for the Great Lakes has not exchanged its ballast water in the open ocean it may hold the ballast in its tanks for the duration of the voyage through the lakes. Under some circumstances—such as bad weather making an open-ocean exchange unsafe—the Coast Guard may approve a ship master’s request to do the exchange in an alternative exchange zone in the Gulf of St. Lawrence. The U.S. Coast Guard, the Saint Lawrence Seaway Development Corporation, and the Canadian St. Lawrence Seaway Management Corporation inspect ships as they enter and travel through the St. Lawrence Seaway. The Coast Guard also inspects ships at U.S. ports throughout the Great Lakes. 
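The purpose of the exchange requirement can be illustrated with simple dilution arithmetic. The sketch below is only a rough illustration under idealized assumptions (a full, well-mixed tank, nearly fresh original ballast, and open-ocean water of about 35 parts per thousand salinity); it is not a description of Coast Guard or Seaway procedure, and it ignores the residual water and sediment that cannot be pumped out of real tanks.

# Illustrative sketch only: simplified dilution arithmetic, not an agency method.
# Assumes a well-mixed tank, nearly fresh original ballast (~0 parts per thousand, ppt),
# and open-ocean water of about 35 ppt; real tanks retain unpumpable residual water
# and sediment that this simple model ignores.
import math

OCEAN_SALINITY_PPT = 35.0

def flow_through_exchange(tank_volumes_pumped):
    """Fraction of original ballast remaining, and resulting tank salinity, when
    ocean water is pumped continuously through a full, well-mixed tank."""
    remaining = math.exp(-tank_volumes_pumped)            # classic dilution result
    salinity = OCEAN_SALINITY_PPT * (1.0 - remaining)     # original water assumed ~0 ppt
    return remaining, salinity

for volumes in (1.0, 2.0, 3.0):
    remaining, salinity = flow_through_exchange(volumes)
    print(f"{volumes:.0f} tank volume(s) pumped: "
          f"~{remaining:5.1%} of original water remains, "
          f"salinity ~{salinity:4.1f} ppt")

Under these assumptions, pumping about three tank volumes of ocean water through the tank leaves only about 5 percent of the original water on board and raises the tank's salinity close to that of the surrounding ocean. As discussed below, however, real exchanges are less complete, and a salinity reading alone cannot show precisely how much of the original water, or how many of the organisms it carried, remains on board.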
Data from the Coast Guard show that the percentage of ships entering the Great Lakes after exchanging their ballast water has steadily increased since the regulations took effect in 1993 and averaged over 93 percent from 1998 through 2001. (See fig. 7.) Representatives of the Coast Guard and the seaway corporations told us that the high exchange rate indicates that the current regulations for the Great Lakes are being effectively enforced. Experts have concluded, however, that numerous nonnative aquatic organisms have entered the Great Lakes via ballast water and established populations since the regulations were promulgated. (See fig. 8.) Two such species are the fish-hook water flea (Cercopagis pengoi), discovered in 1998, and an amphipod (a small crustacean) known as Echigogammarus ischnus, discovered in 1995. Experts have cited several reasons for the continued introductions of nonnative aquatic organisms into the Great Lakes despite the ballast water regulations. First, the Coast Guard has not applied the ballast water exchange regulations to ships with little or no pumpable ballast water in their tanks; approximately 70 percent of ships entering the Great Lakes during 1999 through 2001 were in this category. These ships, however, may still have thousands of gallons of residual ballast and sediment in their tanks. This could harbor potentially invasive organisms from previous ports of call and could be discharged to the Great Lakes during subsequent ballast discharges. Second, there are also concerns that exchanging a particular percentage of ballast water does not remove an equivalent percentage of organisms from the tank. The Environmental Protection Agency and the Aquatic Nuisance Species Task Force reported that ballast water exchange with open-ocean water flushed 25 to 90 percent and 39 to 99.9 percent, respectively, of the organisms studied. Researchers explain this range by pointing out that organisms in sediment at the bottom of the tanks may not be flushed out by an exchange. Third, there is some uncertainty regarding what percentage of the water in the tanks is actually flushed out during a typical ballast water exchange. When determining whether tanks have been flushed and refilled in the open ocean, the Coast Guard tests the new ballast water to see if it has a salt concentration of at least 30 parts per thousand. However, given uncertainties about the salinity of a ship’s original ballast water and evaporative losses that occur during transit, it is not clear from a basic salinity test what percentage of the original ballast water—and the potentially invasive aquatic organisms it may contain—has been removed. Fourth, there is growing concern that freshwater organisms may be able to survive the saline environment created by mid-ocean exchange. Certain organisms have a stage in their life cycle during which they are “resting eggs” or “cysts” and may be tolerant of salt water. Once discharged into the Great Lakes freshwater system, these organisms can regain viability. There are also examples of species—including alewives and the sea lamprey— that normally spend part of their lives in salt water and part in freshwater, but have been able to thrive despite being “locked” into the freshwater of the Great Lakes. In an effort to reduce the further introduction of nonnative species, the Saint Lawrence Seaway Development Corporation and its Canadian counterpart, the St. 
Lawrence Seaway Management Corporation, amended their joint regulations in February 2002 to require all commercial ships entering the Seaway system to comply with Great Lakes shipping industry codes for ballast water management. These codes contain “best management practices” that are intended to reduce the number of organisms in ballast tanks. Such practices include not taking on ballast at night—when marine organisms are more likely to be near the surface—and regularly cleaning tanks. According to experts we consulted, it will take many years to solve the problem of nonnative aquatic organisms arriving in ballast water. The Coast Guard is now working to develop new regulations that would include a performance standard for ballast water—that is, a measurement of how “clean” ballast water should be before discharge within U.S. waters. In May 2001, the Coast Guard requested comments on how to establish a ballast water treatment standard and offered for consideration four conceptual approaches. The agency received numerous comments showing a wide range of opinion. As a result, it issued an advance notice of proposed rulemaking and another request for comments in March 2002 on the development of a ballast water treatment goal and an interim ballast water treatment standard. The Coast Guard expects to have a final rule ready for interdepartmental review by the fall of 2004 that will contain ballast water treatment goals and a standard that would apply not only to ships entering the Great Lakes but also to all ships entering U.S. ports from outside the Exclusive Economic Zone. Once the Coast Guard sets a new performance standard for how clean ballast water should be, firms and other entities will have a goal to use as the basis for developing and measuring treatment technologies. Government, industry, academia, and other nongovernment interests are investigating several technologies, including filtration, hydrocyclonic separation, and chemical and physical biocides such as ozone, chlorination, ultraviolet radiation, heat treatment, and vacuum. Each technology has its strengths and weaknesses. One major hurdle facing any technological solution is how to treat large volumes of water being pumped at very high flow rates. Container vessels and cruise ships, which carry a smaller volume of ballast water, may require different technologies than larger vessels. As a result, it is likely that no single technology will address the problem adequately. To facilitate technology development, the Coast Guard and the Department of Transportation’s Maritime Administration are developing programs to provide incentives for ship owners to actively participate in projects designed to test treatment technologies. To help with technology development, the National Invasive Species Act created a ballast water demonstration program that funds select proposals to develop and demonstrate new ballast water technologies. Under this program, the National Oceanic and Atmospheric Administration and the U.S. Fish and Wildlife Service have funded 20 ballast water technology demonstration projects at a total cost of $3.5 million since 1998. Other programs also support research, such as the National Sea Grant College Program, which has funded nine projects totaling $1.5 million. In addition, the National Oceanic and Atmospheric Administration, through the National Sea Grant College Program, and the U.S.
Fish and Wildlife Service announced on June 6, 2002, that they expect to make $2.1 million available in fiscal year 2002 to support projects to improve ballast water treatment and management. In conjunction with this program, the Department of Transportation’s Maritime Administration expects to make available several ships of its Ready Reserve Force Fleet to act as test platforms for ballast water technology demonstration projects. In fiscal years 2001 and 2002, Congress appropriated $550,000 to the Coast Guard for research and development related to ballast water management. In addition, EPA and the Coast Guard expect to contribute $210,000 to fund a 3-year study on the transfer of aquatic organisms in ballast water. Nonfederal researchers in industry and academia are also studying the content of ballast water and prospective treatment technologies. For example, a Canadian shipping company funded the installation of a treatment system on one of its ocean-going ships and allowed the Michigan Department of Environmental Quality to perform testing on the system. Once effective technologies are developed, another hurdle will be installing the technologies on the world fleet. New ships can be designed to incorporate a treatment system. Existing ships, on the other hand, were not designed to carry ballast water technologies and may have to go through an expensive retrofitting process. With each passing year without an effective technology, every new ship put into service is one more that may need to be retrofitted in the future. Public and private interests in the Great Lakes have expressed dissatisfaction with the progress in developing a solution to the problem of nonnative aquatic organism transfers through ballast water. An industry representative told us that she and other stakeholders were frustrated with the Coast Guard’s decision to make a second request for public comment on a treatment standard; she said they were anticipating that the agency would publish a proposal rather than another request for information. More broadly, in a July 6, 2001, letter to the U.S. Secretary of State and the Canadian Minister of Foreign Affairs, the International Joint Commission and the Great Lakes Fishery Commission stated their belief that the two governments were not adequately protecting the Great Lakes from further introductions of aquatic invasive species. They also noted that there is a growing sense of frustration within all levels of government, the public, academia, industry, and environmental groups throughout the Great Lakes basin and a consensus that the ballast water issue must be addressed now. The two commissions suggested that the reauthorization of the National Invasive Species Act is a clear opportunity to provide funding toward research aimed at developing binational ballast water standards. The International Joint Commission recommended in its 2002 11th Biennial Report that the U.S. and Canadian governments fund research recommended by expert regional, national, and binational panels, task forces, and committees. In an effort to prevent the introduction of nonnative aquatic organisms into their waters, several Great Lakes states have considered adopting ballast water legislation that would be more stringent than current federal regulations.
For example, the legislatures in Illinois, Minnesota, and New York are currently considering ballast water legislation that would, among other things, require ships to “sterilize” their ballast water—a standard that would exceed even the standards for drinking water. The Michigan legislature also debated a proposal that would have required ships to sterilize ballast water before discharge. The stringency of that proposed legislation was a result of one Michigan legislator’s frustration with the federal government’s slow progress in implementing an effective national plan to protect the Great Lakes from invasions through ballast water. The bill that passed into law in Michigan, however, has requirements similar to those in the federal program. Citing inadequacies in the United States’ regulatory program, an environmental organization petitioned EPA in 1999 on behalf of 15 nongovernmental, state, and tribal organizations to address ballast water discharges under the Clean Water Act. The petition asked the agency to eliminate the exemption that currently excludes ballast water discharges from regulation under its National Pollutant Discharge Elimination System program. Eighteen members of Congress followed the petition with a letter also requesting that the agency examine whether the Clean Water Act could be used to provide effective regulation of nonnative aquatic organisms in ballast water. In its September 10, 2001, draft response to the petition and the congressional letter, the agency concluded that the exemption should not be lifted because regulation of ballast water discharges under the Clean Water Act would be more problematic than the process already in place under the National Invasive Species Act. The agency asserted that issuing uniform discharge requirements would require significant federal and state agency resources and would not necessarily provide protection greater than the National Invasive Species Act. The agency also stated that using the Clean Water Act would likely subject ship operators to multiple and potentially different state and federal regulatory regimes. On the international level, the United States is an active member of the International Maritime Organization (IMO), a specialized United Nations agency that is also addressing ballast water management. In 1997, the organization adopted “Guidelines for the Control and Management of Ships’ Ballast Water to Minimize the Transfer of Harmful Aquatic Organisms and Pathogens.” The IMO requests that all maritime nations adopt and use these voluntary guidelines, which call for, among other things, open-ocean ballast water exchange. Member nations are also working toward an international convention to address ballast water management. According to a State Department official who is a member of the U.S. delegation to the IMO, the organization is developing a new convention for possible adoption in the fall of 2003. The State Department official told us that the convention would probably include ballast water exchange as an interim method and would likely include provisions for modifying the performance standard over time to correspond with and spur improvements in technology. Even if a convention were available for signature in the fall of 2003, it would take some years for it to enter into force and for effective treatment technologies to be installed on the world fleet.
Recognizing the time needed to develop and install new technologies, the Coast Guard has suggested to the Marine Environment Protection Committee that the date by which ships must meet a new performance standard be 10 years after the organization adopts a convention (in this case, 2013). Although no estimates have been made, using the past as a guide, the continued introduction of nonnative aquatic organisms into the Great Lakes could have significant economic and ecological impacts on the Great Lakes basin. In a May 2001 report, the International Joint Commission noted that the past and ongoing economic impacts of invasive species introductions to the Great Lakes region represent hundreds of millions of dollars annually. As a result, experts dread the introduction of the “next zebra mussel.” The zebra mussel was introduced to the Great Lakes in 1988 and is continuing to wreak havoc on the ecosystem and surrounding economies. Zebra mussel control measures alone are estimated to have cost municipalities and industries $69 million from 1989 through 1995. (See fig. 9 for more on the zebra mussel.) Such fears appear to be well founded because scientists predict that additional invasions will occur if effective safeguards are not placed on the discharge of ballast water from ocean-going ships. We have discussed two species and listed others that have been introduced since ballast water regulations were implemented. (See fig. 10.) In addition, scientists have identified 17 species from the Ponto-Caspian region (Caspian, Black, and Azov Seas) of Eastern Europe alone that have a high invasion potential, are likely to survive an incomplete ballast water exchange, and are considered probable future immigrants to the Great Lakes. The continued introduction of nonnative aquatic organisms could further damage a U.S. and Canadian Great Lakes sport and commercial fishing industry that is valued at almost $4.5 billion annually and supports approximately 81,000 jobs. Aggressive fish that have invaded the lakes in the past (such as the sea lamprey, the Eurasian ruffe, and the round goby) have harmed native fish by preying directly on them or on their food supply. Two of the potential invaders from the Ponto-Caspian region, the amphipods Corophium curvispinum and Corophium sowinskyi, could significantly alter biological communities along shorelines and food chains in North American river systems. Invasive species can also carry parasites and pathogens that could affect existing fish populations. For instance, fish pathologists fear that continued introductions of species such as the Eurasian ruffe may facilitate the introduction of new and potentially harmful parasites and pathogens, such as viral hemorrhagic septicemia, a serious disease of rainbow trout in Europe that could affect North American fish populations. Ballast water is also known to carry human pathogens, although the risks they pose to human health have not been determined. One study performed during the 1997 and 1998 shipping seasons sampled ballast water in ships passing through the St. Lawrence Seaway en route to ports in the Great Lakes. Human pathogens, such as fecal coliform, fecal streptococci, Clostridium perfringens, Escherichia coli, and Vibrio cholerae, as well as multiple species of Cryptosporidium, Salmonella, and Giardia, were detected in the samples. According to the Coast Guard, these organisms are also found in bodies of water that are influenced by human development.
The United States and Canada participate in a variety of bilateral and multilateral efforts to share information, conduct research, and coordinate efforts to reduce the threat of invasive species. The two countries’ long history of coordination has focused on particular segments of the issue such as shared boundary waters and agricultural research, and stakeholders have called for a more comprehensive strategy for joint prevention and management efforts. The National Invasive Species Council recognized the need for the United States to work with Canada (and Mexico) in a more comprehensive manner and has taken initial steps to develop a North American strategy as called for by the national management plan. It is too early to tell, however, what form a North American strategy will take or how existing organizations will be integrated. Historically, coordination between the United States and Canada has focused on specific pathways, species, or geographic areas rather than on a comprehensive coordinated approach. Primary examples of this coordination concern shared boundary waters and agriculture. One mechanism for coordination is the International Joint Commission, which was established by the Boundary Waters Treaty of 1909. The treaty established the commission to advise the U.S. and Canadian governments concerning issues along the boundary and approve certain projects in boundary and transboundary waters that affect water levels and flows across the boundary. The commission has focused much of its attention on the Great Lakes. The purpose of the 1978 Great Lakes Water Quality Agreement between the United States and Canada is to “restore and maintain the chemical, physical, and biological integrity of the waters of the Great Lakes Basin Ecosystem.” The International Joint Commission’s role with respect to the agreement includes evaluating and assessing the two countries’ programs and providing a report at least every 2 years that presents its findings, advice, and recommendations. Recent reports have contained recommendations to the governments on how to reduce the flow of invasive species through ballast water. Protection of the Great Lakes fisheries against the nonnative sea lamprey was a motivating factor behind the creation of the Great Lakes Fishery Commission in 1955 in the Convention on Great Lakes Fisheries between the U.S. and Canada. The fishery commission, which is jointly funded by the two countries, has been largely successful in controlling, although not eradicating, the sea lamprey. Another primary objective of the fishery commission is to formulate a research program or programs to determine the need for measures to make possible the maximum sustained productivity of fish of common concern. One of the commission’s goals is to ensure that no nonnative fishes will be unintentionally introduced into the Great Lakes. The commission has stated that it will intensify its work with partners to address those vectors for invasive species, such as ship ballast water, that pose the greatest threat to the lakes. Another mechanism that has promoted coordination between the United States and Canada is the establishment of regional panels to address aquatic invasive species. The Nonindigenous Aquatic Nuisance Prevention and Control Act of 1990 authorized the establishment of the Great Lakes Panel on Aquatic Nuisance Species, which comprises U.S. and Canadian public- and private-sector representatives. 
Its activities include identifying Great Lakes priorities for aquatic nuisance species, coordinating information and education efforts, making recommendations to the federal government, and advising the public about control efforts. Two other U.S. panels, recently established in the West and the Northeast under the National Invasive Species Act of 1996, also include Canadian members. As noted earlier, the United States and Canada are also working together on managing ballast water coming into the Great Lakes through the St. Lawrence Seaway. Cooperative efforts by the two countries were most recently demonstrated by the joint decision of the United States’ Saint Lawrence Seaway Development Corporation and Canada’s St. Lawrence Seaway Management Corporation to require all ships entering the seaway to follow established best management practices. There has also been a long history of coordination between the U.S. and Canada in the area of agricultural research and pest control. As we reported in July 2002, for over 30 years the two countries and Mexico have held regular meetings on animal health issues to make North America’s import requirements consistent and, more recently, to coordinate preventive actions and emergency response activities in the event of an outbreak of the nonnative foot-and-mouth disease. In 2000, the three countries held joint exercises to test their foot-and-mouth disease communication and response plans and to assess their response systems. As a result of this exercise, the three governments signed a memorandum of understanding to formally establish the North American Animal Health Committee. According to USDA, the United States and Canada have also worked very closely in the past several years on jointly assessing the threat from two other foreign animal diseases—bovine spongiform encephalopathy (also known as “mad cow disease”) and chronic wasting disease. Another emerging animal and public health issue that the United States and Canada have worked together on is the West Nile virus, which is transported by migratory birds and by insects such as mosquitoes. (See fig. 10 for more details on the virus.) To further strengthen communication and collaboration on invasive species and trade-related matters, USDA’s Animal and Plant Health Inspection Service established an office in Ottawa, Canada, in 2000. The office oversees a preclearance program throughout Canada that conducts inspections, treatments, and other mitigation measures to identify and reduce the risk of exotic pest introductions via agricultural commodities before the commodities are cleared through the U.S. Customs Service. Another vehicle for coordination in the agriculture sector is the North American Plant Protection Organization, created as a regional plant protection organization under the International Plant Protection Convention of 1951. The convention called for the governments to establish regional plant protection organizations responsible for coordinating activities under the convention, such as developing and promoting the use of international phytosanitary certificates. For example, through the plant protection organization, the United States, Canada, and Mexico worked together to develop a standard for treating solid wood packing materials. According to USDA, the United States and Canada are also working together to develop an international standard for evaluating the environmental impact of invasive species.
This standard, which the USDA expects to be adopted by the International Plant Protection Convention in 2003, would provide a common framework for assessing the invasive potential of pests and thereby ensure a more rigorous and common approach to dealing with them. While there are numerous examples of coordination between the United States and Canada on invasive species control, some stakeholders in this issue believe that not enough is being done. For example, in June 1999, the Great Lakes Panel on Aquatic Nuisance Species wrote that there was a lack of inter-jurisdictional consistency in laws, regulations, and policies directed at aquatic nuisance species prevention and control efforts, and that improvements were needed to ensure a more efficient and effective regional prevention and control program. As noted previously, the International Joint Commission stated its belief that the two governments were not adequately protecting the Great Lakes from further introduction of aquatic invasive species, and it made several recommendations regarding a binational approach to better management. In addition, according to EPA, there are numerous locations where continuing regional cooperation is needed to address aquatic invasive species in binational waterways, including the St. Croix River of New Brunswick and Maine; Lake Champlain of Quebec, Vermont, and New York; the Red River of North Dakota, Minnesota, and Manitoba; the Souris River of Saskatchewan, Manitoba, and North Dakota; and the Georgia Basin-Puget Sound of British Columbia and Washington. For example, in the Red River watershed of North Dakota, a proposed water diversion could introduce nonnative species into new locations. An official from EPA’s Office of International Affairs told us that, in his opinion, having an overarching policy with respect to aquatic invasive species along the border would help address these situations more quickly or avoid them completely. The National Invasive Species Council’s Assistant Director for International Policy, Science, and Cooperation told us that she believes that the United States could expand two existing interagency organizations—the Federal Interagency Committee for the Management of Noxious and Exotic Weeds and the Aquatic Nuisance Species Task Force—to include Canadian representation, or that Canada should be encouraged to develop similar organizations. She said this would make it much easier to establish dialogue between officials with similar responsibilities. The council’s Assistant Director also said she thought that the National Oceanic and Atmospheric Administration’s Sea Grant Program could be more effectively used to support educational programs developed and implemented in the United States and Canada. She noted that because tourists frequently cross the border to and from Canada, it is important to address this pathway with a common education strategy. In this same vein, while we reported in August 2002 that the United States, Canada, and Mexico have worked to coordinate animal health measures, we also noted that there are differences in the countries’ policies and practices with regard to foot-and-mouth disease that could contribute to the risk that travelers may bring foreign animal diseases across our mutual borders. The National Invasive Species Council recognized the need for the United States to work with Canada (and Mexico) in a more comprehensive manner.
The management plan called for the council to outline an approach to a North American invasive species strategy by December 2001. The strategy was to be built upon existing tripartite agreements and regional organizations. The plan also called for the council to initiate discussions with Canada and Mexico for further development and adoption of the strategy. The council has taken initial steps but has not completed this planned action. The council established the North America Strategy task team in January 2002. It comprises federal and nonfederal stakeholders and is cochaired by the Department of State, the Environmental Protection Agency, and the Fish and Wildlife Service. In March 2002, the Department of State sent a cable to United States embassy staff in Canada and Mexico requesting that they notify officials in those two countries of the federal government’s desire to develop a North American strategy. According to one U.S. official involved in this project, Canadian representatives have responded positively to the idea. Since that cable was sent, however, the team has done little to develop the strategy. The council staff and the advisory committee placed the team into a holding pattern in May 2002 when they decided that all of the implementation teams needed to be reviewed by the advisory committee. According to one of the team’s cochairs, the team will need, among other things, to identify the objectives of U.S. participation in the various North American organizations and to determine what actions are being taken. Two other multilateral organizations provide opportunities for a more comprehensive approach to an invasive species strategy across North American borders but do not have significant resources dedicated to the issue. The North American Commission on Environmental Cooperation, which is governed by a council composed of the Administrator of the United States Environmental Protection Agency, the Minister of the Environment in Canada, and the Secretary of the Environment and Natural Resources in Mexico, provides an opportunity for the United States and Canada to research and develop strategic plans for common ecosystems such as northern forests, grasslands, and aquatic ecosystems. One objective in its 2001 draft Strategy for the Conservation of Biodiversity in North America is to promote the development of concerted efforts to combat invasive species in North America. In March 2001, participants at a workshop sponsored by the commission recommended five priority areas for cooperation in North America on invasive species. Because of limited resources, however, the commission has decided to proceed with just one of those areas—identifying invasive species and invasion pathways that are a concern of two or more countries (within North America)—and determine priorities for bi- or trilateral cooperation. The Trilateral Committee for Wildlife and Ecosystem Conservation and Management is composed of the wildlife agencies from the United States, Canada, and Mexico and also has the ability to look at approaches for managing invasive species more broadly. The committee has not analyzed invasive species in depth, although the issue was on its April 2002 meeting agenda in order to set it as a topic for discussion at a later meeting. According to a State Department official who attended the meeting, the committee decided to add invasive species to the portfolio of the “working table” on biodiversity information.
While the available data are often inadequate to thoroughly describe the costs and risks associated with invasive species, it is apparent that their impacts on our environment and, thus, our economy are significant. At the same time, because of limitations in both the quantity and quality of economic impact analysis, it may not be readily apparent to decision makers in the federal government how they should most effectively allocate limited resources to prevent and manage invasive species. It is encouraging that the National Invasive Species Council and OMB are working on a crosscut budget that the federal government can use to plan resource allocations to and among departments. Such decisions would be better informed by data on the risk that nonnative species will enter the country, become established, spread, and cause harm. The ballast water management situation is a prime example. The federal government faces decisions about dedicating resources to fund ballast water technology research or standard setting, and ultimately about imposing more protective regulations. If such data were readily available, decision makers could weigh the costs of those activities against the potential costs of the next zebra mussel or sea lamprey to arrive in U.S. waters. Moving ahead with a comprehensive management plan to combat invasive species is clearly in the national interest. It also poses a daunting challenge. Success in this effort will depend in no small part on crafting a plan that calls for clearly defined, measurable outcomes and has a mechanism in place to hold departments accountable for carrying it out. The National Invasive Species Council now has the opportunity to improve upon its management plan in a revision due in 2003. Successful implementation of the plan depends in part on the members of the council making it a priority within their own departments and agencies and, recognizing the enormity of the task ahead, developing estimates of the resources needed. Statements from various stakeholders suggest it is possible that federal agencies could better coordinate their efforts to implement the management plan if the Congress established the council in legislation. The management plan states that the council will conduct an analysis of legislative authorities relevant to invasive species. We believe that the evaluation should also examine the question of whether the lack of legislative authority establishing the council is hampering the council in its efforts to implement the national management plan. To better manage the threats posed by invasive species in the United States, we recommend that the cochairs of the National Invasive Species Council—the Secretaries of Agriculture, Commerce, and the Interior—direct council members to: Include within the revision to the National Invasive Species Management Plan a goal of incorporating information on the economic impacts and relative risks of different invasive species or pathways when formulating a crosscutting invasive species management budget for the federal government. Such a goal may require a commitment from the council to ensure that adequate resources are dedicated within the federal government to expand the capacity for conducting appropriate economic analysis. Ensure that the updated version of the national management plan, due in January 2003, contains performance-oriented goals and objectives and specific measures of success.
Give a high priority to completing planned action #1, which calls for establishing a transparent oversight mechanism for use by federal agencies in complying with Executive Order 13112 and reporting on implementation of the management plan. Include in its planned evaluation of current legal authorities an examination of whether the lack of legislative authority establishing the National Invasive Species Council and specifically directing its members to implement the national management plan hampers the council’s efforts to implement the plan. To better ensure the implementation of the national management plan, we recommend that the members of the National Invasive Species Council who are responsible for taking actions called for in the plan recognize their responsibilities in either their departmental- or agency-level annual performance plans. The annual performance plans and performance reports should describe what steps the departments or their agencies will take or have taken to implement the actions that are specifically called for in the national management plan. For the existing (2001) version of the national management plan, the member departments to which this applies include the Departments of Agriculture, Commerce, Interior, Defense, State, and Transportation, and the Environmental Protection Agency. We provided copies of our draft report to the Departments of Agriculture, Commerce, Defense, Treasury, State, Transportation, and the Interior; the Environmental Protection Agency; the U.S. Trade Representative; and the National Invasive Species Council. We received written comments from the Department of the Interior, the Department of State, the Environmental Protection Agency, and the National Invasive Species Council. We received oral comments from the Departments of Transportation, Agriculture, and the Treasury. The written comments from the Department of the Interior, the Department of Agriculture, the National Invasive Species Council, and EPA are in appendixes II through V. The Department of the Interior concurred with the recommendations in the report and said that it would work with the other cochairs of the National Invasive Species Council to implement the recommendations in a timely manner consistent with current budget and authority. While agreeing with the recommendations, the department expressed the view that our draft report did not adequately acknowledge the extensive invasive species activities that federal agencies are undertaking outside of what is called for by the national management plan. We agree that federal agencies are engaged in other invasive species management activities and have described many of them in prior reports. A principal objective of this review, however, was to assess the implementation of the national management plan, not all federal activities. The department also commented that it believes that the Fish and Wildlife Service, the National Oceanic and Atmospheric Administration, and the Maritime Administration are demonstrating substantial progress in developing technologies to treat ballast water. We agree that progress is being made, but continue to believe that much important work remains to be done. To illustrate this, we reported the Coast Guard’s estimate that it may be at least 10 years before ships must meet a new performance standard for ballast water treatment, a step critical to real progress. The department suggested several other minor changes that we have incorporated where appropriate.
The Department of State commented that it did not fully concur with our finding that the slow progress on the national management plan is due to lack of priority given to the plan by the Council and departments. The department claimed that it places a high priority on accomplishing the goals of the management plan, and it itemized numerous activities in support of that statement. We do not disagree with the department’s claims. However, we did not evaluate the efforts or progress of one department versus another; instead, we evaluated implementation of the management plan overall. The letter from the Department of State also included comments from the International Joint Commission. The commission suggested that we include a recommendation that the federal government work with Canada to develop an effective approach to immediately improve the management of all ballast waters coming into the Great Lakes. Our report describes the current and expected situation with respect to ballast water in the Great Lakes. We believe that the decision to take more immediate action to solve the problem is a policy decision best left to the Congress or the administration. The commission also suggested that we ask the Congress to consider completing reauthorization of the National Invasive Species Act. While we recognize the importance of the commission’s suggestion, we did not evaluate the current proposal to reauthorize the act. The department and the commission also offered minor corrections, which we have made. The National Invasive Species Council concurred with our recommendations but made several clarifying comments. In particular, it noted that the management plan’s deadlines were optimistic and suggested that we should have evaluated whether the deadlines were realistic or attainable. We believe that an assessment of its deadlines is an appropriate task for the council when it revises the management plan. In addition, the council commented that the report undervalued the progress being made toward coordination and cooperation among federal agencies and gave examples of such activity. We acknowledge that coordination between departments has increased as a result of the creation of the council and the management plan, and we have added language to support this point. Nevertheless, the report provides support for the position that improvement can still be made in this area. Finally, the council made other minor comments that we have incorporated where appropriate. The Environmental Protection Agency commented that our recommendations were reasonable and believes that their implementation would enhance the federal government’s response to dealing with the problem of invasive species. The agency also noted that the report is well written and helpful in assessing the progress made in coping with invasive species. The agency also made several clarifying comments that we have incorporated where appropriate. The agency questioned whether we should have based our conclusions about the pace of implementation of the management plan solely on the results of our survey of the members of the first term of the advisory committee, given the small size of the population and their possible biases. We did not draw our conclusions about the pace of implementation solely, or even primarily, from the survey. Our statement that less than 20 percent of the plan has been implemented is based on our analysis of information from the National Invasive Species Council staff and the council’s member departments. 
EPA also noted that the report’s section on ballast water focused on the Great Lakes and pointed out that work is being done and needs to be done in other parts of the country. We agree that ballast water is an important issue in other parts of the country. However, our objective, as part of our coordinated review with the Canadian Office of the Auditor General, was to focus on the Great Lakes. Finally, EPA made a number of technical clarifications that we have incorporated, where appropriate, in the report. The invasive species coordinator for the Department of Agriculture said that our comments on the implementation of the national management plan were fair and on target. This official also provided two minor clarifying comments that we have incorporated. The Department of Transportation’s Director for Performance Planning in the Office of Budget and Program Performance provided oral comments on the draft. He told us that the department disagreed with our draft recommendation calling for the members of the National Invasive Species Council to incorporate the national management plan into their annual performance plans. He said that the department does not believe that it is appropriate to include performance goals with respect to invasive species in its performance plan because managing invasive species is not one of its core missions. In addition, he told us that the agencies within the department that have a more direct role with respect to invasive species, such as the Coast Guard, Maritime Administration, and Federal Highway Administration, are at liberty to include invasive species management goals in their annual performance plans. In response to this comment, we modified the wording of the recommendation to specify that the national management plan should be addressed in the most appropriate annual performance plan, whether at the departmental level or the agency level. The department also commented that there are many mechanisms other than ballast water by which invasive species are introduced into the environment. We agree, and noted some of them in the report. However, our objective specifically focused on the issue of ballast water in the Great Lakes. A representative with the Office of Planning in the Department of the Treasury's U.S. Customs Service told us that because the current national management plan does not call for the Customs Service to undertake significant activity on invasive species, it does not believe that it is appropriate for it to address the management plan in its annual performance plan as called for in our recommendation. We acknowledge that the current plan does not have action items directed to the Customs Service, and we modified our recommendation to clarify its applicability to those member agencies that are specifically responsible for action items in the existing (2001) national management plan. If future versions of the plan specify action items for other agencies, we would encourage them to follow the same practice with regard to their department- or agency-level annual performance plans. The Customs Service made no technical comments. We are sending copies of this report to the other members of the National Invasive Species Council: the Secretaries of State, Defense, Transportation, Health and Human Services, and Treasury, and the Administrators of the Environmental Protection Agency and the U.S. Agency for International Development. 
We are also sending copies of this report to the Chairmen and Ranking Minority Members of the following congressional committees: the Senate Committee on Agriculture, Nutrition, and Forestry; the Senate Committee on Commerce, Science, and Transportation; the Senate Committee on Environment and Public Works; the Senate Committee on Energy and Natural Resources; the Senate Committee on Foreign Relations; the Senate Committee on Appropriations; the House Committee on Agriculture; the House Committee on Resources; the House Committee on Science; the House Committee on Transportation and Infrastructure; the House Committee on Energy and Commerce; the House Committee on International Relations; and the House Committee on Appropriations. We will make copies available to others upon request. This report is also available on our Web site at www.gao.gov. If you have any questions concerning this report, I can be reached at (202) 512-6878. Major contributors to this report include Trish McClure, Ross Campbell, Patrick Sigl, Don Cowan, Anne Stevens, and Amy E. Webbink. To determine the usefulness to decision makers of economic impact studies for invasive species in the United States, we reviewed economics and other policy literature that analyzes invasive species’ effects on the U.S. economy and ecosystems. We also reviewed the literature that describes and evaluates U.S. regulatory policies for invasive species. We paid particular attention to the literature that evaluates how well cost-benefit analyses of invasive species’ effects, and of regulatory policies to control them, have been adjusted to reflect uncertainties and risks associated with these assessments. To further determine the usefulness of the existing studies, we selected and interviewed experts, including some authors of studies, and government officials involved in both authoring and using the economic impact studies. We identified these experts through our literature search. To assess the National Invasive Species Management Plan, including the extent to which the United States government has implemented it, we first analyzed the content of the plan in relation to the requirements spelled out in Executive Order 13112. In particular, we analyzed the extent to which it contained “performance-oriented goals and objectives and specific measures of success for federal agency efforts concerning invasive species.” The plan contains 57 enumerated actions. However, several of those actions have distinct subparts. In consultation with council staff, we agreed that there are a total of 86 distinct actions called for by the plan. To evaluate the extent to which the plan has been implemented, we focused primarily on those actions that had a start or completion date of September 2002 or earlier. There are 65 actions in that category. To determine whether actions had been completed, were in progress, or had not been started, we relied on the National Invasive Species Council’s summary of agency progress, materials provided to us by agency officials, and interviews with council staff and agency officials. For those actions that had been started but not completed, we did not attempt to characterize the extent to which they had been completed. In only a few instances did we attempt to determine when incomplete actions would be complete. To assist in our evaluation of the plan and our assessment of its implementation, we surveyed the 32 people serving on the Invasive Species Advisory Committee for a 2-year term beginning in December 1999.
We had several reasons for surveying this group: (1) they participated in developing the national management plan; (2) they represented a wide range of interests relevant to the invasive species issue; and (3) by virtue of their professions and their involvement with the committee, they were likely to have information and opinions on how the management plan was being implemented. The Secretary of the Interior reappointed 15 of these 32 people for another term on the advisory committee beginning in April 2002. One of the members of the original advisory committee told us that he had resigned from the committee partway through his term and did not believe that he was informed enough about events surrounding the council, the committee, or the management plan to respond to our survey. Therefore, for the purposes of calculating a response rate, we are using 31 as the size of our survey population. Twenty-one of the 31 members of the committee completed our survey, while 2 others completed a small portion of the survey. Therefore, while the response rate was 74 percent, the completion rate was 68 percent. Thirteen of the 15 people reappointed to the committee responded to the survey. The survey instrument contained questions that asked for either numerical or open-ended answers. The survey, including a tally of the numerical answers, is in appendix IV. Because we did not take a sample of the committee members, the numerical answers are presented as a straight percentage of the total number of respondents. There are no error rates associated with the results. We did not reprint the open-ended answers in the report because they are too numerous and lengthy. To determine the experts’ views on the adequacy of U.S. and Canadian efforts to control the introduction of invasive aquatic species into the Great Lakes via the ballast water of ships, we selected and interviewed experts from various stakeholder interests. We identified experts through a literature search and by soliciting the names of other expert contacts throughout our review. In the end, we contacted experts from U.S. federal agencies, academic institutions, and the shipping industry. We also met with staff from two binational agencies—the International Joint Commission and the Great Lakes Fishery Commission—and with representatives of the Great Lakes Commission. In addition, we attended a conference on aquatic nuisance species to obtain opinions from a range of stakeholders on ballast water and associated shipping vectors. To describe the current management of ballast water in the Great Lakes, we researched U.S. and Canadian legislation, regulations, and guidelines. In order to determine the compliance rate and effectiveness of the current regulatory regime for the Great Lakes, we obtained compliance and other data from the Coast Guard Marine Safety Detachment and the Saint Lawrence Seaway Development Corporation in Massena, New York. The Saint Lawrence Seaway Development Corporation also showed us the U.S. ballast water inspection procedures on a vessel docked in Montreal, Canada, and bound for the Great Lakes. We also reviewed studies on the introduction of nonnative aquatic organisms traced to ballast water, paying particular attention to those that have invaded after the ballast water regulations for vessels entering the Great Lakes took effect in 1993. We interviewed both United States and Canadian scientists on the significance of the continued invasions since 1993. 
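The survey response and completion rates reported above follow directly from the counts given in the methodology. The following minimal sketch (not part of the report) illustrates the arithmetic, under the assumption that the response rate counts both complete and partial responses while the completion rate counts only complete responses:

```python
# Minimal sketch of the survey-rate arithmetic described in the methodology above.
# Assumption (not stated verbatim in the report): the response rate counts both
# complete and partial responses; the completion rate counts only complete ones.

population = 31   # advisory committee members counted in the survey population
completed = 21    # members who completed the survey
partial = 2       # members who completed only a small portion of the survey

response_rate = (completed + partial) / population   # about 0.74
completion_rate = completed / population             # about 0.68

print(f"Response rate:   {response_rate:.0%}")    # 74%
print(f"Completion rate: {completion_rate:.0%}")  # 68%
```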
For the international perspective on ballast water management, we reviewed the history and development of the current International Maritime Organization policies and guidelines. We also met with members of the U.S. delegation to the organization to determine the status of negotiations on a future international agreement related to ballast water. These officials represent the United States on the Marine Environment Protection Committee and lead the correspondence group that is tasked with developing a performance standard for the future International Maritime Organization Convention on ballast water management. To describe coordination between the United States and Canada, we interviewed officials from departments in the National Invasive Species Council to determine if their departments were involved in any significant efforts to coordinate with Canadian officials on invasive species management. From these discussions, we learned that coordination efforts on a binational (or in some cases trinational) level have focused primarily on shared boundary waters and agriculture. We obtained further information from the relevant departments on the nature of those coordination efforts. To learn more about how nonfederal organizations can play a role in coordinating the work of the two countries, we interviewed and obtained documents from officials representing the International Joint Commission, the Great Lakes Fishery Commission, the Great Lakes Commission, and the North American Commission on Environmental Cooperation. We also obtained documentation that described relevant work being done under the International Plant Protection Convention and by the North American Plant Protection Organization, the North American Animal Health Committee, and the Trilateral Committee for Wildlife and Ecosystem Conservation and Management. Finally, we relied on previous GAO work on foot-and-mouth disease. In choosing invasive species to profile, we judgmentally selected species that (1) illustrate problems in a variety of environments (aquatic, terrestrial, managed, and natural areas), (2) are drawn from a wide variety of taxonomic groups (vertebrate, invertebrate, virus, and plant), (3) include some that are well known by the public and others that are not, and (4) provide a selection whose distribution collectively covers a large portion of the United States. We collected and reviewed data on the species from federal agencies, academic institutions, and previous GAO reports. We obtained photographs of species from the U.S. Geological Survey and USDA. We conducted our review from November 2001 through September 2002 in accordance with generally accepted government auditing standards.
Harmful invasive species--nonnative plants and animals that are spreading throughout the United States--have caused billions of dollars in damage to natural areas, businesses, and consumers. In 2001, the federal government issued a National Invasive Species Management Plan to focus attention on invasive species and coordinate a national control effort involving the 20 or so federal agencies that are responsible for managing them. This report discusses the economic impacts of invasive species, implementation of the management plan, and coordination of U.S. and Canadian efforts to control invasive species, including those introduced to the Great Lakes via the ballast water of ships. Existing literature on the economic impacts of invasive species is of limited usefulness to decision makers, although it indicates that the effects of invasive species are significant. Most economic estimates do not consider all of the relevant effects of nonnative species or the future risks that they pose. New initiatives may prompt more comprehensive analysis that could help decision makers make better resource allocations. While the National Invasive Species Management Plan calls for many actions that are likely to contribute to preventing and controlling invasive species in the United States, it does not clearly articulate specific long-term goals toward which the government should strive. In addition, the federal government has made little progress in implementing the actions called for by the plan. Even with high levels of compliance, U.S. regulations have not eliminated the introduction of invasive species into the Great Lakes via the ballast water of ships. The United States and Canada are working on strengthening the existing control system, but developing stronger regulations and the technology needed to meet them will take many years. The continued introduction of invasive species could have high economic and ecological costs for the Great Lakes.